This feature is no longer useful now that pods don't sync as often. For batch
creations/deletions/syncs, the cache will be up-to-date for most pods since it
will be updated frequently. For other cases, continuing to update for two more
seconds doesn't usually help, as temporal locality doesn't hold across pod syncs.
If a pod worker sees stale pods from the runtime cache, i.e. pods that were
retrieved before its last sync finished, it may think that the pod was not
started correctly and attempt to fix that by killing/restarting containers.
There are two issues that may cause runtime cache to store stale pods:
1. The timestamp is recorded *after* getting the pods from the container
   runtime. This may lead the consumer to think the pods are newer than they
   actually are.
2. The cache updates are triggered by many goroutines (pod workers, and the
   updating thread). There is no mechanism to enforce that the cache is only
   updated with newer pods.
This change fixes the above two issues by always recording the timestamp
before getting pods from the container runtime, and updating the cached pods
only if that timestamp is newer than the cached one.
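
Below is a minimal Go sketch of the fixed pattern, not the actual kubelet
code; the Pod type, Runtime interface, and fakeRuntime are hypothetical
stand-ins for the real kubelet types.

// Package main sketches the fixed caching pattern: record the timestamp
// before fetching pods, and accept the fetched pods only if that timestamp
// is newer than the cached one.
package main

import (
	"fmt"
	"sync"
	"time"
)

// Pod is a hypothetical stand-in for the kubelet's pod representation.
type Pod struct{ ID string }

// Runtime is a hypothetical stand-in for the container runtime client.
type Runtime interface {
	GetPods() ([]*Pod, error)
}

// runtimeCache stores the last pod list and the time the fetch *started*.
type runtimeCache struct {
	sync.Mutex
	runtime   Runtime
	pods      []*Pod
	cacheTime time.Time
}

// GetPodsNewerThan returns cached pods, refreshing them first if the cache
// is older than minTime (e.g. the start of a pod worker's previous sync).
func (c *runtimeCache) GetPodsNewerThan(minTime time.Time) ([]*Pod, error) {
	if err := c.updateIfOlder(minTime); err != nil {
		return nil, err
	}
	c.Lock()
	defer c.Unlock()
	return c.pods, nil
}

// updateIfOlder refreshes the cache when it is older than minTime. Many
// goroutines may call it concurrently; the timestamp check below guarantees
// that a slow fetch never overwrites a newer snapshot.
func (c *runtimeCache) updateIfOlder(minTime time.Time) error {
	c.Lock()
	fresh := !c.cacheTime.Before(minTime)
	c.Unlock()
	if fresh {
		return nil
	}

	// Issue 1 fix: take the timestamp *before* the (possibly slow) fetch,
	// so consumers never think the pods are newer than they actually are.
	start := time.Now()
	pods, err := c.runtime.GetPods()
	if err != nil {
		return err
	}

	c.Lock()
	defer c.Unlock()
	// Issue 2 fix: only update the cache if this snapshot is newer.
	if start.After(c.cacheTime) {
		c.pods = pods
		c.cacheTime = start
	}
	return nil
}

// fakeRuntime returns a fixed pod list; it only makes the sketch runnable.
type fakeRuntime struct{}

func (fakeRuntime) GetPods() ([]*Pod, error) {
	return []*Pod{{ID: "pod-1"}}, nil
}

func main() {
	cache := &runtimeCache{runtime: fakeRuntime{}}
	pods, err := cache.GetPodsNewerThan(time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pods), "pod(s) in cache, fetched at", cache.cacheTime)
}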