Commit Graph

12 Commits (3841690dbf100f89875a6cac0fed7eb79eea62fa)

Author SHA1 Message Date
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Jeff Grafton 20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
Mike Danese 161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Mike Danese c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Mike Danese 3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00
Harry Zhang cb14b35bde Refactor util clock into its own pkg 2016-07-28 02:29:04 -04:00
David McMahon ef0c9f0c5b Remove "All rights reserved" from all the headers. 2016-06-29 17:47:36 -07:00
Random-Liu 4cca5b2290 Use fake clock in TestGetPodsToSync to fix flake. 2016-05-02 16:05:36 -07:00
laushinka 7ef585be22 Spelling fixes inspired by github.com/client9/misspell 2016-02-18 06:58:05 +07:00
Daniel Smith 4a7d70aef1 extend fake clock 2016-02-01 15:36:15 -08:00
Yu-Ju Hong 4ab505606b Always overwrite items in kubelet's work queue
This allows kubelet to change the next sync time based on the last result.
2016-01-12 16:25:19 -08:00
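
The overwrite behavior this commit describes is easy to sketch. Below is a minimal, hypothetical Go illustration (not the kubelet's actual queue; the workQueue type and its methods are assumptions made for the example) of a queue whose Enqueue replaces any existing entry, so the most recently requested sync time always wins:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// workQueue is a hypothetical sketch of a kubelet-style work queue keyed by
// pod UID. Enqueue always overwrites any existing entry, which is what lets
// the kubelet move a pod's next sync earlier or later based on the last
// sync's result.
type workQueue struct {
	mu    sync.Mutex
	queue map[string]time.Time // pod UID -> time the pod is next due
}

func newWorkQueue() *workQueue {
	return &workQueue{queue: map[string]time.Time{}}
}

// Enqueue records when uid is next due. An existing entry is overwritten
// rather than kept, so a retry-sooner request replaces the regular interval.
func (q *workQueue) Enqueue(uid string, delay time.Duration) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.queue[uid] = time.Now().Add(delay)
}

// GetWork returns (and removes) all pods whose due time has passed.
func (q *workQueue) GetWork() []string {
	q.mu.Lock()
	defer q.mu.Unlock()
	var due []string
	now := time.Now()
	for uid, t := range q.queue {
		if !t.After(now) {
			due = append(due, uid)
			delete(q.queue, uid)
		}
	}
	return due
}

func main() {
	q := newWorkQueue()
	q.Enqueue("pod-a", 10*time.Second) // regular sync interval
	q.Enqueue("pod-a", 0)              // error path: overwrite to retry now
	fmt.Println(q.GetWork())           // [pod-a]
}
```

Overwriting rather than keeping the earliest entry means a later, better-informed decision (for example, retrying sooner after a failed sync) takes precedence over a previously scheduled time.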
Yu-Ju Hong 2eb17df46b kubelet: independent pod syncs and backoff on error
Currently kubelet syncs all pods every 10s. This is not preferred because
 * Some pods may have been sync'd recently.
 * This may cause all the pods to be sync'd at once, causing undesirable
   CPU spikes.

This PR replaces the global syncs with independent, periodic pod syncs. At the
end of syncing, each pod worker will enqueue itself with a future timestamp
(current time + sync interval), when it will be due for another sync.
 * If the pod worker encounters a sync error, it may requeue with a different
   timestamp to retry sooner.
 * If a sync is triggered by the update channel (events or spec changes), the
   pod worker would enqueue a new sync time.

This change is necessary for moving to a long (or no) periodic sync period once
the pod lifecycle event generator is completed. We will still rely on this
mechanism to requeue the pod on sync error.

This change also makes sure that if a sync does not succeed (either due to a
real error or the per-container backoff mechanism), an error is propagated
back to the pod worker, which is responsible for requeuing.
2015-11-03 13:29:08 -08:00
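
As a rough illustration of the requeue pattern this commit message describes, here is a hedged Go sketch. completeWork, the interval constants, and the enqueue callback are all hypothetical names invented for the example, not the kubelet's real API: after every sync attempt the worker requeues the pod with a future timestamp, and a failed sync requeues sooner than the regular interval.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// enqueueFunc stands in for a work queue's Enqueue (see the sketch above);
// here it just prints the decision so the example runs standalone.
type enqueueFunc func(uid string, delay time.Duration)

const (
	syncInterval = 10 * time.Second // illustrative regular per-pod sync period
	backoffDelay = 1 * time.Second  // illustrative shorter retry after an error
)

// completeWork decides when the pod is next due, based on the last result:
// a sync error leads to an earlier retry instead of a full interval wait.
func completeWork(enqueue enqueueFunc, uid string, syncErr error) {
	if syncErr != nil {
		enqueue(uid, backoffDelay)
		return
	}
	enqueue(uid, syncInterval)
}

func main() {
	enqueue := func(uid string, delay time.Duration) {
		fmt.Printf("pod %s due again at %s\n",
			uid, time.Now().Add(delay).Format(time.RFC3339))
	}
	completeWork(enqueue, "pod-a", nil)                       // regular requeue
	completeWork(enqueue, "pod-b", errors.New("sync failed")) // retry sooner
}
```

Because each pod schedules its own next sync, sync times naturally spread out instead of all pods becoming due at once, which addresses the CPU-spike concern raised in the commit message.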