When reaping a replication controller that manages an arbitrary number of replicas, missing a polling interval means that `kubectl stop|delete` has to wait 3 more seconds to finish, which, from a user's point of view, seems more expensive than issuing a GET every 100ms.
Also, I think `kubectl resize` should wait for all replicas to spin up before exiting. It feels safer to know that your replication controller's status is up to date.
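
Below is a minimal Go sketch of the tighter polling loop being argued for: check the controller's observed replica count every 100ms with an overall timeout, rather than waiting out a multi-second interval. The names here (`getReplicaCount`, `waitForReplicas`) are illustrative placeholders, not the actual kubectl code.

```go
package main

import (
	"fmt"
	"time"
)

// getReplicaCount stands in for a GET against the API server that returns
// the controller's currently observed replica count (placeholder, not the
// real kubectl client call).
func getReplicaCount(name string) (int, error) {
	return 0, nil
}

// waitForReplicas polls every `interval` until the controller reports the
// desired replica count or the timeout expires.
func waitForReplicas(name string, desired int, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		current, err := getReplicaCount(name)
		if err != nil {
			return err
		}
		if current == desired {
			return nil
		}
		// A 100ms interval means at most ~100ms of extra wait after the
		// controller converges, instead of several seconds.
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s to reach %d replicas", name, desired)
}

func main() {
	err := waitForReplicas("frontend", 0, 100*time.Millisecond, 30*time.Second)
	if err != nil {
		fmt.Println(err)
	}
}
```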
Use custom narrowly scoped interfaces for client access from the
RollingUpdater and Resizer. This allows for more flexible downstream
integration and unit testing without imposing the burden of implementing
the entire client.Interface for just a handful of methods.
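
A minimal sketch of the narrow-interface idea, assuming illustrative type names (`ResizerClient`, `ReplicationController`) rather than the real kubectl types: the resize logic depends only on the two methods it actually needs, so a unit test can satisfy the interface with a tiny fake instead of implementing the full client.Interface.

```go
package main

import "fmt"

// ReplicationController is a minimal stand-in for the API object.
type ReplicationController struct {
	Name     string
	Replicas int
}

// ResizerClient is the narrowly scoped interface: only what resizing needs,
// rather than the entire client surface.
type ResizerClient interface {
	GetController(name string) (*ReplicationController, error)
	UpdateController(rc *ReplicationController) (*ReplicationController, error)
}

// Resize depends solely on ResizerClient, so any small implementation works.
func Resize(c ResizerClient, name string, replicas int) error {
	rc, err := c.GetController(name)
	if err != nil {
		return err
	}
	rc.Replicas = replicas
	_, err = c.UpdateController(rc)
	return err
}

// fakeClient shows how cheaply a unit test can satisfy the interface.
type fakeClient struct{ rc ReplicationController }

func (f *fakeClient) GetController(name string) (*ReplicationController, error) {
	return &f.rc, nil
}

func (f *fakeClient) UpdateController(rc *ReplicationController) (*ReplicationController, error) {
	f.rc = *rc
	return &f.rc, nil
}

func main() {
	f := &fakeClient{rc: ReplicationController{Name: "frontend", Replicas: 1}}
	if err := Resize(f, "frontend", 3); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("replicas:", f.rc.Replicas) // replicas: 3
}
```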
When kubectl does rolling updates of replication controllers, retry updates that
fail due to version mismatches (caused by concurrent updates by other clients).
These failed rolling updates were causing intermittent e2e test failures
(e.g. issue 5821).
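
A minimal sketch of the retry-on-conflict pattern described above, using placeholder names (`updateWithRetry`, `ErrConflict`) rather than the real kubectl/apimachinery helpers: when an update fails with a version mismatch, re-fetch the latest object, reapply the change, and try the update again.

```go
package main

import (
	"errors"
	"fmt"
)

// ReplicationController is a minimal stand-in for the API object.
type ReplicationController struct {
	Name     string
	Replicas int
}

// ErrConflict stands in for the API server's version-mismatch error.
var ErrConflict = errors.New("resource version conflict")

// updateWithRetry re-reads the controller and reapplies mutate until the
// update succeeds, a non-conflict error occurs, or retries are exhausted.
func updateWithRetry(
	get func() (*ReplicationController, error),
	update func(*ReplicationController) error,
	mutate func(*ReplicationController),
	retries int,
) error {
	var err error
	for i := 0; i < retries; i++ {
		rc, getErr := get()
		if getErr != nil {
			return getErr
		}
		mutate(rc)
		err = update(rc)
		if err == nil || !errors.Is(err, ErrConflict) {
			// Success, or a failure that retrying will not fix.
			return err
		}
		// Conflict: another client updated the controller concurrently;
		// loop and retry against a freshly fetched copy.
	}
	return fmt.Errorf("giving up after %d retries: %w", retries, err)
}

func main() {
	server := ReplicationController{Name: "frontend", Replicas: 2}
	conflictOnce := true

	get := func() (*ReplicationController, error) {
		c := server
		return &c, nil
	}
	update := func(rc *ReplicationController) error {
		if conflictOnce {
			conflictOnce = false
			return ErrConflict // simulate a concurrent write by another client
		}
		server = *rc
		return nil
	}

	err := updateWithRetry(get, update,
		func(rc *ReplicationController) { rc.Replicas = 5 }, 3)
	fmt.Println(err, server.Replicas) // <nil> 5
}
```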