In many cases clients may wish to view the not-ready addresses for
endpoints in order to determine set membership before a pod becomes
ready. For instance, a pod that uses the service endpoints to connect to
other pods under the same service may not want to signal ready until it
has contacted at least a minimal number of those other pods.
This is backwards compatible with old servers and clients. There is an
additional cost in the size of the Endpoints objects while services ramp
up, which adds minor CPU and memory overhead for services that have a
significant number of pods that have not yet become ready.
Add an UpdateAcceptor interface to the rolling updater that supports
injecting code to validate the first replica during scale-up. If the
replica is not accepted, the deployment fails. This facilitates canary
checking so that many broken replicas aren't rolled out during an update.
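A minimal sketch of the acceptor hook idea; the method signature and the
HTTP health probe below are illustrative assumptions rather than the
actual kubectl interface.

```go
package rollingupdate

import (
	"fmt"
	"net/http"

	v1 "k8s.io/api/core/v1"
)

// UpdateAcceptor decides whether the first scaled-up replica of the new
// replication controller is healthy enough for the rollout to continue.
type UpdateAcceptor interface {
	// Accept returns an error when the new controller should be rejected;
	// the rolling update then fails instead of rolling out more replicas.
	Accept(rc *v1.ReplicationController) error
}

// httpCanaryAcceptor is a hypothetical acceptor that probes a health
// endpoint exposed by the first new pod before accepting the rollout.
type httpCanaryAcceptor struct {
	healthURL string
}

func (a *httpCanaryAcceptor) Accept(rc *v1.ReplicationController) error {
	resp, err := http.Get(a.healthURL)
	if err != nil {
		return fmt.Errorf("canary for %s not reachable: %v", rc.Name, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("canary for %s unhealthy: %s", rc.Name, resp.Status)
	}
	return nil
}
```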
Make the rolling update scale amount configurable as a percentage of the
replica count; a negative value reverses the scale direction (scale down
first, then up) to support in-place deployments.
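A sketch of the arithmetic this implies, using a hypothetical helper name:
a percentage of the target replica count becomes the per-step scale
amount, and a negative percentage flips the direction.

```go
package rollingupdate

import "math"

// scaleStep converts a configured update percentage into a per-step scale
// amount. For 10 replicas, 20 yields steps of +2 (scale the new controller
// up first); -20 yields steps of -2 (scale the old controller down first),
// which keeps the total pod count flat for in-place deployments.
func scaleStep(replicas, updatePercent int) int {
	step := int(math.Ceil(float64(replicas) * math.Abs(float64(updatePercent)) / 100.0))
	if step < 1 {
		step = 1 // always make some progress per iteration
	}
	if updatePercent < 0 {
		return -step
	}
	return step
}
```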
* Support configurable cleanup policies in RollingUpdater. Downstream
library consumers don't necessarily have the same rules for
post-deployment cleanup; making the behavior policy-driven is more
flexible (see the sketch after this list).
* Refactor RollingUpdater to accept a config object during Update instead
of a long argument list.
* Add test coverage for cleanup policy.
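A minimal sketch of a policy-driven configuration, with type and constant
names chosen for illustration rather than taken from the actual kubectl
code:

```go
package rollingupdate

import "time"

// CleanupPolicy controls what happens to the old replication controller
// once the rolling update has completed.
type CleanupPolicy string

const (
	// DeleteCleanupPolicy removes the old controller after the update.
	DeleteCleanupPolicy CleanupPolicy = "Delete"
	// PreserveCleanupPolicy keeps the old controller around, scaled to zero,
	// so downstream consumers can apply their own post-deployment rules.
	PreserveCleanupPolicy CleanupPolicy = "Preserve"
)

// Config gathers the parameters for a single rolling update so Update
// takes one struct instead of a long positional argument list.
type Config struct {
	OldName       string
	NewName       string
	UpdatePeriod  time.Duration
	Interval      time.Duration
	Timeout       time.Duration
	CleanupPolicy CleanupPolicy
}
```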
Use custom, narrowly scoped interfaces for client access from the
RollingUpdater and Resizer. This allows more flexible downstream
integration and unit testing without imposing the burden of implementing
the entire client.Interface for just a handful of methods.
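For example, rather than demanding the whole client interface, the updater
could declare only the calls it needs; the interface and method names
below are hypothetical:

```go
package rollingupdate

import v1 "k8s.io/api/core/v1"

// rcClient is the narrow slice of client functionality the rolling updater
// and resizer actually use. A test fake only needs these three methods,
// not the entire client surface.
type rcClient interface {
	GetReplicationController(namespace, name string) (*v1.ReplicationController, error)
	UpdateReplicationController(namespace string, rc *v1.ReplicationController) (*v1.ReplicationController, error)
	DeleteReplicationController(namespace, name string) error
}
```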
When kubectl does rolling updates of replication controllers, retry updates
that fail due to version mismatches (caused by concurrent updates by other
clients). These failed rolling updates were causing intermittent e2e test
failures (e.g. issue 5821).
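The retry pattern can be sketched with client-go's conflict-retry helper;
the function name and the placeholder mutation below are assumptions for
illustration.

```go
package rollingupdate

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateWithRetry re-reads the replication controller on every attempt, so
// a version-mismatch conflict from a concurrent writer is resolved by
// retrying against the latest resourceVersion rather than failing the update.
func updateWithRetry(client kubernetes.Interface, namespace, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rc, err := client.CoreV1().ReplicationControllers(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Placeholder mutation: a real rolling update would adjust the
		// selector or the replica count here before writing back.
		if rc.Annotations == nil {
			rc.Annotations = map[string]string{}
		}
		rc.Annotations["example/update-attempt"] = "true"
		_, err = client.CoreV1().ReplicationControllers(namespace).Update(context.TODO(), rc, metav1.UpdateOptions{})
		return err
	})
}
```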