Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, which allows specifying the number of seconds to wait on top of a pod being "ready" (i.e. its readiness probe has passed) before the pod counts as available.
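A minimal sketch of what such an availability check can look like; the types and helper below are simplified stand-ins rather than the actual RollingUpdater code:

```go
package main

import (
	"fmt"
	"time"
)

// podReadyInfo is a simplified stand-in for the readiness data the updater
// would read from a pod's Ready condition.
type podReadyInfo struct {
	Ready              bool
	LastTransitionTime time.Time // when the pod last became ready
}

// isAvailable reports whether a pod has been ready for at least
// minReadySeconds, i.e. the extra wait this change introduces.
func isAvailable(p podReadyInfo, minReadySeconds int32, now time.Time) bool {
	if !p.Ready {
		return false
	}
	return !now.Before(p.LastTransitionTime.Add(time.Duration(minReadySeconds) * time.Second))
}

func main() {
	p := podReadyInfo{Ready: true, LastTransitionTime: time.Now().Add(-5 * time.Second)}
	fmt.Println(isAvailable(p, 10, time.Now())) // false: ready, but not for 10s yet
}
```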
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to roll to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
Due to rounding down for maxUnavailable, we may end up with rolling updates
that have zero surge and zero unavailable pods, something that 1) is not
allowed per validation and 2) blocks updates. If we end up in such a
situation, set maxUnavailable to 1 on the theory that surge might not work
due to quota.
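A simplified sketch of the guard; the real resolution logic works on IntOrString values in the deployment utilities, while the helper below only illustrates the rounding and the fallback:

```go
package main

import (
	"fmt"
	"math"
)

// resolveFenceposts is a simplified sketch: surge rounds up, unavailable
// rounds down, and if both resolve to zero, maxUnavailable is bumped to 1
// so the update can still make progress (surge might be blocked by quota).
func resolveFenceposts(surgePct, unavailPct float64, desired int32) (surge, unavailable int32) {
	surge = int32(math.Ceil(surgePct / 100 * float64(desired)))
	unavailable = int32(math.Floor(unavailPct / 100 * float64(desired)))
	if surge == 0 && unavailable == 0 {
		unavailable = 1
	}
	return surge, unavailable
}

func main() {
	// With 0% surge and 10% unavailable on 3 replicas, rounding down would
	// yield 0/0 without the guard; the fallback makes it 0/1.
	fmt.Println(resolveFenceposts(0, 10, 3)) // 0 1
}
```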
During a rolling update for Deployments, the total count of surge pods
is calculated by adding the desired number of pods (deployment.Spec.Replicas)
to maxSurge. During a kubectl rolling update, the total count of surge
pods is calculated by adding the original number of pods (oldRc.Spec.Replicas
via an annotation) to maxSurge. This commit changes the kubectl calculation
to use the desired replica count as well.
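In other words, the ceiling on total pods during a kubectl rolling update becomes desired + maxSurge; a tiny illustrative sketch, not the actual updater code:

```go
package main

import "fmt"

// maxTotalPods sketches the change: the ceiling on total pods during a
// kubectl rolling update is now desired + maxSurge, where before it was
// computed from the original (oldRc) replica count recorded in an annotation.
func maxTotalPods(desiredReplicas, maxSurge int32) int32 {
	return desiredReplicas + maxSurge
}

func main() {
	// Scaling from 10 original replicas down to 5 desired with maxSurge=1:
	// the cap during the update is now 6 rather than 11.
	fmt.Println(maxTotalPods(5, 1))
}
```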
Support a desired replica count of 0 for the new RC. Users sometimes
want to roll out a new "inactive" template with the intent of scaling
it up manually later.
Rolling back from a broken update with only one replica fails with a
timeout in the existing code.
The problem is that the scale-down logic does not consider unavailable
replicas in the old replication controller when calculating how much to
scale down by. This leads to an obvious problem with a single replica
when min unavailable is 1.
The fix is to allow scaling down all unavailable replicas in the old
controller, while still maintaining the min unavailable invariant.
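A simplified sketch of that fix, with hypothetical names and counts standing in for the real updater state:

```go
package main

import "fmt"

// scaleDownAmount sketches the fix: unavailable old replicas can always be
// removed (they contribute nothing to availability), and on top of that we
// may remove as many available replicas as the minimum-availability
// invariant allows.
func scaleDownAmount(oldReplicas, oldUnavailable, totalAvailable, minAvailable int32) int32 {
	// Available replicas we are allowed to give up without dropping below
	// the minimum.
	spare := totalAvailable - minAvailable
	if spare < 0 {
		spare = 0
	}
	amount := oldUnavailable + spare
	if amount > oldReplicas {
		amount = oldReplicas
	}
	return amount
}

func main() {
	// One old replica that is unavailable (the broken update), with one
	// available replica elsewhere keeping the total at the minimum of 1:
	// the broken old replica can now be scaled away instead of timing out.
	fmt.Println(scaleDownAmount(1, 1, 1, 1)) // 1
}
```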
Accept codec as parameter to CreateNewControllerFromCurrentController function. Add tests for performing a rolling update on a container in a multi-container pod.
All external types that are not int64 are now marked as int32,
including
IntOrString. Prober is now int32 (43 years should be enough of an initial
probe time for anyone).
Did not change the metadata fields for now.
Improve the rolling updater rollback/abort function by making it aware
of the original replicas annotation: if the rollback target has the
original replica count recorded, prefer it over the desired annotation
since the update from old to new could have been asymmetrical.
For example, when scaling from 5 to 10, aborting should scale back to 5.
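A sketch of the preference order; the annotation key names below are assumptions based on the kubectl annotation prefix, and the helper is illustrative rather than the actual rollback code:

```go
package main

import (
	"fmt"
	"strconv"
)

// Annotation keys, shown here only for illustration.
const (
	originalReplicasAnnotation = "kubectl.kubernetes.io/original-replicas"
	desiredReplicasAnnotation  = "kubectl.kubernetes.io/desired-replicas"
)

// rollbackTargetSize sketches the preference: if the rollback target recorded
// its original replica count, use that; otherwise fall back to the desired
// annotation. This matters when the update was asymmetrical (e.g. 5 -> 10).
func rollbackTargetSize(annotations map[string]string) (int, bool) {
	if v, ok := annotations[originalReplicasAnnotation]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n, true
		}
	}
	if v, ok := annotations[desiredReplicasAnnotation]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n, true
		}
	}
	return 0, false
}

func main() {
	ann := map[string]string{
		originalReplicasAnnotation: "5",
		desiredReplicasAnnotation:  "10",
	}
	n, _ := rollbackTargetSize(ann)
	fmt.Println(n) // 5: aborting a 5 -> 10 update scales back to 5
}
```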
In many cases clients may wish to view the not-ready addresses for endpoints
in order to establish set membership before a pod becomes ready. For
instance, a pod may use the service endpoints to connect to other pods under
the same service, but not want to signal ready before it has contacted at
least a minimal number of other pods.
This is backwards compatible with old servers and clients. There is an
additional cost in the size of the endpoints object before services ramp up,
which will add minor CPU and memory use for services that have a significant
number of pods which have not yet become ready.
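A sketch of how a peer-discovery client could use both address lists; the structs here are simplified local stand-ins for the endpoints API types, not the real ones:

```go
package main

import "fmt"

// Simplified stand-ins for the endpoints API shapes discussed above.
type endpointAddress struct{ IP string }

type endpointSubset struct {
	Addresses         []endpointAddress // ready
	NotReadyAddresses []endpointAddress // present but not yet ready
}

// members returns every address in the subset, ready or not, which is what a
// peer-discovery client would use to see the full set before pods turn ready.
func members(s endpointSubset) []string {
	var ips []string
	for _, a := range s.Addresses {
		ips = append(ips, a.IP)
	}
	for _, a := range s.NotReadyAddresses {
		ips = append(ips, a.IP)
	}
	return ips
}

func main() {
	s := endpointSubset{
		Addresses:         []endpointAddress{{IP: "10.0.0.1"}},
		NotReadyAddresses: []endpointAddress{{IP: "10.0.0.2"}, {IP: "10.0.0.3"}},
	}
	fmt.Println(members(s)) // all three pods are visible for membership
}
```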
Add an UpdateAcceptor interface to the rolling updater which supports
injecting code to validate the first replica during scale-up. If the
replica is not accepted, the deployment fails. This facilitates canary
checking so that many broken replicas aren't rolled out during an update.
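A sketch of what such an injection point can look like; the interface name matches the description above, but the method signature and surrounding types are assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// controller is a minimal stand-in for a replication controller.
type controller struct{ Name string }

// UpdateAcceptor is a sketch of the injection point: it inspects the first
// scaled-up replica (via its controller) and returns an error to reject it.
type UpdateAcceptor interface {
	Accept(rc *controller) error
}

// alwaysReject is a toy acceptor used to show the canary flow.
type alwaysReject struct{}

func (alwaysReject) Accept(rc *controller) error {
	return errors.New("canary replica failed validation")
}

// scaleUpFirstReplica sketches where the acceptor hooks in: after the first
// replica is created, the acceptor decides whether the rollout may continue.
func scaleUpFirstReplica(rc *controller, acceptor UpdateAcceptor) error {
	// ... scale rc up by one replica and wait for it here ...
	if err := acceptor.Accept(rc); err != nil {
		return fmt.Errorf("update rejected for %s: %v", rc.Name, err)
	}
	return nil
}

func main() {
	err := scaleUpFirstReplica(&controller{Name: "app-v2"}, alwaysReject{})
	fmt.Println(err)
}
```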
Make the rolling update scale amount configurable as a percent of the replica
count; a negative value reverses the scale direction to down-then-up to
support in-place deployments.
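A small illustrative sketch of that behavior, under the assumption that the percentage is interpreted per scaling step:

```go
package main

import (
	"fmt"
	"math"
)

// scaleStep sketches the configurable scale amount: the step is a percentage
// of the replica count, and a negative percentage flips the order to
// scale-down-then-up for in-place style deployments.
func scaleStep(replicas int, updatePercent int) (step int, downFirst bool) {
	step = int(math.Ceil(float64(replicas) * math.Abs(float64(updatePercent)) / 100))
	if step < 1 {
		step = 1
	}
	return step, updatePercent < 0
}

func main() {
	fmt.Println(scaleStep(10, 25))  // 3 false: scale up by roughly 25% at a time
	fmt.Println(scaleStep(10, -25)) // 3 true: same step size, but scale down first
}
```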
* Support configurable cleanup policies in RollingUpdater (see the sketch
after this list). Downstream library consumers don't necessarily have the
same rules for post-deployment cleanup; making the behavior policy-driven is
more flexible.
* Refactor RollingUpdater to accept a config object during Update instead
of a long argument list.
* Add test coverage for cleanup policy.
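A sketch of the shape this could take; the policy constants mirror the delete/preserve/rename behaviors and the config fields are illustrative, not the exact RollingUpdaterConfig:

```go
package main

import (
	"fmt"
	"time"
)

// RollingUpdaterCleanupPolicy makes post-update cleanup policy driven.
type RollingUpdaterCleanupPolicy string

const (
	// DeleteRollingUpdateCleanupPolicy deletes the old controller after the update.
	DeleteRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "Delete"
	// PreserveRollingUpdateCleanupPolicy keeps the old controller around, scaled to zero.
	PreserveRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "Preserve"
	// RenameRollingUpdateCleanupPolicy deletes the old controller and renames
	// the new one to take its place.
	RenameRollingUpdateCleanupPolicy RollingUpdaterCleanupPolicy = "Rename"
)

// RollingUpdaterConfig sketches the config-object refactor: one struct passed
// to Update instead of a long positional argument list.
type RollingUpdaterConfig struct {
	OldName       string
	NewName       string
	UpdatePeriod  time.Duration
	Timeout       time.Duration
	CleanupPolicy RollingUpdaterCleanupPolicy
}

func main() {
	cfg := RollingUpdaterConfig{
		OldName:       "app-v1",
		NewName:       "app-v2",
		UpdatePeriod:  3 * time.Second,
		Timeout:       5 * time.Minute,
		CleanupPolicy: PreserveRollingUpdateCleanupPolicy,
	}
	fmt.Printf("updating %s -> %s with cleanup=%s\n", cfg.OldName, cfg.NewName, cfg.CleanupPolicy)
}
```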
Use custom narrowly scoped interfaces for client access from the
RollingUpdater and Resizer. This allows for more flexible downstream
integration and unit testing without imposing a burden to implement
the entire client.Interface for just a handful of methods.
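A sketch of the idea; the interface name and method set below are assumptions chosen to show the narrow surface and how easily it can be faked in tests:

```go
package main

import "fmt"

// controller is a minimal stand-in for a replication controller.
type controller struct {
	Name     string
	Replicas int
}

// RollingUpdaterClient is a sketch of a narrowly scoped interface: only the
// handful of calls the updater actually needs, instead of the full
// client.Interface.
type RollingUpdaterClient interface {
	GetReplicationController(namespace, name string) (*controller, error)
	UpdateReplicationController(namespace string, rc *controller) (*controller, error)
	DeleteReplicationController(namespace, name string) error
}

// fakeClient shows the testing benefit: an in-memory implementation is a few
// lines rather than a stub of the whole client surface.
type fakeClient struct{ rcs map[string]*controller }

func (f *fakeClient) GetReplicationController(ns, name string) (*controller, error) {
	return f.rcs[name], nil
}

func (f *fakeClient) UpdateReplicationController(ns string, rc *controller) (*controller, error) {
	f.rcs[rc.Name] = rc
	return rc, nil
}

func (f *fakeClient) DeleteReplicationController(ns, name string) error {
	delete(f.rcs, name)
	return nil
}

func main() {
	var c RollingUpdaterClient = &fakeClient{rcs: map[string]*controller{"app-v1": {Name: "app-v1", Replicas: 3}}}
	rc, _ := c.GetReplicationController("default", "app-v1")
	fmt.Println(rc.Name, rc.Replicas)
}
```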
When kubectl does rolling updates of replication controllers, retry updates that
fail due to version mismatches (caused by concurrent updates by other clients).
These failed rolling updates were causing intermittent e2e test failures
(e.g. issue #5821).
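A sketch of such a retry loop in the abstract, with a stand-in conflict error and hypothetical fetch/update callbacks rather than the real client calls:

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the API "version mismatch" (409 Conflict) error
// returned when another client updated the controller concurrently.
var errConflict = errors.New("conflict: resource version mismatch")

// updateWithRetries sketches the retry loop: re-fetch the latest object,
// apply the mutation, and try again whenever the update hits a conflict.
func updateWithRetries(attempts int, fetch func() (string, error), update func(latest string) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		var latest string
		if latest, err = fetch(); err != nil {
			return err
		}
		if err = update(latest); err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err // only retry version mismatches
		}
	}
	return err
}

func main() {
	calls := 0
	err := updateWithRetries(3,
		func() (string, error) { return fmt.Sprintf("rv-%d", calls), nil },
		func(latest string) error {
			calls++
			if calls < 2 {
				return errConflict // first attempt loses the race
			}
			return nil
		})
	fmt.Println(err, calls) // <nil> 2: succeeded on the second attempt
}
```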