Automatic merge from submit-queue
Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, allowing users to specify the number of seconds to wait after a pod becomes "ready" (i.e., its readiness probe has passed) before counting it as available.
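In sketch form, the availability check looks roughly like the following (hypothetical helper, not the actual RollingUpdater code):

```go
package rollingupdate

import "time"

// podAvailable is a hypothetical helper: a pod that passed its readiness
// probe at readySince only counts as available once it has stayed ready
// for at least minReadySeconds.
func podAvailable(readySince time.Time, minReadySeconds int32, now time.Time) bool {
	if readySince.IsZero() {
		return false // the pod never became ready
	}
	return now.Sub(readySince) >= time.Duration(minReadySeconds)*time.Second
}
```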
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to roll to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
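For example (controller and image names are illustrative), re-deploying the same tag while forcing a fresh pull:

```sh
# Hypothetical names; rolls the RC to the image it already runs,
# forcing each replacement pod to re-pull the tag.
kubectl rolling-update frontend --image=myrepo/frontend:latest --image-pull-policy=Always
```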
Due to rounding down for maxUnavailable, we may end up with rolling
updates that have zero surge and zero unavailable pods, something that
1) is not allowed per validation and 2) blocks updates. If we end up in
such a situation, set maxUnavailable to 1, on the theory that surge
might not work due to quota.
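A hedged sketch of that guard (plain ints instead of the real int-or-percent values; the function name is illustrative):

```go
package rollingupdate

// resolveFenceposts applies the fallback described above: if rounding
// leaves both maxSurge and maxUnavailable at zero, which validation
// rejects and which would deadlock the update, prefer allowing one
// unavailable pod over one surge pod, since surging may be blocked by
// resource quota.
func resolveFenceposts(maxSurge, maxUnavailable int) (int, int) {
	if maxSurge == 0 && maxUnavailable == 0 {
		maxUnavailable = 1
	}
	return maxSurge, maxUnavailable
}
```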
During a rolling update for Deployments, the total count of surge pods
is calculated by adding the desired number of pods (deployment.Spec.Replicas)
to maxSurge. During a kubectl rolling update, the total count of surge
pods is calculated by adding the original number of pods (oldRc.Spec.Replicas
via an annotation) to maxSurge. This commit changes kubectl to use the
desired replica count as well.
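The changed arithmetic, in sketch form (illustrative names, not the actual kubectl code):

```go
package rollingupdate

// maxTotalPods is the ceiling on how many pods may exist at once during
// the update. Previously kubectl computed originalReplicas + maxSurge;
// using the desired count matches the Deployment calculation.
func maxTotalPods(desiredReplicas, maxSurge int) int {
	return desiredReplicas + maxSurge
}
```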
Combine the fields that will be used for content transformation
(content-type, codec, and group version) into a single struct in client,
and then pass that struct into the rest client and request. Set the
content-type when sending requests to the server, and advertise that
content type as the primary Accept value.
This will form the foundation for content negotiation via the client.
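A rough sketch of the combined struct (field types are simplified placeholders; the real client uses its runtime codec and group-version types):

```go
package client

// ContentConfig groups the serialization settings that were previously
// threaded through the client separately.
type ContentConfig struct {
	// ContentType is set as Content-Type on requests and listed as the
	// primary Accept value.
	ContentType string
	// GroupVersion selects the API group/version to encode against.
	GroupVersion string
	// Codec performs the actual encoding and decoding (placeholder type).
	Codec interface{}
}
```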
Support a desired replica count of 0 for the new RC. Users sometimes
want to roll out a new "inactive" template with the intent of scaling
it up manually later.
Rolling back from a broken update with only one replica fails with a
timeout in the existing code.
The problem is that the scale-down logic does not consider unavailable
replicas in the old replication controller when calculating how much to
scale down by. This leads to an obvious problem with a single replica
whose minimum availability is 1.
The fix is to allow scaling down all unavailable replicas in the old
controller, while still maintaining the minimum-availability invariant.
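The fix, in sketch form (hypothetical helper; the real code works on replication controller objects):

```go
package rollingupdate

// oldScaleDown returns how many old replicas may be removed in one step.
// Unavailable old replicas contribute nothing to availability, so they
// can always be removed; available ones can go only while the total
// available count stays at or above minAvailable.
func oldScaleDown(oldReplicas, oldAvailable, newAvailable, minAvailable int) int {
	unavailable := oldReplicas - oldAvailable
	surplus := oldAvailable + newAvailable - minAvailable
	if surplus < 0 {
		surplus = 0
	}
	count := unavailable + surplus
	if count > oldReplicas {
		count = oldReplicas
	}
	return count
}
```

In the single-replica rollback case above (oldReplicas=1, oldAvailable=0, newAvailable=0, minAvailable=1), this returns 1, so the broken replica can be removed; the old logic returned 0 and the rollback timed out.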
The pending codec -> conversion split changes the signature of
Encode and Decode to be more complicated. Create a stub helper
with the exact semantics of today and do the simple mechanical
refactor here to reduce the cost of that change.
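A minimal illustration of the pattern (hypothetical interfaces; the actual post-split signatures differ):

```go
package runtimeutil

import (
	"bytes"
	"io"
)

// Object stands in for the runtime object type.
type Object interface{}

// StreamEncoder is a hypothetical post-split encoder with a richer,
// stream-oriented signature.
type StreamEncoder interface {
	EncodeToStream(obj Object, w io.Writer) error
}

// Encode is the stub helper: it preserves today's one-shot []byte
// semantics on top of the more complicated interface, so call sites can
// be mechanically rewritten now and left alone later.
func Encode(e StreamEncoder, obj Object) ([]byte, error) {
	var buf bytes.Buffer
	if err := e.EncodeToStream(obj, &buf); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```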
Accept a codec as a parameter to the CreateNewControllerFromCurrentController function. Add tests for performing a rolling update on a container in a multi-container pod.
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.