When finishing a rolling update, a message is shown:
'Renaming oldRc to newRc'
In my case that would be:
'Renaming nginx to nginx-df8d9fafc5007d83968cd37c8353d52e'
That naming order is reversed: the hash-suffixed name is the temporary one, and the final name should be the old one, so the arguments to the message are swapped.
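A minimal sketch of the corrected output, assuming the updater prints the rename via `fmt.Fprintf` (the variable names here are illustrative, not the actual source):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative names; in the updater these come from the old and new
	// ReplicationControllers.
	oldName := "nginx"
	tempName := "nginx-df8d9fafc5007d83968cd37c8353d52e"
	// The temporary (hash-suffixed) controller is renamed back to the
	// original name, so it should come first in the message.
	fmt.Fprintf(os.Stdout, "Renaming %s to %s\n", tempName, oldName)
}
```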
Automatic merge from submit-queue
Add MinReadySeconds to rolling updater
Add MinReadySeconds support to RollingUpdater, which allows specifying a number of seconds to wait after a pod is "ready" (its readiness probe has passed) before counting it as available.
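A sketch of the availability check this implies; the helper name and API vintage are assumptions, not the RollingUpdater source:

```go
package podutil

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// isAvailable sketches the MinReadySeconds semantics: a pod counts as
// available only once its Ready condition has held for at least
// minReadySeconds.
func isAvailable(pod *v1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != v1.PodReady || c.Status != v1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		// Only available once the Ready condition has been true long enough.
		readyFor := time.Duration(minReadySeconds) * time.Second
		return !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(readyFor).Before(now)
	}
	return false
}
```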
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to roll to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can refer to multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
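A sketch of the disambiguation guard this adds, with illustrative names (a typical invocation would be `kubectl rolling-update frontend --image=myrepo/app:latest --image-pull-policy=Always`):

```go
package cmd

import "fmt"

// validateSameImageUpdate sketches the ambiguity check: rolling to the very
// same image string is only meaningful as a forced re-pull/restart, so an
// explicit pull policy is required in that case. This is not the exact
// kubectl implementation.
func validateSameImageUpdate(oldImage, newImage, pullPolicy string) error {
	if newImage == oldImage && pullPolicy == "" {
		return fmt.Errorf("--image-pull-policy must be specified when --image is the same as the existing container image")
	}
	return nil
}
```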
Due to rounding down for maxUnavailable, we may end up with rolling updates
that have zero surge and zero unavailable pods, something that 1) is not allowed
per validation and 2) blocks updates. If we end up in such a situation,
set maxUnavailable to 1, on the theory that surge might not work due to
quota.
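A sketch of the resulting fencepost resolution, assuming `intstr`-style values; this mirrors the described fix rather than the exact source:

```go
package deploymentutil

import "k8s.io/apimachinery/pkg/util/intstr"

// resolveFenceposts rounds maxSurge up and maxUnavailable down. If both come
// out zero (which validation forbids and which would wedge the rollout),
// maxUnavailable is bumped to 1, since surging may be blocked by quota.
func resolveFenceposts(maxSurge, maxUnavailable intstr.IntOrString, desired int) (surge, unavailable int, err error) {
	surge, err = intstr.GetValueFromIntOrPercent(&maxSurge, desired, true) // round up
	if err != nil {
		return 0, 0, err
	}
	unavailable, err = intstr.GetValueFromIntOrPercent(&maxUnavailable, desired, false) // round down
	if err != nil {
		return 0, 0, err
	}
	if surge == 0 && unavailable == 0 {
		unavailable = 1
	}
	return surge, unavailable, nil
}
```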
During a rolling update for Deployments, the total count of surge pods
is calculated by adding the desired number of pods (deployment.Spec.Replicas)
to maxSurge. During a kubectl rolling update, the total count of surge
pods is calculated by adding the original number of pods (oldRc.Spec.Replicas,
via an annotation) to maxSurge. This commit changes the kubectl calculation
to use desired replicas as well.
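A worked example of the difference, using hypothetical numbers:

```go
package main

import "fmt"

func main() {
	// Hypothetical update that scales from 5 to 10 replicas.
	original, desired, maxSurge := 5, 10, 2

	// Previous kubectl behavior: cap total pods based on the original size.
	oldCap := original + maxSurge // 7

	// After this change, matching Deployments: cap based on the desired size.
	newCap := desired + maxSurge // 12

	fmt.Println(oldCap, newCap)
}
```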
Support a desired replica count of 0 for the new RC. Users sometimes
want to roll out a new "inactive" template with the intent of scaling
it up manually later.
Rolling back from a broken update with only one replica fails with a
timeout in the existing code.
The problem is that the scale-down logic does not consider unavailable
replicas in the old replication controller when calculating how much to
scale down by. This leads to an obvious problem with a single replica
when the minimum available count is 1.
The fix is to allow scaling down all unavailable replicas in the old
controller, while still maintaining the minimum-availability invariant.
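A sketch of the fixed arithmetic, with illustrative names:

```go
package rollingupdater

// scaleDownBy returns how many old replicas may be removed. Unavailable
// replicas serve no traffic, so deleting them cannot violate the
// availability invariant; available replicas may only be removed down to
// minAvailable.
func scaleDownBy(oldReplicas, oldAvailable, minAvailable int) int {
	allowed := oldReplicas - oldAvailable // unavailable replicas are always removable
	if oldAvailable > minAvailable {
		allowed += oldAvailable - minAvailable
	}
	return allowed
}
```

In the single-replica scenario above (oldReplicas=1, oldAvailable=0, minAvailable=1), the old logic computed zero and stalled, while this yields 1.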
Accept a codec as a parameter to the CreateNewControllerFromCurrentController function. Add tests for performing a rolling update on a container in a multi-container pod.
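A self-contained sketch of the behavior those tests cover, with deliberately simplified types (this is not the kubectl source): in a multi-container pod, only the named container's image changes.

```go
package main

import "fmt"

// Container is a simplified stand-in for the pod spec's container entries.
type Container struct {
	Name, Image string
}

// updateContainerImage replaces the image of only the named container.
func updateContainerImage(containers []Container, name, image string) []Container {
	for i := range containers {
		if containers[i].Name == name {
			containers[i].Image = image
		}
	}
	return containers
}

func main() {
	pod := []Container{{"app", "nginx:1.9"}, {"logger", "fluentd:1.0"}}
	fmt.Println(updateContainerImage(pod, "app", "nginx:1.10"))
}
```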
All external types that are not int64 are now marked as int32,
including
IntOrString. Prober is now int32 (43 years should be enough of an initial
probe time for anyone).
Did not change the metadata fields for now.
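A trimmed sketch of what the width change looks like on the probe timing fields (not the full type definition):

```go
package v1

// Probe is trimmed here to the timing fields affected by the change; the
// real type carries the handler and threshold fields as well.
type Probe struct {
	InitialDelaySeconds int32 `json:"initialDelaySeconds,omitempty"` // now explicitly int32
	TimeoutSeconds      int32 `json:"timeoutSeconds,omitempty"`      // now explicitly int32
}
```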
Improve the rolling updater rollback/abort function by making it aware
of the original replicas annotation: if the rollback target has the
original replica count recorded, prefer it over the desired annotation
since the update from old to new could have been asymmetrical.
For example, when scaling from 5 to 10, aborting should scale back to 5.
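A sketch of the rollback preference, assuming an annotation key of roughly this form and simplified types:

```go
package rollingupdater

import "strconv"

// Assumed annotation key; the updater records the pre-update size under a
// key of this form.
const originalReplicasAnnotation = "kubectl.kubernetes.io/original-replicas"

// abortReplicas prefers the recorded original count when rolling back, and
// falls back to the desired count when no original count was recorded.
func abortReplicas(annotations map[string]string, desired int) int {
	if v, ok := annotations[originalReplicasAnnotation]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return desired
}
```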
In many cases clients may wish to view not-ready addresses for endpoints
in order to establish set membership before a pod becomes ready. For instance,
a pod that uses the service endpoints to connect to other pods under
the same service may not want to signal ready before it has
contacted at least a minimal number of other pods.
This is backwards compatible with old servers and clients. There is
an additional cost in the size of endpoints before services ramp up, which
will add minor CPU and memory use for services that have a significant
number of pods that have not yet become ready.
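The shape of the change, trimmed to the relevant fields with simplified stand-in types:

```go
package v1

// Simplified stand-ins so the sketch is self-contained; the real types
// carry more fields.
type EndpointAddress struct{ IP string }
type EndpointPort struct {
	Name string
	Port int32
}

// EndpointSubset gains NotReadyAddresses, letting opted-in clients see pods
// that exist behind a service but have not passed readiness checks yet.
type EndpointSubset struct {
	Addresses         []EndpointAddress `json:"addresses,omitempty"`         // ready pod IPs
	NotReadyAddresses []EndpointAddress `json:"notReadyAddresses,omitempty"` // not-yet-ready pod IPs
	Ports             []EndpointPort    `json:"ports,omitempty"`
}
```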