Ensure batch.Kind("Job") has a reaper, so that pods are not orphaned.
Check for orphaned pods in test-cmd.sh.
Also provide a describer and a scaler for batch.Kind("Job"). The scaler, reaper, and describer for extensions can be reused for batch.
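Below is a minimal, self-contained sketch of the factory-style reuse this relies on. GroupKind, JobReaper, and ReaperFor are simplified stand-ins for illustration only, not the actual kubectl types; the point is that both group/kind pairs map to the same reaper implementation.

```go
package main

import "fmt"

// GroupKind is a hypothetical stand-in for the API group/kind pair that the
// reaper/scaler factory switches on.
type GroupKind struct {
	Group string
	Kind  string
}

// JobReaper is a hypothetical reaper; the same implementation can serve both
// extensions.Kind("Job") and batch.Kind("Job"), since the objects differ only
// in API group.
type JobReaper struct{}

func (JobReaper) Stop(namespace, name string) error {
	fmt.Printf("scaling job %s/%s down and deleting it so pods are not orphaned\n", namespace, name)
	return nil
}

// ReaperFor sketches the factory switch: Jobs in the batch group get the same
// reaper as Jobs in extensions.
func ReaperFor(gk GroupKind) (JobReaper, error) {
	switch gk {
	case GroupKind{"extensions", "Job"}, GroupKind{"batch", "Job"}:
		return JobReaper{}, nil
	}
	return JobReaper{}, fmt.Errorf("no reaper for %v", gk)
}

func main() {
	r, err := ReaperFor(GroupKind{"batch", "Job"})
	if err != nil {
		panic(err)
	}
	_ = r.Stop("default", "pi")
}
```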
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions to pkg/api/unversioned.
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new API groups.
Also move the LabelSelectorAsSelector utility (along with its test) and the LabelSelectorOp* constants from pkg/apis/extensions/ to pkg/api/unversioned/.
The validation functions ValidateLabelSelectorRequirement and ValidateLabelSelector likewise move from pkg/apis/extensions/validation to pkg/api/unversioned.
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
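For context, here is a simplified, self-contained sketch of the moved types and of what LabelSelectorAsSelector does. The struct fields mirror the real API shape, but labelSelectorAsString is a toy analogue that renders a label-query string rather than returning a labels.Selector, and its output format is illustrative only.

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified versions of the types that now live in pkg/api/unversioned.
type LabelSelectorOperator string

const (
	LabelSelectorOpIn           LabelSelectorOperator = "In"
	LabelSelectorOpNotIn        LabelSelectorOperator = "NotIn"
	LabelSelectorOpExists       LabelSelectorOperator = "Exists"
	LabelSelectorOpDoesNotExist LabelSelectorOperator = "DoesNotExist"
)

type LabelSelectorRequirement struct {
	Key      string
	Operator LabelSelectorOperator
	Values   []string
}

type LabelSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []LabelSelectorRequirement
}

// labelSelectorAsString is a toy analogue of LabelSelectorAsSelector: it turns
// the structured selector into a human-readable query string.
func labelSelectorAsString(ls *LabelSelector) string {
	var parts []string
	for k, v := range ls.MatchLabels {
		parts = append(parts, fmt.Sprintf("%s=%s", k, v))
	}
	for _, req := range ls.MatchExpressions {
		parts = append(parts, fmt.Sprintf("%s %s (%s)",
			req.Key, strings.ToLower(string(req.Operator)), strings.Join(req.Values, ",")))
	}
	return strings.Join(parts, ",")
}

func main() {
	sel := &LabelSelector{
		MatchLabels: map[string]string{"app": "myjob"},
		MatchExpressions: []LabelSelectorRequirement{
			{Key: "tier", Operator: LabelSelectorOpIn, Values: []string{"batch"}},
		},
	}
	fmt.Println(labelSelectorAsString(sel))
}
```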
Skip updating resources that already meet the desired replica count.
This change affects both kubectl scale and kubectl delete: reapable resources
that already have the desired number of replicas (the value provided via
--replicas for scale, or zero for delete) won't be updated again, and an
"already scaled" message is printed (in the case of scale).
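A minimal sketch of that short-circuit, with a hypothetical scale helper and a pared-down ReplicationController type standing in for the real client and API object:

```go
package main

import "fmt"

// Hypothetical, simplified view of a scalable resource.
type ReplicationController struct {
	Name     string
	Replicas int
}

// scale skips the update entirely when the controller already has the desired
// replica count and reports "already scaled" instead. updateFn stands in for
// the client call that would persist the new count.
func scale(rc *ReplicationController, desired int, updateFn func(*ReplicationController) error) error {
	if rc.Replicas == desired {
		fmt.Printf("%s already scaled to %d replicas\n", rc.Name, desired)
		return nil
	}
	rc.Replicas = desired
	return updateFn(rc)
}

func main() {
	rc := &ReplicationController{Name: "frontend", Replicas: 3}
	update := func(rc *ReplicationController) error {
		fmt.Printf("updating %s to %d replicas\n", rc.Name, rc.Replicas)
		return nil
	}
	_ = scale(rc, 3, update) // already at desired count: no update issued
	_ = scale(rc, 0, update) // delete path: scale to zero, update is issued
}
```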
When reaping a replication controller that manages an arbitrary number of replicas, missing a poll interval means `kubectl stop|delete` has to wait 3 more seconds to finish, which seems more expensive from the user's point of view than a GET every 100ms.
Also, I think `kubectl resize` should wait for all replicas to spin up before exiting. It feels safer to know that your replication controller status is up to date.
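A rough sketch of the polling trade-off, assuming a hypothetical waitForReplicas helper (not the real kubectl wait code): with a 100ms interval the loop notices the target state almost immediately instead of waiting out a multi-second interval.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForReplicas polls a status function every interval until it reports the
// desired replica count or the timeout elapses.
func waitForReplicas(desired int, interval, timeout time.Duration, status func() int) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if status() == desired {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for replicas")
}

func main() {
	remaining := 3
	status := func() int {
		if remaining > 0 {
			remaining-- // pretend one replica terminates per poll
		}
		return remaining
	}
	if err := waitForReplicas(0, 100*time.Millisecond, 10*time.Second, status); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("all replicas gone")
}
```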
Use custom, narrowly scoped interfaces for client access from the
RollingUpdater and Resizer. This allows for more flexible downstream
integration and unit testing without imposing the burden of implementing
the entire client.Interface for just a handful of methods.
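To illustrate the idea, here is a hedged sketch of such a narrow interface. ScalerClient, fakeClient, and the method names are hypothetical examples, not the actual interfaces introduced by this change; the point is that a test double only has to implement the two calls the resizer uses.

```go
package main

import "fmt"

// ReplicationController is a pared-down stand-in for the real API object.
type ReplicationController struct {
	Namespace string
	Name      string
	Replicas  int
}

// ScalerClient is a hypothetical narrowly scoped interface exposing only the
// calls a resizer needs, rather than the full client.Interface.
type ScalerClient interface {
	GetReplicationController(namespace, name string) (*ReplicationController, error)
	UpdateReplicationController(rc *ReplicationController) (*ReplicationController, error)
}

// fakeClient shows how cheap a test double becomes with the narrow interface.
type fakeClient struct {
	rc ReplicationController
}

func (f *fakeClient) GetReplicationController(namespace, name string) (*ReplicationController, error) {
	copy := f.rc
	return &copy, nil
}

func (f *fakeClient) UpdateReplicationController(rc *ReplicationController) (*ReplicationController, error) {
	f.rc = *rc
	return rc, nil
}

func main() {
	var c ScalerClient = &fakeClient{rc: ReplicationController{Namespace: "default", Name: "frontend", Replicas: 1}}
	rc, _ := c.GetReplicationController("default", "frontend")
	rc.Replicas = 5
	updated, _ := c.UpdateReplicationController(rc)
	fmt.Printf("%s now has %d replicas\n", updated.Name, updated.Replicas)
}
```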
When kubectl does rolling updates of replication controllers, retry updates that
fail due to version mismatches (caused by concurrent updates by other clients).
These failed rolling updates were causing intermittent e2e test failures
(e.g. issue 5821).
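A minimal sketch of the retry pattern, with get/mutate/update closures standing in for the real client calls (updateWithRetry and errConflict are hypothetical names): on a version-mismatch conflict, re-read the latest object, reapply the change, and try the update again rather than failing the rolling update.

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for the "version mismatch" error returned when another
// client updated the object first.
var errConflict = errors.New("conflict: object was modified")

// updateWithRetry re-reads, re-mutates, and retries the update on conflict.
func updateWithRetry(attempts int, get func() (int, error), mutate func(int) int, update func(int) error) error {
	for i := 0; i < attempts; i++ {
		current, err := get()
		if err != nil {
			return err
		}
		err = update(mutate(current))
		if err == nil {
			return nil
		}
		if !errors.Is(err, errConflict) {
			return err // only conflicts are worth retrying
		}
	}
	return fmt.Errorf("gave up after %d conflicting attempts", attempts)
}

func main() {
	replicas, conflictsLeft := 3, 1
	get := func() (int, error) { return replicas, nil }
	update := func(n int) error {
		if conflictsLeft > 0 { // simulate one concurrent update racing with ours
			conflictsLeft--
			return errConflict
		}
		replicas = n
		return nil
	}
	err := updateWithRetry(3, get, func(n int) int { return n + 1 }, update)
	fmt.Println("replicas:", replicas, "err:", err)
}
```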