All remaining scalers were replaced by GenericScaler, except JobScaler.
It is not clear whether JobScaler could use the generic scaler or not.
For more details see the pull request.
Note that we don't change the behaviour of kubectl.
For example, it won't scale new kinds of resources yet; that's the end goal.
The first step is to retrofit existing code to use the generic scaler.
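For context, here is a minimal sketch of the generic-scaler idea: scaling any resource through its scale subresource using client-go's scale client. The helper name is illustrative and the signatures follow current client-go (they have changed across versions); this is not the GenericScaler implementation from the pull request.

```go
package scaling

import (
	"context"
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// scaleResource updates the replica count of any resource that exposes the
// scale subresource, without knowing its concrete Go type. Sketch only.
func scaleResource(ctx context.Context, scales scale.ScalesGetter, gr schema.GroupResource,
	namespace, name string, replicas int32) (*autoscalingv1.Scale, error) {

	current, err := scales.Scales(namespace).Get(ctx, gr, name, metav1.GetOptions{})
	if err != nil {
		return nil, fmt.Errorf("reading scale for %s %s/%s: %w", gr, namespace, name, err)
	}
	current.Spec.Replicas = replicas
	return scales.Scales(namespace).Update(ctx, gr, current, metav1.UpdateOptions{})
}
```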
Automatic merge from submit-queue (batch tested with PRs 50378, 51463, 50006, 51962, 51673). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
ignore unknown resource version in scaler error
**Release note**:
```release-note
NONE
```
Rather than printing `Scaling the resource failed with: An Error; Current resource version Unknown` whenever a ScalerError occurs and the resource version is not known, we should omit the resource-version part of the error message to avoid confusing the user.
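A small sketch of the message construction this implies (the helper name is hypothetical, not the actual kubectl code): the resource-version clause is appended only when a version is actually known.

```go
package scaleerr

import "fmt"

// buildScaleError is a hypothetical helper illustrating the change: the
// "Current resource version" clause is added only when a version is known.
func buildScaleError(cause error, resourceVersion string) string {
	msg := fmt.Sprintf("Scaling the resource failed with: %v", cause)
	if resourceVersion != "" { // omit the clause when the version is unknown
		msg += fmt.Sprintf("; Current resource version %s", resourceVersion)
	}
	return msg
}
```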
Related downstream comment: https://github.com/openshift/origin/issues/16056#issuecomment-326049457
cc @fabianofranz @soltysh @stevekuznetsov @kubernetes/sig-cli-misc
Automatic merge from submit-queue
kubectl: ignore only update conflicts in the scaler
@kubernetes/kubectl is there any reason to retry any other errors?
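As a hedged illustration of retrying only on update conflicts, client-go's retry helper encodes exactly this pattern; the function below is a sketch against today's typed client, not kubectl's actual scaler loop.

```go
package scaleretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setReplicas retries the read-modify-write cycle only when the API server
// reports an update conflict (HTTP 409); any other error is returned as-is.
func setReplicas(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rc, err := c.CoreV1().ReplicationControllers(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		rc.Spec.Replicas = &replicas
		_, err = c.CoreV1().ReplicationControllers(ns).Update(ctx, rc, metav1.UpdateOptions{})
		return err
	})
}
```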
Ensure batch.Kind("Job") has a reaper, so that pods are not orphaned.
Check for orphaned pods in test-cmd.sh.
Also provide describer and scaler for batch.Kind("Job").
The scaler, reaper, and describer for extensions can
be reused for batch.
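A hedged sketch of the factory dispatch this implies (the Reaper interface and JobReaper type below are simplified stand-ins, not the actual kubectl code): both the extensions and batch group kinds map to the same Job implementation.

```go
package reapers

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes"
)

// Reaper deletes a resource and cleans up its dependents (simplified stand-in).
type Reaper interface {
	Stop(namespace, name string, timeout time.Duration) error
}

// JobReaper scales a job down and removes its pods before deleting it
// (implementation elided; sketch only).
type JobReaper struct {
	client  kubernetes.Interface
	timeout time.Duration
}

func (r *JobReaper) Stop(namespace, name string, timeout time.Duration) error {
	// ... scale to zero, wait for the pods to go away, delete the job ...
	return nil
}

// ReaperFor shows how the batch "Job" kind can reuse the reaper originally
// written for the extensions group.
func ReaperFor(kind schema.GroupKind, c kubernetes.Interface) (Reaper, error) {
	switch kind {
	case schema.GroupKind{Group: "extensions", Kind: "Job"},
		schema.GroupKind{Group: "batch", Kind: "Job"}:
		return &JobReaper{client: c, timeout: time.Minute}, nil
	}
	return nil, fmt.Errorf("no reaper has been implemented for %v", kind)
}
```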
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
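To illustrate the selector shape this change moves to, here is a minimal Deployment definition using metav1.LabelSelector instead of a plain map[string]string. It is shown against the current apps/v1 types for brevity; the original change targeted the extensions group.

```go
package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nginxDeployment() *appsv1.Deployment {
	replicas := int32(3)
	labels := map[string]string{"app": "nginx"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			// A full LabelSelector (matchLabels and matchExpressions)
			// rather than a plain map[string]string.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.25"}},
				},
			},
		},
	}
}
```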
Skip updating resources that already meet the desired replica count.
This change affects both kubectl scale and kubectl delete: reapable resources
that already have the desired number of replicas (the number provided via
--replicas for scale, or zero for delete) won't be updated again, and an
"already scaled" message will be printed (in the case of scale).
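A minimal sketch of the guard this describes (the helper is hypothetical, not the actual reaper code): the update is skipped and a message printed when the object already has the desired replica count.

```go
package scaleguard

import "fmt"

// updateNeeded reports whether a scale update should be issued; when the
// resource already has the desired count it prints the "already scaled"
// message and the caller skips the write.
func updateNeeded(name string, current, desired int32) bool {
	if current == desired {
		fmt.Printf("%s already scaled to %d replica(s)\n", name, desired)
		return false
	}
	return true
}
```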
This commit adds support for using kubectl scale to scale deployments. It uses the
deployments/scale endpoint instead of updating deployment.spec.replicas directly.
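A hedged sketch of driving the deployments/scale subresource with today's typed client-go API (GetScale/UpdateScale) rather than mutating deployment.spec.replicas; the helper name is illustrative and the original commit predates this client surface.

```go
package deployscale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment writes the desired replica count through the
// deployments/scale subresource instead of updating the Deployment object.
func scaleDeployment(ctx context.Context, c kubernetes.Interface, ns, name string, replicas int32) error {
	s, err := c.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	s.Spec.Replicas = replicas
	_, err = c.AppsV1().Deployments(ns).UpdateScale(ctx, name, s, metav1.UpdateOptions{})
	return err
}
```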