Split the arguments passed to the cluster autoscaler binary,
so each argument is passed separately.
This is preparatory work for migrating CA to a distroless base image:
passing multiple arguments together as a single string does not work if CA is
not wrapped in a shell script, since a distroless image has no shell to perform word splitting.
Change-Id: I26b5a764d2a12079c7f4ed6633ccabf8d623e232
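For illustration, a minimal sketch of the difference in a container spec, assuming a hypothetical set of flags (the actual flags and manifest are in the GCE cluster-autoscaler manifest, not shown here):

```yaml
# Hedged sketch only; the flags below are placeholders, not the exact GCE manifest.
# Before: one string, which relies on a shell to split the words:
#   command:
#   - /bin/sh
#   - -c
#   - /cluster-autoscaler --v=4 --cloud-provider=gce
# After: each argument is its own list element, so no shell is required
# (a distroless base image ships no shell at all):
command:
- /cluster-autoscaler
- --v=4
- --cloud-provider=gce
```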
Automatic merge from submit-queue (batch tested with PRs 64503, 64903, 64643, 64987). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Create system:cluster-autoscaler account & role and introduce it to C…
**What this PR does / why we need it**:
This PR adds a cluster-autoscaler ClusterRole & binding, to be used by the Cluster Autoscaler (kubernetes/autoscaler repository).
It also updates GCE scripts to make CA use the cluster-autoscaler user account.
A user account, rather than a service account, is chosen to be more in line with kube-scheduler.
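For reference, a minimal sketch of what such a ClusterRole and binding might look like; the actual rules are defined by the add-on manifest in this PR, and the resources, verbs, and names below are illustrative assumptions only:

```yaml
# Illustrative sketch, not the add-on manifest from this PR.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["nodes", "pods"]      # assumed subset of what CA needs
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
- kind: User                        # a user account, in line with kube-scheduler
  name: cluster-autoscaler
  apiGroup: rbac.authorization.k8s.io
```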
**Which issue(s) this PR fixes**:
Fixes [issue 383](https://github.com/kubernetes/autoscaler/issues/383) from kubernetes/autoscaler.
**Special notes for your reviewer**:
This PR might be treated as a security fix, since prior to it CA on GCE was effectively using the system:cluster-admin account, assumed due to the default handling of unsecured & unauthenticated traffic over plain HTTP.
**Release note**:
```release-note
A cluster-autoscaler ClusterRole is added to cover only the functionality required by Cluster Autoscaler and to avoid abusing the system:cluster-admin role.
action required: Cloud providers other than GCE might want to update their deployments or sample yaml files to reuse the role created via the add-on.
```
This is the 2nd attempt. The previous was reverted while we figured out
the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest mirror. To publish
an image, push to k8s-staging.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging registry is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of Google-owned projects (working on it), we just
do a one-time sync, change the Google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
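For consumers, nothing changes in the manifests themselves. A hedged sketch (the image name and tag are placeholders) of a pod spec pulling through the facade, which serves from the closest regional mirror:

```yaml
# Illustration only; the image and tag are placeholders.
containers:
- name: example
  # Pulled via the read-only facade; the facade picks the nearest
  # regional backend (us/eu/asia) based on the client's source region.
  image: k8s.gcr.io/pause:3.1
```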
Automatic merge from submit-queue (batch tested with PRs 55439, 58564, 59028, 59169, 59259). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
cluster: delete lots of stuff
Let me know if any of this is too aggressive.
see #49213
```release-note
Remove unmaintained kube-registry-proxy support from gce kube-up.
```
Basically just:
* move all manifests into the new gce/manifests dir
* move limit-range into gce/addons/limit-range
* move the ABAC jsonl into gce/manifests. This is gross, but we will
hopefully be able to delete this config soon; it only exists to support
a deprecated feature.
* fix build, release, and deploy to look for everything in its new home