The admission controller adds a default class to PVCs that do not request any
specific class. This way, users (that is, PVC authors) do not need to care about
storage classes: an administrator can configure a default one, and every PVC
that does not name a class will get that default.
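For illustration, a minimal sketch of a claim that leaves the class unset (the claim name and size are hypothetical; it assumes the administrator has already marked one StorageClass as the default):
```
# A PVC with no storageClassName; the admission controller fills in the
# administrator's default class when the claim is created.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```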
Automatic merge from submit-queue
Ubuntu: Enable ssh compression when downloading binaries during cluster creation
Resolves #20971 by using the options provided by ssh.
Native ssh compression has existed for years, and the server is free to disregard the setting, so this should be safe.
With things like the kube binaries I see about a 2x speed increase.
```
λ time scp kubes-bin.tar 9.30.182.251:/mnt/build/kubin
kubes-bin.tar 100% 344MB 10.7MB/s 00:32
real 0m32.284s
user 0m1.679s
sys 0m1.263s
λ time scp -C kubes-bin.tar 9.30.182.251:/mnt/build/kubin
kubes-bin.tar 100% 344MB 22.9MB/s 00:15
real 0m14.810s
user 0m12.858s
sys 0m0.994s
λ ls -lah kubes-bin.tar
-rw-r--r-- 1 mhb staff 344M Jun 2 15:29 kubes-bin.tar
λ tar -tf kubes-bin.tar
kubectl
master/
master/etcd
master/etcdctl
master/flanneld
master/kube-apiserver
master/kube-controller-manager
master/kube-scheduler
node/
node/flanneld
node/kube-proxy
node/kubelet
```
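For cluster-creation scripts, the change boils down to passing the compression flag through the shared ssh/scp options; a hedged sketch (the variable names here are illustrative, not the actual kube-up ones):
```
# -C enables ssh's native compression; the server may disregard it.
SSH_OPTS="-o StrictHostKeyChecking=no -C"
scp ${SSH_OPTS} kubes-bin.tar "${MASTER_IP}:/mnt/build/kubin"
```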
The UI didn't work with the vSphere kube-up implementation. This fixes
that by making the following changes:
* Configure the apiserver with admission controls, especially
  ServiceAccount. This provides the dashboard pod with the token it
  needs to talk to the apiserver, and also benefits other pods that
  require service accounts.
* Add routes to the master so it can communicate with the pods, so
hitting the https://MASTER/ui URL will allow it to contact the
pods.
* Add an extra subject for the cluster IP to the apiserver's
  certificate, so when the dashboard communicates with the apiserver,
  the certificate matches the IP address it's using.
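A rough sketch of the first two changes on the master (the flag spelling matches apiservers of this era; the CIDR and IP values are made up, and other required apiserver flags are omitted):
```
# 1) Enable admission controls, notably ServiceAccount, on the apiserver.
kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota

# 2) Route each node's pod subnet via that node so the master can reach
#    pods, e.g. when proxying https://MASTER/ui to the dashboard.
ip route add 10.244.1.0/24 via 192.168.10.11
```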
Allows loading existing auth from kubeconfig on kube-up if a
valid KUBE_CONTEXT is specified, instead of always force-regenerating
auth (basic or token) when creating a new cluster.
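A hedged sketch of the reuse path (KUBE_CONTEXT comes from this change; the jsonpath query is my illustration, and basic-auth fields would be read the same way):
```
# Reuse the bearer token from an existing kubeconfig context, if any,
# instead of force-regenerating credentials.
if [[ -n "${KUBE_CONTEXT:-}" ]]; then
  KUBE_BEARER_TOKEN=$(kubectl config view --raw \
    -o jsonpath="{.users[?(@.name==\"${KUBE_CONTEXT}\")].user.token}")
fi
```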
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh script
will:
- Install the right packages for all storage volumes.
- Use devicemapper as the docker storage backend. 'aufs', the default one on
Debian, does not support the extended attributes required by the Ceph RBD and
Gluster server containers.
Tested on GCE and Vagrant; the e2e tests for storage volumes pass without any
additional configuration.
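The hook amounts to a small conditional in the node provisioning script; a sketch with illustrative package names and variable spelling:
```
# Only for the storage e2e environment: extra client packages and a docker
# storage backend that supports extended attributes.
if [[ "${KUBE_E2E_STORAGE_TEST_ENVIRONMENT:-false}" == "true" ]]; then
  apt-get install -y ceph-common glusterfs-client  # packages vary by distro
  DOCKER_STORAGE="devicemapper"                    # aufs lacks xattr support
fi
```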
Deletion is wonderful. The only weird thing was where to put the
message about the proxy URLs. Satnam suggested kubectl cluster-info,
which seemed like a good option to run at the end of cluster turn-up.
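Roughly, the tail of cluster turn-up just shells out to it (the KUBECTL variable is illustrative):
```
# Print master and proxy URLs once the cluster is validated.
echo "Cluster services:"
"${KUBECTL:-kubectl}" cluster-info
```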
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
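As referenced in the kube-addons item above, the start action is conceptually just a loop over the addons directory; a simplified sketch (no retries or error handling, unlike a real init script):
```
# Create every object that salt dropped into the addons directory.
for obj in /etc/kubernetes/addons/*; do
  kubectl create -f "${obj}"
done
```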
Huge caveats: I've done pretty firm testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.