Allows loading existing auth from kubeconfig on kube-up if a
valid KUBE_CONTEXT is specified, instead of always force-regenerating
auth (basic or token) when creating a new cluster.
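For illustration, a minimal sketch of the intended kube-up flow in shell;
the helper names (load_auth_from_context, gen_kube_auth) are placeholders,
not the actual cluster/ script functions:

    # Reuse credentials from an existing kubeconfig context when possible;
    # otherwise fall back to the old behavior of regenerating auth.
    # (load_auth_from_context and gen_kube_auth are hypothetical helpers.)
    if [[ -n "${KUBE_CONTEXT:-}" ]] && load_auth_from_context "${KUBE_CONTEXT}"; then
      echo "Reusing auth from kubeconfig context ${KUBE_CONTEXT}"
    else
      gen_kube_auth   # regenerate basic auth / tokens as before
    fi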
kube-apiserver.service has 'ExecStartPre=/usr/bin/mkdir -p /var/lib/kube-apiserver',
but if the server is not fast enough, 'mv /home/core/known_tokens.csv
/var/lib/kube-apiserver/known_tokens.csv' will fail because the directory
does not exist yet.
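One simple way to sidestep that race, sketched in shell (not the committed
cloud-config, just an illustration using the paths above): wait for the
directory instead of assuming ExecStartPre has already run.

    # Wait until kube-apiserver.service's ExecStartPre has created the
    # directory, then move the token file into place.
    until [[ -d /var/lib/kube-apiserver ]]; do
      sleep 1
    done
    mv /home/core/known_tokens.csv /var/lib/kube-apiserver/known_tokens.csv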
The Kubernetes project has decided that it is better if the kubelet
and kube-proxy use the apiserver REST interface to get and
set resources instead of accessing resource keys in etcd directly.
This is necessary to support kubelet reporting of events,
and also encapsulates the apiserver store details.
This means that the kubelet and kube-proxy need to know the
apiserver host(s) via a flag.
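Roughly, instead of pointing the daemons at etcd, they get started like
this; the flag spellings (--api_servers, --master) and addresses are
assumptions about the flags of that era, not taken from this change:

    # Point the node daemons at the apiserver(s) rather than at etcd.
    kubelet --api_servers=http://10.240.0.2:8080,http://10.240.0.3:8080
    kube-proxy --master=http://10.240.0.2:8080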
Since the Rackspace config already used etcd to advertise the
minions to the controller-manager, I used the same pattern to advertise
the apiserver(s) to the minions.
Setting --public_address_override=$private_ipv4 is intended to ensure that
the master serves its HTTP interface on the right ethernet device, since I
think there are two on a droplet.
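In the master's cloud-config that amounts to something like the line below;
$private_ipv4 is the CoreOS cloud-config substitution variable, and the
binary path and the elided flags are illustrative:

    # ExecStart line (sketch): bind the advertised address to the private NIC.
    /opt/bin/kube-apiserver --public_address_override=$private_ipv4 ...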
The new apiserver-advertiser.service puts the IPs of any apiservers in etcd.
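A sketch of what that advertiser amounts to, assuming etcdctl and an
illustrative key prefix (the real unit and key layout may differ;
COREOS_PRIVATE_IPV4 is the CoreOS-provided environment variable):

    # Periodically publish this apiserver's IP under a well-known etcd
    # prefix with a TTL, so dead masters age out of the list.
    while true; do
      etcdctl set "/corekube/apiservers/${COREOS_PRIVATE_IPV4}" \
        "${COREOS_PRIVATE_IPV4}" --ttl 300
      sleep 120
    done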
The kubelet and kube-proxy now take an environment file which contains
the list of apiserver IPs, and that env var goes into a flag. The
etcd_servers argument is removed -- the point is for these binaries
to not access etcd.
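For concreteness, the shape of that env file and how it feeds the flag;
the file path, variable name, and flag spelling are assumptions, not the
committed config:

    # /run/apiservers.env (illustrative): written by the finder unit below.
    FOUND_APISERVERS=10.240.0.2,10.240.0.3

    # kubelet.service then loads it and expands the variable into the flag:
    #   [Service]
    #   EnvironmentFile=/run/apiservers.env
    #   ExecStart=/opt/bin/kubelet --api_servers=${FOUND_APISERVERS} ...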
The new apiserver-finder.service watches for changes in etcd and
restarts the kubelet and kube-proxy when there are new apiservers.
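A sketch of the finder's core loop, using the same illustrative etcd prefix
and env-file path as above (the real script may differ):

    # Block until something changes under the apiserver prefix, then rebuild
    # the env file and bounce the daemons so they pick up the new list.
    while etcdctl watch --recursive /corekube/apiservers >/dev/null; do
      ips=$(etcdctl ls /corekube/apiservers | sed 's|.*/||' | paste -sd, -)
      echo "FOUND_APISERVERS=${ips}" > /run/apiservers.env
      systemctl restart kubelet kube-proxy
    done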
Deletion is wonderful. The only weird thing was where to put the
message about the proxy URLs. Satnam suggested kubectl clusterinfo,
which seemed like a good option to put at the end of cluster turn-up.
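So at the end of turn-up the scripts can simply tell the user to run the
command named above (exact output elided here):

    # Prints the master address and the proxy URLs for cluster services.
    kubectl clusterinfo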
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to reflect their real
purpose now: these functions ONLY bring up the firewall rules (and
possibly relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
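As noted above, a sketch of what the addons start step amounts to once the
directory is populated; the readiness check and loop are illustrative, and
the real logic lives in the salt-built init script:

    # Wait until the apiserver answers, then create every object that salt
    # dropped into the addons directory.
    until kubectl get pods >/dev/null 2>&1; do
      sleep 5
    done
    for obj in /etc/kubernetes/addons/*; do
      kubectl create -f "${obj}" || echo "failed to create ${obj}; continuing"
    done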
Huge caveats: I've done fairly thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up actually come up. I've tested that it doesn't somehow break
GKE bringup. But I haven't had a chance to test the other providers.
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy are renamed similarly.
The only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works....