There may be multiple security groups if we were using an ELB, and
we have to delete all of them apart from the default one, which EC2
prevents us from deleting.
Also use the same looping logic to clean up from partial up/downs.
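A minimal sketch of the kind of deletion loop this describes; $AWS_CMD,
the VPC id variable, and the query expression are assumptions rather
than the script's actual code:

    # Delete every security group in the cluster's VPC except "default",
    # which EC2 refuses to delete. Ignoring individual failures lets the
    # same loop mop up after partial kube-ups/kube-downs.
    sg_ids=$($AWS_CMD ec2 describe-security-groups \
      --filters "Name=vpc-id,Values=${vpc_id}" \
      --query "SecurityGroups[?GroupName!='default'].GroupId" \
      --output text)
    for sg_id in ${sg_ids}; do
      $AWS_CMD ec2 delete-security-group --group-id "${sg_id}" || true
    done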
Deletion is wonderful. The only weird thing was where to put the
message about the proxy URLs. Satnam suggested kubectl clusterinfo,
which seemed like a good option to put at the end of cluster turn-up.
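Roughly what "at the end of cluster turn-up" means here, as a sketch
only, assuming the usual cluster/kubectl.sh wrapper and the clusterinfo
verb mentioned above:

    # Last step of kube-up: tell the user where the proxied services live.
    echo "Cluster services:"
    "${KUBE_ROOT}/cluster/kubectl.sh" clusterinfo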
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule (see the second sketch after this list). The logging rule needed
a $NETWORK specifier. The monitoring rule tried gcloud describe first,
but now that the rules are instanced per cluster, that check feels
like a waste of time.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
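A minimal sketch of the kube-addons "slurp" from the first item above;
the directory is as described, and the kubectl invocation is an
assumption:

    # "/etc/init.d/kube-addons start", roughly: create every object that
    # salt dropped into the addons directory on the master.
    for obj in /etc/kubernetes/addons/*; do
      kubectl create -f "${obj}"
    done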
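And a sketch of the reworked, per-cluster GCE firewall call from the
rework item above; the rule name, allowed port, and target tag are
illustrative assumptions:

    # Scope the firewall rule to this cluster instead of creating a
    # global rule, and always pass the cluster's network explicitly.
    gcloud compute firewall-rules create "${NODE_INSTANCE_PREFIX}-monitoring" \
      --project "${PROJECT}" \
      --network "${NETWORK}" \
      --allow tcp:80 \
      --target-tags "${NODE_INSTANCE_PREFIX}"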
Huge caveats: I've done fairly firm testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't somehow break GKE
bringup. But I haven't had a chance to test the other providers.
md5sum prints the hash followed by the filename. When the input is
piped in from stdin, the filename is printed as a '-' character.
cluster/aws/util.sh was incorrectly including this '-' character as
part of the S3 bucket name, causing the script to fail on Linux
machines with the md5sum binary,
e.g. "s3://kubernetes-staging-0ac68d8c77915cc1069a9e2f5e1f1d2d -".
Fixed by using `awk` to return only the first column (up to the space).
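A hedged sketch of the fix; the hashed input and variable names are
illustrative:

    # md5sum emits "<hash>  -" when reading stdin; keep only the first
    # field so the trailing '-' never reaches the bucket name.
    bucket_hash=$(echo -n "${seed_string}" | md5sum | awk '{ print $1 }')
    AWS_STAGING_BUCKET="kubernetes-staging-${bucket_hash}"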
Fix calling function before declaration
Set Name tags on instances
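For the Name-tag change, roughly this kind of call; the instance id
and name variables are assumptions:

    # Give each instance a human-readable Name tag so it shows up
    # sensibly in the EC2 console.
    $AWS_CMD ec2 create-tags \
      --resources "${instance_id}" \
      --tags Key=Name,Value="${instance_name}"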
Hide import-key-pair error
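For hiding the import-key-pair error, the usual idiom is to log the
output and tolerate the failure when the key already exists; the key
name, material, and log variables are assumptions:

    # Importing a key pair that already exists fails; stash the error in
    # the log and keep going rather than aborting kube-up.
    $AWS_CMD ec2 import-key-pair \
      --key-name "${AWS_SSH_KEY_NAME}" \
      --public-key-material "${public_key_base64}" \
      > "${KUBE_TEMP}/import-key-pair.log" 2>&1 || true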
Fix instance name resolution
Implement kube-down for AWS provider
Add cluster validation routines. Make changes according to #1255
Implement post-deployment cluster validation
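A sketch of the kind of post-deployment validation loop the last two
items describe; the node count variable, timeout, and use of
`kubectl get nodes` are assumptions:

    # Poll until every expected node has registered with the apiserver,
    # giving up after a bounded number of attempts.
    attempt=0
    while true; do
      found=$("${KUBE_ROOT}/cluster/kubectl.sh" get nodes --no-headers 2>/dev/null | wc -l)
      if (( found >= NUM_MINIONS )); then
        echo "Found ${found} nodes; cluster validation succeeded."
        break
      fi
      if (( attempt > 30 )); then
        echo "Cluster validation failed: ${found}/${NUM_MINIONS} nodes registered." >&2
        exit 1
      fi
      attempt=$(( attempt + 1 ))
      sleep 10
    done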
Set proper master name in userdata scripts
Fix kube-down path in hint
Add getting started for AWS