GCE does this in its per-provider scripts; this change does the same for AWS and lets
other providers do the same. I believe kube2sky requires 10.0.0.1 as a SAN.
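As an illustration only (the real cluster scripts generate their certificates with shell tooling, and this helper is not part of the change), here is a minimal Go sketch of baking both the master's public IP and the assumed 10.0.0.1 service IP into the certificate's SANs:

```go
package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// makeMasterCert is a sketch: it generates a self-signed master cert whose
// SANs include both the externally reachable master IP and the in-cluster
// service IP (10.0.0.1) that kube2sky is believed to require.
func makeMasterCert(masterIP net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "kubernetes-master"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list: the master's public IP plus the service IP.
		IPAddresses: []net.IP{masterIP, net.ParseIP("10.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	return der, key, err
}
```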
This is unfortunate, because it means we have two fingerprints,
although arguably the OpenSSH key fingerprint is much more common.
However, the OS X Mavericks version of ssh-keygen can't compute
the AWS fingerprint correctly (see e.g. https://www.netmeister.org/blog/ssh2pkcs8.html).
So that we work on OS X Mavericks, we use the more common OpenSSH fingerprint.
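To make the two fingerprint formats concrete, here is a hedged Go sketch assuming golang.org/x/crypto/ssh; the helper names are illustrative, and the cluster scripts themselves just call ssh-keygen:

```go
package fingerprint

import (
	"crypto/md5"
	"crypto/x509"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// openSSHFingerprint returns the classic OpenSSH MD5 fingerprint
// (colon-separated hex), the format this change standardises on.
func openSSHFingerprint(authorizedKey []byte) (string, error) {
	pub, _, _, _, err := ssh.ParseAuthorizedKey(authorizedKey)
	if err != nil {
		return "", err
	}
	return ssh.FingerprintLegacyMD5(pub), nil
}

// awsImportFingerprint returns the fingerprint AWS reports for an
// *imported* key pair, believed to be MD5 over the DER (PKIX) encoding
// of the public key. This is the value the Mavericks ssh-keygen cannot
// produce correctly.
func awsImportFingerprint(authorizedKey []byte) (string, error) {
	pub, _, _, _, err := ssh.ParseAuthorizedKey(authorizedKey)
	if err != nil {
		return "", err
	}
	cryptoPub, ok := pub.(ssh.CryptoPublicKey)
	if !ok {
		return "", fmt.Errorf("unsupported key type %q", pub.Type())
	}
	der, err := x509.MarshalPKIXPublicKey(cryptoPub.CryptoPublicKey())
	if err != nil {
		return "", err
	}
	sum := md5.Sum(der)
	out := ""
	for i, b := range sum {
		if i > 0 {
			out += ":"
		}
		out += fmt.Sprintf("%02x", b)
	}
	return out, nil
}
```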
For parity with GCE, we really want to support aufs.
But we previously supported btrfs, so we want to keep exposing that.
Most of the work here is required for aufs, and we let advanced users choose
devicemapper/btrfs if they have a setup that works for those configurations.
Whenever we do a list, we now filter on tags so that we only see resources belonging
to our cluster (see the sketch after the list below).
Also, rationalize all the DescribeX calls:
* They all take a request object (so that we can pass filters)
* They do paging if that is required (and return the underlying resources)
* They wrap any error with an "error while listing X: %v" message
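As a hedged illustration of that shape, written against the aws-sdk-go API rather than whatever client the cluster code actually wraps (the listInstances name, clusterID parameter, and "KubernetesCluster" tag are assumptions), a list helper would take a request, add the cluster tag filter, follow pagination, and wrap errors:

```go
package awslist

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// listInstances sketches the "DescribeX" pattern described above: it accepts
// a request object so callers can pass filters, always adds a cluster tag
// filter, follows pagination, and wraps any error with context.
func listInstances(client *ec2.EC2, clusterID string, request *ec2.DescribeInstancesInput) ([]*ec2.Instance, error) {
	if request == nil {
		request = &ec2.DescribeInstancesInput{}
	}
	// Filter on tags so we only see resources belonging to our cluster.
	request.Filters = append(request.Filters, &ec2.Filter{
		Name:   aws.String("tag:KubernetesCluster"),
		Values: []*string{aws.String(clusterID)},
	})

	var instances []*ec2.Instance
	for {
		response, err := client.DescribeInstances(request)
		if err != nil {
			// Wrap the error with an "error while listing X" message.
			return nil, fmt.Errorf("error while listing instances: %v", err)
		}
		// Return the underlying resources, not the paging wrapper.
		for _, reservation := range response.Reservations {
			instances = append(instances, reservation.Instances...)
		}
		if response.NextToken == nil || *response.NextToken == "" {
			return instances, nil
		}
		request.NextToken = response.NextToken
	}
}
```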
The default behaviour when setting up a cluster is to use the Amazon-assigned public IP,
which changes between reboots. If MASTER_RESERVED_IP is set to 'auto', a new Elastic
IP will be allocated and assigned to the master. If MASTER_RESERVED_IP is set to an existing
Elastic IP, that IP will be used. If anything fails, the original Amazon-assigned IP is used.
When MASTER_RESERVED_IP is set to an Elastic IP from AWS, aws/util.sh will
associate it with the master instance and assign it to KUBE_MASTER_IP. If no MASTER_RESERVED_IP
is set, a new Elastic IP will be requested from Amazon. This allows cluster certificates to
be generated for an IP that doesn't change when cluster instances are stopped and started.
The requested Elastic IP is not released when kube-down.sh is run. I think this is the right
behaviour, because the user may have created DNS records pointing at it, and it would be bad if
the IP were removed. The IP can be reused next time via MASTER_RESERVED_IP when setting up the
cluster again.
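A hedged Go sketch of that allocate-or-reuse decision, written against aws-sdk-go (the real logic lives in the cluster shell scripts and drives the AWS CLI; the ensureMasterIP name and its parameters are illustrative, and the unset/default case of keeping the Amazon-assigned IP is not shown):

```go
package masterip

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// ensureMasterIP sketches the MASTER_RESERVED_IP behaviour described above:
// "auto" allocates a fresh Elastic IP, and a concrete address reuses an
// existing Elastic IP; in both cases the address is associated with the
// master instance and returned for use as KUBE_MASTER_IP.
func ensureMasterIP(client *ec2.EC2, instanceID, masterReservedIP string) (string, error) {
	var publicIP, allocationID string

	if masterReservedIP == "auto" {
		// Allocate a new Elastic IP in the VPC.
		alloc, err := client.AllocateAddress(&ec2.AllocateAddressInput{
			Domain: aws.String("vpc"),
		})
		if err != nil {
			return "", fmt.Errorf("error allocating elastic IP: %v", err)
		}
		publicIP, allocationID = *alloc.PublicIp, *alloc.AllocationId
	} else {
		// Reuse the Elastic IP the user already allocated.
		addrs, err := client.DescribeAddresses(&ec2.DescribeAddressesInput{
			PublicIps: []*string{aws.String(masterReservedIP)},
		})
		if err != nil {
			return "", fmt.Errorf("error looking up elastic IP %s: %v", masterReservedIP, err)
		}
		if len(addrs.Addresses) == 0 {
			return "", fmt.Errorf("elastic IP %s not found", masterReservedIP)
		}
		publicIP, allocationID = masterReservedIP, *addrs.Addresses[0].AllocationId
	}

	// Associate the Elastic IP with the master instance.
	_, err := client.AssociateAddress(&ec2.AssociateAddressInput{
		InstanceId:   aws.String(instanceID),
		AllocationId: aws.String(allocationID),
	})
	if err != nil {
		return "", fmt.Errorf("error associating elastic IP with master: %v", err)
	}
	return publicIP, nil
}
```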
The latest salt version breaks the container_bridge.py _state function.
We can lock to the same version as GCE. This is not a full fix,
because we can't update to the latest salt without breaking GCE,
but it at least unbreaks AWS and syncs it with GCE.
This isn't a straight copy from GCE, because we still use
the salt master on AWS (for now).
Fixes #8114