Similar to #15070, we should log the distro if we're going to tell the
user we can't match it (so the user can see if they have typoed it, and
so it will hopefully be included in error reports sent to us)
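A minimal sketch of the kind of error path this implies, assuming the detected distro is held in a DISTRIBUTION variable (the variable and distro names here are illustrative only):

  case "${DISTRIBUTION}" in
    trusty|wheezy|jessie|coreos)
      ;;
    *)
      # Log the unmatched value so typos are visible and show up in error reports
      echo "Cannot start cluster using os distro: ${DISTRIBUTION}" >&2
      exit 2
      ;;
  esac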
The current timeout of 5 seconds is needlessly short, given that we
fail kube-up if the (eventually consistent?) bucket creation takes
longer.
Raise it to 120 seconds.
Possibly related to issue #14278
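A sketch of the kind of wait this implies, polling for the bucket with a 120-second deadline (the bucket variable and use of the aws CLI are assumptions for illustration):

  timeout=120
  start=$(date +%s)
  until aws s3api head-bucket --bucket "${AWS_S3_BUCKET}" 2>/dev/null; do
    if (( $(date +%s) - start > timeout )); then
      echo "Timed out waiting for bucket ${AWS_S3_BUCKET} to exist" >&2
      exit 1
    fi
    sleep 5
  done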
OpenContrail is open-source networking software that provides network virtualization for the cloud.
This change-set adds the ability to install and provision OpenContrail software for networking in a Kubernetes-based cloud environment.
There are basically three components:
o kube-network-manager -- plugin between Contrail components and Kubernetes components
o provision_master.sh -- OpenContrail software installer and provisioner on the master node
o provision_minion.sh -- OpenContrail software installer and provisioner on the minion node(s)
These are driven via salt configuration files.
One can provision OpenContrail by just setting "export NETWORK_PROVIDER=opencontrail".
Optionally, OPENCONTRAIL_TAG and OPENCONTRAIL_KUBERNETES_TAG can be used to
specify the OpenContrail and contrail-kubernetes software versions to install and provision.
The public-IP subnet provided by Contrail can be configured via the OPENCONTRAIL_PUBLIC_SUBNET
environment variable.
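For example, a typical invocation might look like this (the tag values and subnet below are placeholders, not recommendations):

  export NETWORK_PROVIDER=opencontrail
  export OPENCONTRAIL_TAG=R2.20                   # OpenContrail version to install (placeholder)
  export OPENCONTRAIL_KUBERNETES_TAG=master       # contrail-kubernetes version (placeholder)
  export OPENCONTRAIL_PUBLIC_SUBNET=10.1.0.0/16   # public-IP subnet handed out by Contrail (placeholder)
  ./cluster/kube-up.sh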
At the moment, the plan is to add support for AWS, GCE, and Vagrant-based platforms.
For more information on contrail-kubernetes, please visit https://github.com/juniper/contrail-kubernetes
For more information on OpenContrail, please visit http://www.opencontrail.org
Previously we relied on the S3 bucket's region being configured
correctly, at least for the existence check. By querying for the bucket's
region and then going directly to the correct region, we avoid errors and
potential eventual-consistency problems.
May be related to issue: #12109
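The idea, illustrated with the aws CLI rather than the actual client code (the bucket name is a placeholder): ask S3 where the bucket lives, then issue subsequent calls against that region explicitly.

  # Returns the bucket's region (an empty/null LocationConstraint means us-east-1)
  aws s3api get-bucket-location --bucket my-kube-artifacts
  # Then talk to that region directly for later operations
  aws s3api head-bucket --bucket my-kube-artifacts --region eu-west-1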
This is for people who want to run in a shared VPC/subnet; while this should
work, we don't actively want to support it yet. So we don't block it,
but we don't document/encourage it either!
GCE does this in its per-provider scripts; this change does the same for AWS and lets
other providers follow suit. I believe kube2sky requires 10.0.0.1 as a SAN.
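For illustration only, an IP SAN can be added to a self-signed certificate with a recent OpenSSL (1.1.1+ for -addext); the file names and subject are placeholders, not the scripts' actual certificate generation:

  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout master-key.pem -out master-cert.pem \
    -subj "/CN=kubernetes-master" \
    -addext "subjectAltName=IP:10.0.0.1,DNS:kubernetes,DNS:kubernetes.default"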
This is unfortunate, because it means we have two fingerprints,
although arguably the OpenSSH key fingerprint is much more common.
However, the OS X Mavericks version of ssh-keygen can't compute
the AWS fingerprint correctly (see e.g. https://www.netmeister.org/blog/ssh2pkcs8.html).
So that we work on OS X Mavericks, we use the more common OpenSSH fingerprint.
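For reference, the two fingerprints can be computed roughly like this (the AWS form shown is the MD5-of-DER variant used for imported keys; the key paths are placeholders):

  # OpenSSH fingerprint
  ssh-keygen -lf ~/.ssh/kube_aws_rsa.pub
  # AWS fingerprint for an imported key: MD5 of the DER-encoded public key
  openssl rsa -in ~/.ssh/kube_aws_rsa -pubout -outform DER 2>/dev/null | openssl md5 -c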
For parity with GCE, we really want to support aufs.
But we previously supported btrfs, so we want to expose that.
Most of the work here is required for aufs, and we let advanced users choose
devicemapper/btrfs if they have a setup that works for those configurations.
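Selection might then look like the following; DOCKER_STORAGE as the variable name is an assumption for illustration:

  export DOCKER_STORAGE=aufs            # default, for parity with GCE
  # or, for advanced users with a setup that works for these:
  export DOCKER_STORAGE=btrfs
  export DOCKER_STORAGE=devicemapper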
Whenever we do a list, we now filter on tags so that we only see resources relating
to our cluster.
Also, rationalize all the DescribeX calls:
* They all take a request object (so that we can pass filters)
* They do paging if that is required (and return the underlying resources)
* They wrap any error with an "error while listing X: %v" message
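To illustrate the kind of filter this applies, expressed with the aws CLI rather than the Go client (the tag name and cluster id are assumptions for illustration):

  aws ec2 describe-instances \
    --filters "Name=tag:KubernetesCluster,Values=${CLUSTER_ID}"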
The default behaviour when setting up a cluster is to use the Amazon-assigned public IP,
which changes between reboots. If MASTER_RESERVED_IP is set to 'auto', a new Elastic
IP will be allocated and assigned to the master. If MASTER_RESERVED_IP is set to an existing
Elastic IP, that IP will be used. If something fails, the original Amazon-assigned IP will be used.
When MASTER_RESERVED_IP is set to an Elastic IP from AWS, aws/util.sh will
associate it with the master instance and assign it to KUBE_MASTER_IP. If no MASTER_RESERVED_IP
is set, a new Elastic IP will be requested from Amazon. This allows cluster certificates to
be generated for an IP that doesn't change between stopping and starting cluster instances.
The requested Elastic IP is not released when kube-down.sh is run. I think this is good,
because the user may have created DNS records, and it would be bad if the IP were removed.
It can be reused via MASTER_RESERVED_IP the next time the cluster is set up.
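Usage, as a sketch (the address below is a placeholder):

  # Allocate a fresh Elastic IP and assign it to the master
  export MASTER_RESERVED_IP=auto
  # ...or reuse an Elastic IP you already own
  export MASTER_RESERVED_IP=52.0.0.10
  ./cluster/kube-up.sh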
The latest salt version breaks the container_bridge.py _state function
We can lock to the same version as GCE. This is not a full fix,
because we can't update to the latest salt without breaking GCE,
but this at least unbreaks AWS and syncs it with GCE.
This isn't a straight copy from GCE, because we still use
the salt master on AWS (for now)
Fixes #8114
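A sketch of pinning salt via apt; the exact version is not given here, so treat SALT_VERSION as a placeholder for whatever version GCE pins:

  SALT_VERSION="2014.1.13+ds-1"   # placeholder: use the version GCE locks to
  apt-cache madison salt-minion   # list available versions
  apt-get install -y salt-minion="${SALT_VERSION}"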
We were specifying a region, but naming it as a zone in util.sh
The zone matters just as much as the region, e.g. for EBS volumes.
We also change the config to require a Zone, not a Region.
But we fall back to getting the information from the metadata service.
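A sketch of that fallback, assuming the EC2 instance metadata service; the region is just the availability zone with its trailing letter removed:

  ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
  REGION=${ZONE%?}   # e.g. us-west-2a -> us-west-2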