We can't tag ASGs, but we can see what instances are running in an ASG,
and we can match those by our tags.
So we look for our running instances, find the ASGs that created them,
and delete those ASGs.
This can be defeated (most notably if users change the ASG size to 0),
but it is safer than other deletion methods.
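Roughly, with the AWS CLI (the cluster tag name and variable are illustrative;
the real logic lives in the kube-down script):

    # Find our instances by the cluster tag, read the ASG that created each one,
    # then delete those ASGs.
    asgs=$(aws ec2 describe-instances \
      --filters "Name=tag:KubernetesCluster,Values=${CLUSTER_ID}" \
                "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].Tags[?Key==`aws:autoscaling:groupName`].Value' \
      --output text | sort -u)
    for asg in ${asgs}; do
      aws autoscaling delete-auto-scaling-group \
        --auto-scaling-group-name "${asg}" --force-delete
    done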
By setting KUBE_SHARE_MASTER=true we reuse an existing master, rather
than creating a new one.
By setting KUBE_SUBNET_CIDR=172.20.1.0/24 you can specify the CIDR for a
new subnet, avoiding conflicts.
Both of these options are documented only in kube-up and are clearly marked
as 'experimental', i.e. likely to change.
By combining these, you can kube-up a cluster normally, and then kube-up
a cluster in a different AZ, and the new nodes will attach to the same
master.
KUBE_SHARE_MASTER is also useful for adding a second node
auto-scaling-group, for example if you wanted to mix spot & on-demand
instances.
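For example (the zone names and CIDR are illustrative):

    # Bring up the initial cluster as usual
    KUBE_AWS_ZONE=us-west-2a ./cluster/kube-up.sh

    # Add a second node group in another AZ, reusing the existing master
    KUBE_SHARE_MASTER=true \
    KUBE_AWS_ZONE=us-west-2b \
    KUBE_SUBNET_CIDR=172.20.1.0/24 \
    ./cluster/kube-up.sh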
Allows loading existing auth from a kubeconfig on kube-up if a
valid KUBE_CONTEXT is specified, instead of always force-regenerating
auth (basic or token) when creating a new cluster.
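For example (the context name is illustrative):

    # Reuse credentials from an existing kubeconfig context instead of regenerating them
    KUBE_CONTEXT=my-existing-context ./cluster/kube-up.sh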
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh
script will:
- Install the right packages for all storage volumes.
- Use devicemapper as the Docker storage backend. 'aufs', the default on
Debian, does not support the extended attributes required by the Ceph RBD
and Gluster server containers.
Tested on GCE and Vagrant; the e2e tests for storage volumes pass without
any additional configuration.
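Usage sketch (the e2e invocation and focus regex are illustrative):

    KUBE_E2E_STORAGE_TEST_ENVIRONMENT=true ./cluster/kube-up.sh
    # Then run the storage e2e tests, e.g.:
    go run hack/e2e.go -v --test --test_args="--ginkgo.focus=Volumes"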
We need this for some tests; not all the options are fully plumbed in,
but this should enable experimental/v1alpha1, as needed for the jobs tests.
In particular, ENABLE_NODE_AUTOSCALER is not yet actually implemented.
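A sketch of how this might be used, assuming the RUNTIME_CONFIG variable of the
kube-up scripts (the exact value is illustrative):

    # Assumed variable; value is illustrative
    RUNTIME_CONFIG="experimental/v1alpha1=true" ./cluster/kube-up.sh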
Similar to #15070, we should log the distro if we're going to tell the
user we can't match it (so the user can see if they have typoed it, and
so it will hopefully be included in error reports sent to us).
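A sketch of the intended error path (the variable name and exit code are assumptions):

    # Hypothetical sketch: include the distro in the failure message
    echo "Cannot start cluster using os distro: ${KUBE_OS_DISTRIBUTION}" >&2
    exit 2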
The current timeout of 5 seconds is needlessly short, given that we
fail kube-up if the (eventually consistent?) bucket creation takes
longer.
Raise it to 120 seconds.
Possibly related to issue #14278
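A sketch of the wait with the longer timeout (the bucket variable name is an
assumption; the real change is in the AWS kube-up scripts):

    attempt=0
    while ! aws s3api head-bucket --bucket "${AWS_S3_BUCKET}" 2>/dev/null; do
      if (( attempt >= 120 )); then
        echo "Timed out waiting for S3 bucket ${AWS_S3_BUCKET} to exist" >&2
        exit 1
      fi
      attempt=$((attempt + 1))
      sleep 1
    done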
OpenContrail is open-source networking software that provides network virtualization for the cloud.
This changeset adds the ability to install and provision OpenContrail for networking in Kubernetes-based cloud environments.
There are basically 3 components:
o kube-network-manager -- the plugin between OpenContrail components and Kubernetes components
o provision_master.sh -- installs and provisions the OpenContrail software on the master node
o provision_minion.sh -- installs and provisions the OpenContrail software on the minion node(s)
These are driven via Salt configuration files.
OpenContrail can be provisioned just by setting "export NETWORK_PROVIDER=opencontrail".
Optionally, OPENCONTRAIL_TAG and OPENCONTRAIL_KUBERNETES_TAG can be used to
specify the OpenContrail and contrail-kubernetes software versions to install and provision.
The public-IP subnet provided by OpenContrail can be configured via the
OPENCONTRAIL_PUBLIC_SUBNET environment variable.
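For example (the tag values and subnet below are placeholders):

    export NETWORK_PROVIDER=opencontrail
    # Optional overrides (values are placeholders):
    export OPENCONTRAIL_TAG=R2.20
    export OPENCONTRAIL_KUBERNETES_TAG=master
    export OPENCONTRAIL_PUBLIC_SUBNET=10.1.0.0/16
    ./cluster/kube-up.sh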
At this moment, the plan is to add support for AWS, GCE, and Vagrant-based platforms.
For more information on contrail-kubernetes, please visit https://github.com/juniper/contrail-kubernetes
For more information on OpenContrail, please visit http://www.opencontrail.org
Previously we relied on the S3 bucket's region being configured
correctly, at least for the existence check. By querying for the bucket's
region and then going directly to the correct region, we avoid errors and
potential eventual-consistency problems.
May be related to issue: #12109
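A sketch of the lookup using the AWS CLI (the bucket variable name is an
assumption; the real code calls the API directly):

    region=$(aws s3api get-bucket-location --bucket "${AWS_S3_BUCKET}" \
             --query 'LocationConstraint' --output text)
    # An empty/None LocationConstraint means the classic us-east-1 region
    if [[ -z "${region}" || "${region}" == "None" ]]; then
      region=us-east-1
    fi
    aws s3api head-bucket --bucket "${AWS_S3_BUCKET}" --region "${region}"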
This is for people who want to run in a shared VPC/Subnet; while this should
work, we don't actively want to support it yet. So we don't block it,
but we don't document or encourage it either!
GCE does this in its per-provider scripts; this does the same for AWS and lets
other providers do the same; I believe kube2sky requires 10.0.0.1 as a SAN.
This is unfortunate, because it means we have two fingerprints,
although arguably the OpenSSH key fingerprint is much more common.
However, the OS X Mavericks version of ssh-keygen can't compute
the AWS fingerprint correctly (e.g. https://www.netmeister.org/blog/ssh2pkcs8.html).
So that we work on OS X Mavericks, we use the more common OpenSSH fingerprint.
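For reference, the two fingerprints can be computed roughly as follows (the key
path is an assumption):

    # OpenSSH fingerprint (what we use now):
    ssh-keygen -lf ~/.ssh/kube_aws_rsa.pub

    # AWS-style fingerprint for an imported key: MD5 of the DER-encoded public key
    # (the computation that older OS X ssh-keygen can't reproduce):
    openssl rsa -in ~/.ssh/kube_aws_rsa -pubout -outform DER 2>/dev/null | openssl md5 -c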