I'm not exactly sure why hack/e2e.go IsUp() is returning true right now,
but I can solve this a different way. This unifies with the GCE
behavior, where a no-op kube-down returns 0.
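A minimal sketch of the intended behavior, assuming an illustrative instance query and tag name (this is not the actual kube-down.sh code):
```
#!/usr/bin/env bash
# Sketch only: exit 0 when there is nothing to tear down, matching GCE.
set -o errexit

instance_ids=$(aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=${CLUSTER_ID:-kubernetes}" \
            "Name=instance-state-name,Values=running,pending" \
  --query 'Reservations[].Instances[].InstanceId' --output text)

if [[ -z "${instance_ids}" ]]; then
  echo "Cluster not running; nothing to delete."
  exit 0   # a no-op kube-down is a success, not an error
fi
# ...normal teardown continues here...
```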
Automatic merge from submit-queue
Making DHCP_OPTION_SET_ID creation optional
Reason: We have a pre-configured VPC in AWS. `kube-up.sh` should not make changes to the VPC's DHCP options if DHCP options are already configured.
PR Changes: When `DHCP_OPTION_SET_ID` is supplied as an environment variable, `kube-up.sh` skips creating a new DHCP option set.
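A rough sketch of the skip, using the plain AWS CLI rather than the script's own wrapper (the domain-name handling below is an assumption):
```
if [[ -z "${DHCP_OPTION_SET_ID:-}" ]]; then
  # No pre-configured option set given: create one and attach it to the VPC.
  DHCP_OPTION_SET_ID=$(aws ec2 create-dhcp-options \
    --dhcp-configuration "Key=domain-name,Values=${AWS_REGION}.compute.internal" \
                         "Key=domain-name-servers,Values=AmazonProvidedDNS" \
    --query 'DhcpOptions.DhcpOptionsId' --output text)
  aws ec2 associate-dhcp-options \
    --dhcp-options-id "${DHCP_OPTION_SET_ID}" --vpc-id "${VPC_ID}"
else
  echo "DHCP_OPTION_SET_ID provided; leaving the VPC's DHCP options untouched."
fi
```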
Automatic merge from submit-queue
cluster/aws: Add option for kubeconfig context
Added KUBE_CONFIG_CONTEXT environment variable to customize the kubeconfig context created at the end of the aws kube-up script.
Fixes #24877
This PR does barely anything and shouldn't require e2e tests. It's just a minor convenience.
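As a usage sketch (the default context name and variable plumbing below are assumptions, not the script's actual naming):
```
# Pick the context name: the user's KUBE_CONFIG_CONTEXT wins, otherwise fall
# back to an illustrative provider-derived default.
CONTEXT="${KUBE_CONFIG_CONTEXT:-aws_${INSTANCE_PREFIX}}"

kubectl config set-cluster "${CONTEXT}" --server="https://${KUBE_MASTER_IP}" \
  --certificate-authority="${CA_CERT}"
kubectl config set-credentials "${CONTEXT}" --token="${KUBE_BEARER_TOKEN}"
kubectl config set-context "${CONTEXT}" --cluster="${CONTEXT}" --user="${CONTEXT}"
kubectl config use-context "${CONTEXT}"
```
With that in place, a run like `KUBE_CONFIG_CONTEXT=my-aws-test cluster/kube-up.sh` ends with kubectl already pointed at the `my-aws-test` context.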
Automatic merge from submit-queue
AWS kube-up: move to Docker 1.11.2
This is to mirror GCE.
We also remove support for vivid, since Docker no longer packages for it, and remove some of the unreachable distro code in the AWS kube-up.
We also bump the AMI to a 1.3 version (with Docker 1.11.2 preinstalled).
Fixes https://github.com/kubernetes/kubernetes/issues/27654
Vivid is EOL, and Docker is no longer packaged for it.
Remove support for it in 1.3 (in 1.2 we had warned users it was EOL).
Also remove the unused wheezy, trusty & coreos code, and do general cleanup.
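The gist of the change, as a hedged sketch of config defaults (the variable names and the AMI ID are placeholders, not necessarily what cluster/aws uses):
```
DOCKER_VERSION=${DOCKER_VERSION:-1.11.2}   # mirror GCE's Docker version
AWS_IMAGE=${AWS_IMAGE:-ami-xxxxxxxx}       # placeholder: 1.3-era AMI with Docker 1.11.2 preinstalled
```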
Automatic merge from submit-queue
AWS: More support for ap-northeast-2 region
Issue #24446
The new AWS region for Seoul, Korea (ap-northeast-2)
was launched in January 2016
https://aws.amazon.com/blogs/aws/now-open-aws-asia-pacific-seoul-region/
But supporting it requires a few changes.
To test:
```
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=ap-northeast-2a
export MASTER_SIZE=t2.medium
export NODE_SIZE=t2.medium
export NUM_NODES=4
cluster/kube-up.sh
```
I assigned the AMIs by checking the specific version used from `ap-northeast-1`,
and finding the same image with the same datestamp.
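Conceptually this extends the per-region image lookup along these lines (both AMI IDs below are placeholders):
```
case "${AWS_REGION}" in
  ap-northeast-1)
    AWS_IMAGE=ami-xxxxxxxx   # placeholder: existing Tokyo image
    ;;
  ap-northeast-2)
    AWS_IMAGE=ami-yyyyyyyy   # placeholder: Seoul image with the same datestamp
    ;;
  *)
    echo "No default AMI known for ${AWS_REGION}; please set AWS_IMAGE" >&2
    exit 1
    ;;
esac
```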
Our cluster start time has apparently increased, to the point where users
are reporting spurious timeouts (#23623), and increasing the timeout
reportedly fixes the issue (thanks @paralin for the suggestion and
@jlfields for confirming).
Fix #23623
Added KUBE_CONFIG_CONTEXT environment variable to customize the
kubeconfig context created at the end of the aws kube-up script.
Signed-off-by: Christian Stewart <christian@paral.in>
We rename it to EPHEMERAL_BLOCK_DEVICE_MAPPINGS, and we change the value
so that it starts with a `,` instead of always having a comma inserted before it.
This way the value can be empty.
Also, if the user sets the (currently experimental) KUBE_AWS_STORAGE
environment variable to "ebs", we will not mount any instance storage,
which causes the machines to use EBS storage instead.
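A sketch of how the leading comma makes the empty case work (the JSON fragments and device names are illustrative):
```
# Default: one ephemeral mapping, written with a leading comma.
EPHEMERAL_BLOCK_DEVICE_MAPPINGS=',{"DeviceName":"/dev/sdc","VirtualName":"ephemeral0"}'

if [[ "${KUBE_AWS_STORAGE:-}" == "ebs" ]]; then
  # Experimental: skip instance storage entirely so the machines use EBS.
  EPHEMERAL_BLOCK_DEVICE_MAPPINGS=""
fi

# The value is appended directly after the root-volume mapping, so an empty
# value simply produces a valid single-element list.
BLOCK_DEVICE_MAPPINGS='[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":32}}'"${EPHEMERAL_BLOCK_DEVICE_MAPPINGS}"']'
echo "${BLOCK_DEVICE_MAPPINGS}"
```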
When we have deleted an ELB, we often fail to delete its security group,
because ELB deletion is invisibly asynchronous.
Add a retry loop around delete-security-group to work around this.
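Roughly the retry being added (the attempt count and sleep interval are illustrative):
```
for attempt in {1..30}; do
  if aws ec2 delete-security-group --group-id "${sg_id}" 2>/dev/null; then
    break
  fi
  # The group is still "in use" because the ELB teardown hasn't finished yet.
  echo "Security group ${sg_id} still in use; retrying (attempt ${attempt})..."
  sleep 10
done
```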
Fix #21147
The only tested and working distros are vivid, wily & jessie.
vivid should not really be used because it is no longer supported, so
we recommend wily or jessie instead.
For other distros, we recommend jessie.
Fix #21218
The default distro is jessie, due to the support situation with the Ubuntu
distros; the default Ubuntu distro is wily.
Update the docs to reflect the recommended distros for kube-up, and to
encourage contributions for other distros.
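A sketch of the defaulting described above, assuming the distro knob is KUBE_OS_DISTRIBUTION (the mapping logic itself is illustrative):
```
# Default distro is jessie; "ubuntu" maps to wily. Other values get a pointer
# to the recommended choices.
KUBE_OS_DISTRIBUTION=${KUBE_OS_DISTRIBUTION:-jessie}

case "${KUBE_OS_DISTRIBUTION}" in
  jessie|wily)
    ;;                                # tested and recommended
  ubuntu)
    KUBE_OS_DISTRIBUTION=wily ;;      # default Ubuntu release
  vivid)
    echo "WARNING: vivid is EOL; wily or jessie is recommended" >&2 ;;
  *)
    echo "Untested distro ${KUBE_OS_DISTRIBUTION}; jessie is recommended" >&2 ;;
esac
```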
Spot instances take a lot longer to launch; wait up to 15 minutes for the
nodes when we're using spot instances. (Previously we waited 5 minutes.)
Once we've built the master, we can build kubeconfig. By doing so, if
we time out waiting for the nodes, the system is still configured
correctly.
In particular, spot instances can be slow to launch.
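A sketch of both changes; SPOT_PRICE is a hypothetical flag standing in for however spot instances are enabled, and the ordering is described in comments rather than real helper calls:
```
# Pick a launch deadline: spot requests can take a while to be fulfilled.
if [[ -n "${SPOT_PRICE:-}" ]]; then
  node_launch_timeout=900   # 15 minutes for spot instances
else
  node_launch_timeout=300   # 5 minutes for on-demand instances
fi
echo "Will wait up to ${node_launch_timeout}s for nodes to launch"

# Ordering sketch: bring up the master, write kubeconfig immediately, and only
# then wait (with the deadline above) for the nodes. If the node wait times
# out, kubeconfig has already been written, so the cluster is still usable.
```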
Related to issue #21200
I think we should probably leave this undocumented for now, until we
have a better way to launch multiple sets of nodes, but it's great for
cost savings while testing!
Fix #21200