Following from #27830, this copies the source onto the instance and
displays its location prominently (keeping the download link for
anyone who just wants to curl it).
Example output (this tag doesn't exist yet):
---
Welcome to Kubernetes v1.4.0!
You can find documentation for Kubernetes at:
http://docs.kubernetes.io/
The source for this release can be found at:
/usr/local/share/doc/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
https://storage.googleapis.com/kubernetes-release/release/v1.4.0/kubernetes-src.tar.gz
It is based on the Kubernetes source at:
https://github.com/kubernetes/kubernetes/tree/v1.4.0
For Kubernetes copyright and licensing information, see:
/usr/local/share/doc/kubernetes/LICENSES
---
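As a rough sketch of what that provisioning step might look like (the paths, variable names, and MOTD mechanism here are assumptions, not the literal script from this change):
---
#!/usr/bin/env bash
# Sketch only: stage the source tarball next to the docs, then point the MOTD at it.
version="v1.4.0"                                  # assumed release tag
doc_dir="/usr/local/share/doc/kubernetes"         # assumed install location
mkdir -p "${doc_dir}"
cp kubernetes-src.tar.gz "${doc_dir}/"
cat >> /etc/motd <<EOF
The source for this release can be found at:
  ${doc_dir}/kubernetes-src.tar.gz
Or you can download it at:
  https://storage.googleapis.com/kubernetes-release/release/${version}/kubernetes-src.tar.gz
EOF
---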
This works around a Linux kernel bug with overly aggressive caching of
ARP entries, which was causing problems when we reused IP addresses in
VPCs, for example with an ASG in a relatively small subnet.
See #23395 for more explanation.
Fixes #23395
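The exact sysctls touched by the change aren't reproduced here; a workaround of this shape generally tunes the kernel's neighbour-cache timers so stale ARP entries for a reused IP get re-validated quickly. Illustrative values only:
---
# Illustrative only; the actual settings and values in the change may differ.
# Re-validate ARP (neighbour) entries sooner so a reused IP stops resolving
# to the previous instance's MAC address.
cat > /etc/sysctl.d/99-arp-cache.conf <<EOF
net.ipv4.neigh.default.base_reachable_time_ms = 15000
net.ipv4.neigh.default.gc_stale_time = 30
EOF
sysctl --system
---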
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh script
will:
- Install the right packages for all storage volumes.
- Use devicemapper as the docker storage backend. 'aufs', the default one on
Debian, does not support the extended attributes required by the Ceph RBD and
Gluster server containers.
Tested on GCE and Vagrant; the e2e tests for storage volumes pass without any
additional configuration.
This ensures nfs-common is installed on GCE, and provides a more
functional explanation/example. I launched two replication controllers
so that there were busybox pods to poke around at the NFS volume, and
so that the later wget actually works (the wget in the original example
would only have worked from a node, or with some other access to the
container network). After switching to two controllers, it also makes
more sense to use PV claims, and that indirection is probably a better
fit for NFS anyway.
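For example, poking at the NFS-backed web server from one of the busybox pods (the pod label and service name here are hypothetical, just to show that the wget now runs inside the cluster network):
---
# Hypothetical label/service names; the wget runs from inside the cluster.
pod=$(kubectl get pods -l name=nfs-busybox -o jsonpath='{.items[0].metadata.name}')
kubectl exec "${pod}" -- wget -qO- http://nfs-web/
---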
We want to match the version of netcat that is installed on GCE. We
were having problems with netcat-openbsd having slightly different
timeout behaviour (on UDP packets, when there was no listener).
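A hypothetical provisioning snippet, assuming the GCE images ship the classic netcat rather than the OpenBSD variant (the actual package swap in this change may differ):
---
# Assumption: GCE images carry netcat-traditional, so match that elsewhere.
apt-get remove -y netcat-openbsd || true
apt-get install -y netcat-traditional
---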
port-forward needs socat on the node hosts; we technically
don't need it today on the master, but this seems the right
place to put it, and socat is a small dependency.
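For context, kubectl port-forward tunnels through the node, which is where socat has to be available; a quick hypothetical check and usage:
---
# Ensure socat exists on the node (it is a small dependency).
command -v socat >/dev/null || apt-get install -y socat
# Then, from a client machine (pod name and ports are hypothetical):
kubectl port-forward some-pod 8080:80
---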
- add appropriate server containers into contrib/for-tests/volumes-tester
- the tests are off by default (they need kubelet --allow_privileged=True)
- enable by 'go run hack/e2e.go ... --ginkgo.focus=Volume'
- add glusterfs tools to list of installed packages on each node
In particular, .gitignore, *.go, *.sls and etcd.conf are files that
should not be marked as executable.
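One way to find and clear those stray executable bits from the repository root (a sketch, not necessarily how this change was produced):
---
# Clear the executable bit on file types that should never be executable.
find . \( -name '*.go' -o -name '*.sls' -o -name '.gitignore' -o -name 'etcd.conf' \) \
  -type f -exec chmod -x {} +
git diff   # should now show only mode changes (100755 -> 100644)
---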
Tested: built it with hack/build-go.sh, called all binaries with
the -version flag to confirm they work.
Signed-off-by: Filipe Brandenburger <filbranden@google.com>
Fixed up some scripts to be more robust. Changed the e2e test setup to use g1-small instances. Fixed up documentation to reflect the new script locations. Disabled the "curl | bash" cluster launch as it hasn't been well tested and doesn't include the cloudcfg tool yet.
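"More robust" isn't spelled out above; one common shape for that kind of hardening in cluster shell scripts is strict-mode flags, shown here purely as an illustration and not necessarily the exact changes made:
---
# Illustrative hardening only.
set -o errexit    # abort on any failing command
set -o nounset    # treat use of an unset variable as an error
set -o pipefail   # a pipeline fails if any stage fails
---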