This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficult is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX in the rule
name so rules are instanced per cluster (see the second sketch after
this list). The logging rule needed a $NETWORK specifier. The
monitoring rule used to try a gcloud describe first to check for an
existing rule, but now that rules are instanced per cluster, that
check is just a waste of time.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
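For concreteness, a minimal sketch of what "kube-addons start" amounts
to (the client invocation is an assumption, not lifted from the salt
config; the real script may differ):

  # Sketch: create every addon object in the directory against the
  # local apiserver. Assumes kubectl is on the master's PATH.
  for obj in /etc/kubernetes/addons/*; do
    kubectl create -f "${obj}"
  done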
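And a hedged sketch of the reworked firewall rule creation (the port
and tag shown are illustrative assumptions, not values from this
commit):

  # Instance the rule per cluster via $NODE_INSTANCE_PREFIX, and
  # always pass the network explicitly.
  gcloud compute firewall-rules create "${NODE_INSTANCE_PREFIX}-monitoring" \
    --project "${PROJECT}" \
    --network "${NETWORK}" \
    --allow tcp:80 \
    --target-tags "${MINION_TAG}"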
Huge caveats: I've done pretty firm testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up actually come up. I've tested that it doesn't somehow break
GKE bringup. But I haven't had a chance to test the other providers.
This is essentially a variant of push-ci-build.sh, but it pushes to
the current project. The defaults for gcs::release pick a short hash
of the GCS project, so you end up uploading to something like:
gs://kubernetes-releases-3fda2/devel/v0.8.0-437-g7f147ed/ (where the
last part is the "git describe" of your current commit).
This pushes artifacts in a similar manner to the official release,
except that instead of release/vFOO, it goes to ci/$(git describe),
e.g.: gs://kubernetes-release/ci/v0.7.0-315-gcae5722
It also pushes a text file to gs://kubernetes-release/ci/latest.txt,
so anyone can do, for instance:
gsutil ls gs://kubernetes-release/ci/$(gsutil cat gs://kubernetes-release/ci/latest.txt)
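Since latest.txt holds nothing but a version string, pulling the
newest CI build down is just as easy; a hedged example:

  # Resolve the pointer, then mirror that build locally.
  build=$(gsutil cat gs://kubernetes-release/ci/latest.txt)
  gsutil -m cp -r "gs://kubernetes-release/ci/${build}" .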
(In a parallel change, I'm going to flip the jenkins scripts over to
use git describe, since it's shorter and a little more descriptive)
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
This commit brings two main changes:
Two new options that can be set as environment variables
- DOCKER_OPTS: any arbitrary set of docker options. Example: --tlsverify
- DOCKER_NATIVE: This forces the use of the native docker available.
This is most useful if you're on OS X and do not want
to use boot2docker.
Now uses 'docker cp' instead of tar piping to transfer files. This
currently must be done by copying the binaries off of the docker volume
and into a local filesystem (/tmp) before a docker cp is done. This
workaround will no longer be necessary after bug fix
https://github.com/docker/docker/pull/8509 makes it into stable.
This was necessary because the tar | tar method was creating corrupted
archives on OS X even with the < /dev/null workaround.
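A minimal sketch of the interim workaround (the container and path
names here are hypothetical):

  # 'docker cp' can't yet read files that live on a volume, so stage
  # the binaries onto the container's own filesystem first.
  docker exec kube-build cp -r /go/bin /tmp/kube-bin
  docker cp kube-build:/tmp/kube-bin ./_output/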
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler -> kube-scheduler
proxy -> kube-proxy
The only thing I promise is that right now hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works...
If the failure is a real problem, the build will fail later anyway.
But it is possible that the failure is not fatal and we should let
things go forward. (For example, a filesystem mounted with
context=<something> on a host in permissive mode would cause chcon to
fail, but the build would still work.)
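A minimal sketch of that policy (the SELinux label is the one commonly
applied to docker volumes; ${output_dir} is hypothetical):

  # Warn instead of aborting when chcon fails; a real problem will
  # surface later in the build anyway.
  if ! chcon -Rt svirt_sandbox_file_t "${output_dir}"; then
    echo "Warning: chcon failed, continuing anyway" >&2
  fi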
* Rewrite a bunch of the hack/ directory with modular reusable bash libraries.
* Have 'build/*' build on 'hack/*'. The stuff in build now just runs hack/* in a docker container.
* Use a docker data container to enable faster incremental builds (see the sketch after this list).
* Standardize output to _output/{local,dockerized}/bin/OS/ARCH/*. This regularized placement makes cross compilation work.
* Move travis specific scripts under hack/travis
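A minimal sketch of the data-container pattern (container and image
names are hypothetical):

  # Create a container whose only job is to own the volume holding the
  # Go workspace, then reuse it across builds for incrementality.
  docker create -v /go --name kube-build-data busybox true
  docker run --rm --volumes-from kube-build-data kube-build hack/build-go.sh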
With new dockerized incremental builds, I can do a no-op `make quick-release` in ~30s. This is a significant improvement.
If two repos are building at the same time, we don't want them to stomp on each other as they deal with docker. Work around this by hashing the KUBE_ROOT and mixing that into the container name (sketched below).
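Something along these lines, as a sketch (the exact hashing is an
assumption):

  # Mix a short, stable hash of this checkout's path into the name so
  # two checkouts never share containers.
  KUBE_ROOT_HASH=$(echo -n "${KUBE_ROOT}" | md5sum | cut -c1-10)
  KUBE_BUILD_DATA_CONTAINER_NAME="kube-build-data-${KUBE_ROOT_HASH}"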
Also a bunch of script cleanup: make-clean.sh is now faster and more robust.
Building the docker run images and uploading to GCS are now optional and turned off by default. Doing a release shouldn't hit the network now (I think).
Currently binaries are built using Go 1.2.2, which results
in larger binaries than those produced by newer versions of
Go. The Go source archive used for the build process is not
verified against its SHA1 hash.
Update the build-image Dockerfile to use Go 1.3 to build all
binaries; as a result, binaries are now 20%-30% smaller. The Go
source archive used for building binaries is now verified against its
SHA1 hash (sketched below).
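In the Dockerfile this amounts to something like the following (the
checksum placeholder stands in for the value published with the Go 1.3
release, and the mirror URL is an assumption):

  # Fail the image build if the downloaded archive doesn't match the
  # published SHA1.
  curl -fsSL https://storage.googleapis.com/golang/go1.3.src.tar.gz -o go.tar.gz
  echo "<expected-sha1>  go.tar.gz" | sha1sum -c -
  tar -C /usr/local -xzf go.tar.gz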
The pause image is a 240KB image that simply pauses waiting on a signal.
Use this for the net container which only needs to act as a placeholder.
Current net image is ~2.5MB. From my tests, this reduces startup time
for the net container from ~14s to ~6s.