This commit reimplements hack/e2e-suite/liveness.sh in Go as part of cmd/e2e.
Tested by running it on a live cluster:
$ cmd/e2e --host=https://w.x.y.z --provider=gce -t TestLivenessHttp -t TestLivenessExec
I0122 08:12:53.183298 6502 liveness.go:72] Restart count of pod liveness-exec-6f917474-a251-11e4-8cc2-d4ae52bb3eea increased from 0 to 1 during the test
I0122 08:13:23.605471 6502 liveness.go:72] Restart count of pod liveness-http-84d28569-a251-11e4-8cc2-d4ae52bb3eea increased from 0 to 1 during the test
Also ran the full e2e suite including kube-up/kube-down to confirm it works.
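At its core the Go version just polls the pod's restart count until it
increases or a timeout expires. A minimal sketch of that loop, with the
apiserver lookup hidden behind a callback (the names and the callback are
illustrative, not the actual cmd/e2e code):

    package main

    import (
        "fmt"
        "time"
    )

    // waitForRestart polls getRestartCount until it exceeds the initial value
    // or the timeout expires. In the real test the count comes from the pod's
    // container status on the apiserver; here the lookup is a plain callback
    // so the sketch stays self-contained.
    func waitForRestart(initial int, timeout time.Duration, getRestartCount func() (int, error)) (int, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            count, err := getRestartCount()
            if err != nil {
                return initial, err
            }
            if count > initial {
                return count, nil
            }
            time.Sleep(2 * time.Second)
        }
        return initial, fmt.Errorf("restart count did not increase within %v", timeout)
    }

    func main() {
        // Fake lookup that reports a restart after a couple of polls.
        polls := 0
        count, err := waitForRestart(0, time.Minute, func() (int, error) {
            polls++
            if polls > 2 {
                return 1, nil
            }
            return 0, nil
        })
        fmt.Println(count, err) // 1 <nil>
    }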
Some distros, including RHEL and Fedora, are doing away with the docker
socket by default in their systemd units, for security reasons. Instead,
rely on docker.service being started rather than on socket activation.
The list of valid paths is computed from http.ServeMux and
restful.WebService.
Add a mux helper: a wrapper over the mux that keeps track of the paths
the mux handles.
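A minimal sketch of such a helper, assuming the wrapper delegates to a plain
http.ServeMux and only records the registered patterns (type and field names
are illustrative, not the actual apiserver code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // pathRecorderMux wraps an http.ServeMux and remembers every pattern
    // registered with it, so the server can report the list of paths it
    // handles.
    type pathRecorderMux struct {
        mux   *http.ServeMux
        paths []string
    }

    func (m *pathRecorderMux) Handle(pattern string, handler http.Handler) {
        m.paths = append(m.paths, pattern)
        m.mux.Handle(pattern, handler)
    }

    func (m *pathRecorderMux) HandleFunc(pattern string, handler func(http.ResponseWriter, *http.Request)) {
        m.paths = append(m.paths, pattern)
        m.mux.HandleFunc(pattern, handler)
    }

    // ServeHTTP delegates to the wrapped mux, so the wrapper is a drop-in
    // http.Handler.
    func (m *pathRecorderMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        m.mux.ServeHTTP(w, r)
    }

    func main() {
        m := &pathRecorderMux{mux: http.NewServeMux()}
        m.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "ok") })
        m.HandleFunc("/version", func(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "v0") })
        fmt.Println(m.paths) // [/healthz /version]
    }

The recorded list is what gets served back as the set of valid paths.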
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficulty is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to reflect their real
purpose now: these functions ONLY bring up the firewall rules (and
possibly relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
Huge caveats: I've done fairly thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up actually come up. I've tested that it doesn't somehow break GKE
bringup. But I haven't had a chance to test the other providers.
Before this fix, the server version was printed from a pointer, so the
Go formatter prefixed it with an '&'.
Before this patch:
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"8+", GitVersion:"v0.8.0-509-g8537a73264b836", GitCommit:"8537a73264b836226cfca745ed37d65916e3b16f", GitTreeState:"clean"}
Server Version: &version.Info{Major:"0", Minor:"8+", GitVersion:"v0.8.0-509-g8537a73264b836", GitCommit:"8537a73264b836226cfca745ed37d65916e3b16f", GitTreeState:"clean"}
After this patch:
$ kubectl version
Client Version: version.Info{Major:"0", Minor:"8+", GitVersion:"v0.8.0-509-g8537a73264b836-dirty", GitCommit:"8537a73264b836226cfca745ed37d65916e3b16f", GitTreeState:"dirty"}
Server Version: version.Info{Major:"0", Minor:"8+", GitVersion:"v0.8.0-509-g8537a73264b836", GitCommit:"8537a73264b836226cfca745ed37d65916e3b16f", GitTreeState:"clean"}
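For illustration, a self-contained sketch of the formatter behavior (the Info
type below is a stand-in for version.Info, not the actual kubectl code):
handing %#v the pointer produces the leading '&', while dereferencing first
does not.

    package main

    import "fmt"

    // Info is an illustrative stand-in for version.Info.
    type Info struct {
        Major, Minor, GitVersion, GitCommit, GitTreeState string
    }

    func main() {
        info := &Info{Major: "0", Minor: "8+", GitVersion: "v0.8.0", GitTreeState: "clean"}

        // Formatting the pointer makes %#v print a leading "&".
        fmt.Printf("Server Version: %#v\n", info) // Server Version: &main.Info{...}

        // Dereferencing first prints the struct value itself, matching the
        // client line.
        fmt.Printf("Server Version: %#v\n", *info) // Server Version: main.Info{...}
    }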