separated from the apiserver running locally on the master node so that it
can be enabled or disabled independently as needed.
Also, fix the healthchecking configuration for the master components, which
was previously only working by coincidence:
If a kubelet doesn't register with a master, it never bothers to figure out
its local address, so it ends up constructing a URL like
http://:8080/healthz for the HTTP probe. This happens to work on the master
because all of the pods are using host networking and explicitly binding to
127.0.0.1. Once the kubelet is registered with the master and determines
the local node address, it tries to healthcheck on an address where the pod
isn't listening, and the kubelet periodically restarts each master component
when the liveness probe fails.
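For illustration, here is a minimal Go sketch of that failure mode; probeURL
and the example addresses are invented for this note and are not the
kubelet's actual probe code:

    package main

    import "fmt"

    // probeURL mimics how an HTTP liveness probe URL might be assembled.
    func probeURL(host string, port int, path string) string {
        // With an empty host this degenerates to "http://:8080/healthz",
        // which resolves to the local machine only by accident.
        return fmt.Sprintf("http://%s:%d%s", host, port, path)
    }

    func main() {
        // Unregistered kubelet: no node address known yet.
        fmt.Println(probeURL("", 8080, "/healthz")) // http://:8080/healthz
        // Registered kubelet: probes the node address, which fails if the
        // pod only binds to 127.0.0.1.
        fmt.Println(probeURL("10.240.0.3", 8080, "/healthz"))
    }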
external load balancers up to date based on the service's specs, using
the new DeltaFIFO watch queue type. Remove the old registry REST
handler code for creating/updating/deleting load balancers.
Also clean up a bunch of the GCE cloudprovider code related to load balancers.
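Conceptually, the new flow is a queue of service deltas drained by one sync
loop that reconciles external load balancers, rather than REST handlers
mutating them directly. The Go sketch below illustrates that pattern with
hypothetical stand-ins (ServiceDelta, cloud, syncLoop, fakeCloud); it is not
the real DeltaFIFO or servicecontroller API:

    package main

    import "fmt"

    // ServiceDelta is a hypothetical stand-in for one queued change to a service.
    type ServiceDelta struct {
        Type string // "Added", "Updated", or "Deleted"
        Name string
        Port int
    }

    // cloud is a hypothetical subset of a cloud provider's load balancer API.
    type cloud interface {
        EnsureLoadBalancer(name string, port int) error
        DeleteLoadBalancer(name string) error
    }

    // syncLoop drains queued service deltas and reconciles external load
    // balancers, replacing per-request create/update/delete handlers with a
    // single watch-driven loop.
    func syncLoop(deltas <-chan ServiceDelta, c cloud) {
        for d := range deltas {
            var err error
            switch d.Type {
            case "Added", "Updated":
                err = c.EnsureLoadBalancer(d.Name, d.Port)
            case "Deleted":
                err = c.DeleteLoadBalancer(d.Name)
            }
            if err != nil {
                fmt.Println("sync failed, will retry:", err)
            }
        }
    }

    // fakeCloud just logs calls so the sketch runs standalone.
    type fakeCloud struct{}

    func (fakeCloud) EnsureLoadBalancer(name string, port int) error {
        fmt.Printf("ensure LB %s on port %d\n", name, port)
        return nil
    }

    func (fakeCloud) DeleteLoadBalancer(name string) error {
        fmt.Printf("delete LB %s\n", name)
        return nil
    }

    func main() {
        deltas := make(chan ServiceDelta, 2)
        deltas <- ServiceDelta{Type: "Added", Name: "frontend", Port: 80}
        deltas <- ServiceDelta{Type: "Deleted", Name: "frontend"}
        close(deltas)
        syncLoop(deltas, fakeCloud{})
    }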
This variable can be entirely derived from grains.cloud, and it
simplifies the configuration somewhat. (Or someone convince me I'm
wrong. I'm happy to be wrong here.)
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes a la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this new
flow.
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by
  static config.
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later by other means (see
  https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
  should eventually remove this direction).
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
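As a rough illustration of driving environment variables from one YAML
file instead of templating the startup script, here is a Go sketch; the
file name kube-env.yaml, the keys, and the use of gopkg.in/yaml.v2 are
assumptions for the example, not what configure-vm.sh actually does:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v2"
    )

    func main() {
        // Read a flat key/value YAML file, e.g.:
        //   KUBERNETES_MASTER_NAME: kubernetes-master
        //   ENABLE_NODE_LOGGING: "true"
        data, err := os.ReadFile("kube-env.yaml")
        if err != nil {
            panic(err)
        }
        env := map[string]string{}
        if err := yaml.Unmarshal(data, &env); err != nil {
            panic(err)
        }
        // Emit shell export lines a startup script can source, instead of
        // rewriting the script for every variable change.
        for k, v := range env {
            fmt.Printf("export %s=%q\n", k, v)
        }
    }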
--sync_nodes=false gives the user flexibility to add/remove nodes in the
cluster via the REST API or the kubectl CLI, while still using the
cloud provider for other resources like persistent disks.
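A minimal Go sketch of what a flag like this might gate; the flag wiring
and the loop below are assumptions for illustration, not the actual
controller-manager code:

    package main

    import (
        "flag"
        "fmt"
        "time"
    )

    var syncNodes = flag.Bool("sync_nodes", true,
        "If true, periodically sync the cluster's node list from the cloud provider")

    func main() {
        flag.Parse()
        if !*syncNodes {
            // Nodes are managed entirely via the REST API / kubectl; the cloud
            // provider is still used for other resources (disks, load balancers).
            fmt.Println("node syncing disabled")
            return
        }
        for {
            fmt.Println("syncing node list from cloud provider")
            time.Sleep(10 * time.Second)
        }
    }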
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly (kube-scheduler, kube-proxy).
The only thing I promise is that, right now, hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works...