- Configure the apiserver to listen securely on 443 instead of 6443.
- Configure the kubelet to connect to 443 instead of 6443.
- Update documentation to refer to bearer tokens instead of basic auth.
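For the first two items, a minimal command-line sketch (flag names here are best-effort assumptions against whatever apiserver/kubelet versions are in the tree; `--secure-port` is a known kube-apiserver flag, the kubelet flag is assumed):

```sh
# Sketch: apiserver serves TLS on 443; kubelet targets that port.
kube-apiserver --secure-port=443 ...
kubelet --api-servers=https://${MASTER_NAME}:443 ...
```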
This variable can be entirely derived from grains.cloud, and it
simplifies the configuration somewhat. (Or someone convince me I'm
wrong. I'm happy to be wrong here.)
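For illustration, reading that grain locally in a masterless run might look like this (a sketch; `cloud` is the custom grain the install scripts set, not a stock Salt grain):

```sh
# Query the custom 'cloud' grain locally instead of threading a variable through.
# --out=txt prints "local: <value>", so grab the second field.
CLOUD_PROVIDER=$(salt-call --local --out=txt grains.get cloud | awk '{print $2}')
echo "${CLOUD_PROVIDER}"   # e.g. "gce"
```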
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes, à la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this new
flow.
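Concretely, the masterless flow from that quickstart looks roughly like this (the bootstrap URL and its `-X` flag are per the upstream docs; the rest is a sketch):

```sh
# Masterless install; bootstrap's -X skips starting daemons after install.
curl -L https://bootstrap.saltstack.com | sh -s -- -X
# Belt and braces: make sure no salt-minion daemon is left running.
pkill salt-minion || true
# Tell Salt to use the local file_client so no master is consulted.
echo 'file_client: local' >> /etc/salt/minion
# Apply the highstate locally.
salt-call --local state.highstate
```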
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by
  static config.
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later by other means (see
  https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
  should eventually remove this direction).
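The temporary static stand-ins might look roughly like this (the pillar path, pillar key, and the `INSTANCE_PREFIX`/`NUM_MINIONS` variable names are assumptions modeled on util.sh conventions of the time):

```sh
# Minions get the master's name from a static pillar instead of the Salt mine.
cat <<EOF > /srv/pillar/cluster-params.sls
master_name: ${INSTANCE_PREFIX}-master
EOF

# The master derives the full minion list in util.sh rather than via mine.get.
MINION_NAMES=()
for i in $(seq 1 "${NUM_MINIONS}"); do
  MINION_NAMES+=("${INSTANCE_PREFIX}-minion-${i}")
done
```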
As part of the same change, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables rather than constantly rewriting the startup
script.
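Roughly, the idea is one static configure-vm.sh plus a YAML blob of environment variables in instance metadata (the `kube-env` attribute name and the naive YAML-to-shell conversion below are assumptions; a real YAML parser would be safer):

```sh
# Fetch the env-var YAML from GCE instance metadata.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env" \
  -o /tmp/kube-env.yaml

# Naively turn top-level "KEY: value" pairs into shell variables
# (fine for flat scalar YAML; breaks on nested structures or quotes).
eval "$(sed -n 's/^\([A-Z_][A-Z0-9_]*\): *\(.*\)$/\1="\2"/p' /tmp/kube-env.yaml)"
```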
Configure the apiserver to serve securely on port 6443.
Generate a token for the kubelets during master VM startup.
Put the token into one file the apiserver can read and another the kubelets can read.
Add an e2e test.
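A sketch of the token plumbing (file paths are assumptions; the CSV layout follows the apiserver's token-auth-file format of `token,user,uid`):

```sh
# Generate a random bearer token at master startup.
TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d '=+/\n' | head -c 32)

# File the apiserver reads: one "token,user,uid" line per identity.
echo "${TOKEN},kubelet,kubelet" > /srv/kubernetes/known_tokens.csv

# Separate file the kubelets can fetch and present as a bearer token.
echo "${TOKEN}" > /srv/kubernetes/kubelet_token
```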
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler -> kube-scheduler
proxy -> kube-proxy
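If the binaries live under cmd/, the rename might be as simple as (old paths assumed):

```sh
git mv cmd/apiserver           cmd/kube-apiserver
git mv cmd/controller-manager  cmd/kube-controller-manager
git mv cmd/scheduler           cmd/kube-scheduler
git mv cmd/proxy               cmd/kube-proxy
# kubelet keeps its name.
```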
The only thing I promise is that, right now, hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works...
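To check that one promise locally:

```sh
hack/build-go.sh && build/release.sh && echo "both exited 0"
```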