This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry image itself over the network, which
is unfortunate given its large size (we should be able to dramatically
shrink the image). On GCE we create a PD and use that for storage;
otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
It is kept separate from the apiserver running locally on the master node
so that it can be optionally enabled or disabled as needed.
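For example, once the proxy is up, a node (or a pod spec) can refer to
images through the node-local address; the image name here is a
hypothetical placeholder:

    # Pull through the node-local proxy on port 5000; the proxy forwards
    # the request to the cluster registry.
    docker pull localhost:5000/my-app:v1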
Also, fix the healthchecking configuration for the master components, which
was previously only working by coincidence:
If a kubelet doesn't register with a master, it never bothers to figure out
what its local address is, in which case it ends up constructing a URL like
http://:8080/healthz for the HTTP probe. This happens to work on the master
because all of the pods are using host networking and explicitly binding to
127.0.0.1. Once the kubelet is registered with the master and has determined
the local node address, it tries to healthcheck on an address where the pod
isn't listening, and the kubelet then periodically restarts each master
component when the liveness probe fails.
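As a minimal illustration of the fix, the probe has to name the loopback
address the components actually bind, rather than relying on the kubelet's
idea of the node address (8080 here stands in for the component's port):

    # The master components bind 127.0.0.1, so the liveness check must
    # target that address explicitly.
    curl -sf http://127.0.0.1:8080/healthz && echo healthy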
Docker's v1 registry has gotten slower and slower, and they have no
interest in fixing it. Using a mirror forces v1 mode. Measurements
show that v1 with our mirror is slower than v2 with docker's registry in
just about all metrics.
* Set SHA1 for the Kubernetes server binary and Salt tar in kube-env.
* Check the SHA1 in configure-vm.sh. If the env variable isn't available,
download the SHA1 from GCS and double-check that (see the sketch after
this list).
* Fixes a bug in the devel path where we were actually uploading the
wrong SHA1 to the bucket.
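A sketch of the shape of that check in configure-vm.sh; the variable names
and the .sha1 sidecar-object convention are assumptions for illustration:

    # Fall back to fetching the checksum from GCS when kube-env didn't set it.
    if [[ -z "${SERVER_BINARY_TAR_HASH:-}" ]]; then
      SERVER_BINARY_TAR_HASH=$(curl -fsSL "${SERVER_BINARY_TAR_URL}.sha1")
    fi

    # Download the tar and verify it against the expected SHA1, retrying on failure.
    file="${SERVER_BINARY_TAR_URL##*/}"
    until curl -fsSL -o "${file}" "${SERVER_BINARY_TAR_URL}" && \
          [[ "$(sha1sum "${file}" | cut -d' ' -f1)" == "${SERVER_BINARY_TAR_HASH}" ]]; do
      echo "download or SHA1 check of ${file} failed; retrying" >&2
      sleep 5
    done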
Fixes #10021
Fixes #8196 (maybe, if my theory about how we got there is correct). Also
changes the inference of the master to be based on the master name, not
the node instance prefix. That way, if we somehow have a bogus hostname,
the master will configure itself as a node, the whole cluster will fail,
and the problem will be a ton more obvious.
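A sketch of the new inference, assuming the master name arrives via
kube-env as KUBERNETES_MASTER_NAME (the variable name is an assumption):

    # Decide the role from the master name, not the node instance prefix.
    # With a bogus hostname the master now configures itself as a node and
    # the cluster fails loudly instead of half-working.
    if [[ "$(hostname)" == "${KUBERNETES_MASTER_NAME}" ]]; then
      KUBERNETES_MASTER=true
    else
      KUBERNETES_MASTER=false
    fi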
The next ContainerVM image will have SaltStack in it. Also, be a little
less persnickety if Salt is found already running. That isn't the case
today, but we don't have to be aggressive about it.
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers, but beware: the
symptom of breakage is that service proxying stops working.
1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
1. Distributes the token via salt-overlay, and via Salt to /var/lib/kube-proxy/kubeconfig.
1. Changes the kube-proxy args (see the sketch after this list):
- use the --kubeconfig argument
- change the --master argument from http://MASTER:7080 to https://MASTER
- http -> https
- explicit port 7080 -> implied 443
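A before/after sketch of the resulting invocation; MASTER is a placeholder
hostname:

    # Before: plain HTTP against the unsecured port.
    kube-proxy --master=http://MASTER:7080

    # After: HTTPS (implied port 443), credentials taken from the kubeconfig.
    kube-proxy --master=https://MASTER --kubeconfig=/var/lib/kube-proxy/kubeconfig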
Possible ways this might break other distros:
Mitigation: there is a default empty kubeconfig file. If the distro does
not populate the salt-overlay, then kube-proxy gets the empty file, which
parses to an empty object and, combined with the --master argument,
should still work.
Per-distro mitigations:
- azure: special case to keep using port 7080.
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
ensure-kube-token is not needed anymore because the token is passed in
kube-env. In the kube-up case it is set; in the kube-push case it is an
empty string but not used.
Allow unset KUBELET_TOKEN (for the push case).
Fix a comment.
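A minimal sketch of allowing the unset variable, assuming the script runs
under set -o nounset:

    # Default KUBELET_TOKEN to empty so the push path doesn't trip nounset;
    # kube-up sets a real value, kube-push leaves it empty and unused.
    KUBELET_TOKEN="${KUBELET_TOKEN:-}"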
- Configure the apiserver to listen securely on 443 instead of 6443 (see
the sketch below).
- Configure the kubelet to connect to 443 instead of 6443.
- Update documentation to refer to bearer tokens instead of basic auth.
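A sketch of the flag changes, with all other flags omitted; MASTER is a
placeholder, and the exact kubelet flag name is an assumption about the
flags in use at the time:

    # apiserver: serve HTTPS on the standard port instead of 6443.
    kube-apiserver --secure-port=443

    # kubelet: connect over HTTPS; https:// implies port 443.
    kubelet --api-servers=https://MASTER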
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)
Having separate tokens for the kubelet and kube-proxy permits applying the
principle of least privilege, makes it easy to rate-limit the clients
separately, and allows annotating apiserver logs with the client identity
at a finer grain than just source IP.
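One way to mint the two tokens independently in the setup scripts (a
sketch, not necessarily the exact pipeline used):

    # One random bearer token per client identity, so each client can be
    # granted, rate limited, and audited separately.
    KUBELET_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    KUBE_PROXY_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)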
* Fixes an issue where salt-minion would actually come up after reboot
(upstart is horribly obnoxious)
* Caches .deb downloads
* Handles PD remount on reboot correctly (see the sketch after this list)
* Notes a future optimization
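A sketch of an idempotent remount on reboot; the device path and mount
point are illustrative assumptions:

    # Mount the master PD only if it isn't already mounted, so the script
    # is safe to re-run on every boot.
    mkdir -p /mnt/master-pd
    if ! mountpoint -q /mnt/master-pd; then
      mount -o discard,defaults /dev/disk/by-id/google-master-pd /mnt/master-pd
    fi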
Fixes #5666
Address #6075: Shoot the master VM while saving the master-pd. This
takes a couple of minor changes to configure-vm.sh, some of which also
would be necessary for reboot. In particular, I changed it so that the
kube-token instance metadata is no longer required after inception;
instead, we mount the master-pd and see if we've already created the
known tokens file before blocking on the instance metadata.
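A sketch of that ordering; the known tokens path is an assumption, and the
metadata URL is the standard GCE endpoint:

    # Only block on the kube-token instance metadata if the known tokens
    # file hasn't already been created on the (re)mounted master-pd.
    KNOWN_TOKENS_FILE="/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv"
    if [[ ! -e "${KNOWN_TOKENS_FILE}" ]]; then
      KUBE_TOKEN=$(curl -fs -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-token")
    fi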
Also partially addresses #6099 in bash by refactoring the kube-push
path.
These secrets will be used in subsequent PRs by:
scheduler, controller-manager, monitoring services,
logging services, and skydns.
Each of these services will then be able to stop using kubernetes-ro
or host networking.
It looks like api_servers finally won this battle. Kill off the
last remaining places passing it, but allow the kubelet Salt to
accept apiservers for a period of time.
(This was bothering my OCD.)
We were actually failing to call brctl in configure-vm.sh. I finally
tracked it down to the attempt to delete the docker0 bridge. The package
that provides brctl was getting installed later by Salt anyway, so all
this PR is doing is moving the package install up from Salt to bash.
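A sketch of the relevant piece after the change; installing bridge-utils
(which provides brctl) up front is the point, and the teardown commands
here are the usual ones, stated as an assumption:

    # Install bridge-utils early (it used to arrive later via Salt), so the
    # brctl call below can no longer fail silently.
    apt-get install -y bridge-utils

    # Tear down Docker's default bridge; tolerate it not existing yet.
    ip link set docker0 down || true
    brctl delbr docker0 || true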
Also adds some minor logging.