* Using Fedora 21 as the base box
* Discover the active network interfaces in the box to avoid hardcoding
them in configuration.
* Use the master IP for the certificate.
The default Fedora 21 image requires a manual networking fixup that,
if applied, breaks Fedora 22. This change ensures that the fixup in
question is run only for Fedora 21.
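A gate like the following would express that intent; the /etc/os-release check is an illustrative sketch, not the repository's actual provisioning code:

```bash
# Sketch: run the networking fixup only on Fedora 21 (the os-release fields
# are standard; the fixup body itself is elided here).
source /etc/os-release
if [[ "${ID}" == "fedora" && "${VERSION_ID}" == "21" ]]; then
  # ... Fedora 21-only networking fixup ...
  :
fi
```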
Variables $ENABLE_CLUSTER_MONITORING and $ENABLE_CLUSTER_UI are currently set in cluster/vagrant/config-default.sh but are not passed to the master VM. Therefore, cluster/saltbase/salt/kube-addons/init.sls does not have these variables, and the add-ons cannot be enabled.
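A plausible fix is to inline the values into the provisioning script handed to the master VM, sketched below; the KUBE_ROOT/KUBE_TEMP names and the script-rewriting mechanism are assumptions, not the repo's exact code:

```bash
# Sketch: prepend the flags to the master provision script so the salt
# configuration can see them. Paths and file names are illustrative.
(
  echo '#!/bin/bash'
  echo "ENABLE_CLUSTER_MONITORING='${ENABLE_CLUSTER_MONITORING:-false}'"
  echo "ENABLE_CLUSTER_UI='${ENABLE_CLUSTER_UI:-false}'"
  grep -v '^#!' "${KUBE_ROOT}/cluster/vagrant/provision-master.sh"
) > "${KUBE_TEMP}/master-start.sh"
```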
The error message thrown when KUBERNETES_PROVIDER is vagrant and the
vagrant plugin cannot be found is ambiguous. This change does not alter
functionality; it just gives clearer feedback about the source of the
error.
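Something along these lines makes the failure self-explanatory; the plugin name below is a placeholder, since the message doesn't say which plugin is required:

```bash
# Sketch: fail fast with an actionable message when the required plugin is
# absent. "vagrant-example-plugin" is a placeholder, not the actual name.
if ! vagrant plugin list | grep -q 'vagrant-example-plugin'; then
  echo "Error: KUBERNETES_PROVIDER=vagrant requires the vagrant-example-plugin plugin." >&2
  echo "Install it with: vagrant plugin install vagrant-example-plugin" >&2
  exit 1
fi
```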
MASTER_IP and MINION_IP_BASE are hard-coded in vagrant's
config-default.sh, and the values correspond to virtualbox's default
subnet. On hosts that have both virtualbox and another provider
installed, attempting to deploy kubernetes with the non-virtualbox
provider is likely to result in broken networking. This change allows
the addresses to be overridden via the environment so that more
appropriate values can be used.
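The standard shell idiom for this kind of override looks like the sketch below; the default values shown are illustrative, not necessarily the exact ones in config-default.sh:

```bash
# Sketch: honor the environment if set, otherwise fall back to the
# VirtualBox-era defaults (values illustrative).
export MASTER_IP="${MASTER_IP:-10.245.1.2}"
export MINION_IP_BASE="${MINION_IP_BASE:-10.245.2.}"
```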
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers,
but beware: the symptom of failure would be that
service proxying stops working.
1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
2. Distributes the token via the salt-overlay, and via salt to /var/lib/kube-proxy/kubeconfig.
3. Changes the kube-proxy args:
- use the --kubeconfig argument
- changes --master argument from http://MASTER:7080 to https://MASTER
- http -> https
- explicit port 7080 -> implied 443
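Concretely, the before/after invocation looks roughly like this (flag spellings per the list above; the surrounding service file is not shown in the message):

```bash
# Before (sketch): plaintext on the old read/write port.
kube-proxy --master=http://${MASTER}:7080

# After (sketch): TLS on the implied port 443, plus the kubeconfig
# carrying the new token.
kube-proxy --master=https://${MASTER} --kubeconfig=/var/lib/kube-proxy/kubeconfig
```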
Possible ways this might break other distros:
Mitigation: there is a default empty kubeconfig file.
If the distro does not populate the salt-overlay, then
it should get the empty file, which parses to an empty
object and, combined with the --master argument,
should still work.
Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)
Having separate tokens for kubelet and kube-proxy permits applying the
principle of least privilege, makes it easy to rate-limit the clients
separately, and allows apiserver logs to be annotated with the client
identity at a finer grain than just source IP.
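A minimal sketch of generating and registering such tokens, assuming the dd/base64 recipe and the known_tokens.csv layout used by cluster scripts of this era (both are assumptions here):

```bash
# Sketch: generate a random bearer token per client identity.
gen_token() {
  dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d '=+/' | head -c 32
}
KUBELET_TOKEN=$(gen_token)
KUBE_PROXY_TOKEN=$(gen_token)

# Register each token under its own identity so the apiserver can tell the
# clients apart. The salt-overlay path and CSV format are assumptions.
echo "${KUBELET_TOKEN},kubelet,kubelet" >> /srv/salt-overlay/salt/kube-apiserver/known_tokens.csv
echo "${KUBE_PROXY_TOKEN},kube_proxy,kube_proxy" >> /srv/salt-overlay/salt/kube-apiserver/known_tokens.csv
```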
This variable can be entirely derived from grains.cloud, and it
simplifies the configuration somewhat. (Or someone convince me I'm
wrong. I'm happy to be wrong here.)
It looks like api_servers finally won this battle. Kill off the
last remaining places passing it, but allow the kubelet Salt to
accept apiservers for a period of time.
(This was bothering my OCD.)
Deletion is wonderful. The only weird thing was where to put the
message about the proxy URLs. Satnam suggested kubectl clusterinfo,
which seemed like a good option to put at the end of cluster turn-up.
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir;
see the sketch after this list. (Most of the difficulty is the business
logic in salt around getting that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
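For the "slurp" step itself, the core could be as small as the loop below; this is a sketch of the idea, not the literal init script:

```bash
# Sketch: create every object dropped into the addons directory once the
# apiserver is available. Retry and ordering logic is omitted.
for obj in /etc/kubernetes/addons/*; do
  kubectl create -f "${obj}"
done
```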
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
Huge caveats: I've done fairly thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up actually come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.
* Have a single config file that mirrors other cluster providers
* Warn users not to use 'vagrant up' directly
* Allow 'extra' parameters to the docker daemon. Fixes #2685
* Renumbers things so that they are more sane. Master/minions are 10.245.1.x, container subnets are 10.246.x.1/24, the portal is 10.247.0.0/16.
Quoting is hard. When writing strings into YAML files, wrap them in single quotes, and escape any embedded single quotes in those strings by doubling them ('').
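A helper like this captures the rule; the function name and usage are illustrative only:

```bash
# Sketch: YAML-safe single quoting. yaml_quote is a hypothetical helper.
yaml_quote() {
  local v=$1
  v=${v//"'"/"''"}        # double any embedded single quotes
  printf "'%s'" "${v}"    # wrap the whole value in single quotes
}

yaml_quote "it's fine"    # -> 'it''s fine'
```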
Most platforms use ~/.kubernetes_auth, but Vagrant is different.
This commit fixes one instance where a setup script did not take this
difference into account.
Also fix up cert generation. It was failing during the first salt highstate when trying to chown the certs, because the apiserver user didn't exist yet. Fix this by creating a 'kube-cert' group and chgrp'ing the files to it, then making the apiserver user a member of that group.
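In shell terms the workaround amounts to something like the following; the paths, the chmod mode, and the apiserver user name are assumptions:

```bash
# Sketch: make the certs group-owned so they can be locked down before the
# apiserver user exists.
groupadd -r kube-cert
chgrp kube-cert /srv/kubernetes/server.cert /srv/kubernetes/server.key
chmod 640 /srv/kubernetes/server.key        # group-readable, not world-readable
usermod -a -G kube-cert kube-apiserver      # run once the apiserver user exists
```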
Fixes #2365. Fixes #2368.
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler and proxy similarly.
The only thing I promise is that, right now, hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works...
* Rewrite a bunch of the hack/ directory with modular reusable bash libraries.
* Have 'build/*' build on 'hack/*'. The stuff in build now just runs hack/* in a docker container.
* Use a docker data container to enable faster incremental builds (sketched below).
* Standardize output to _output/{local,dockerized}/bin/OS/ARCH/*. This regularized placement makes cross compilation work.
* Move travis specific scripts under hack/travis
With new dockerized incremental builds, I can do a no-op `make quick-release` in ~30s. This is a significant improvement.
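The data-container trick from the list above works roughly like this; the image names, the cached path, and the script invocation are illustrative assumptions, not the repo's actual build scripts:

```bash
# Sketch: a named volume container keeps compiled Go packages between runs,
# so successive dockerized builds are incremental rather than from scratch.
docker create -v /go/pkg --name kube-build-data busybox /bin/true
docker run --rm --volumes-from kube-build-data kube-build:latest ./hack/build-go.sh
```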