Change the .kubeconfig context that gce kube-up creates to project
+ instance prefix, so you can spin up clusters with the same name
in different compute projects without overwriting .kubeconfig.
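A minimal sketch of the naming scheme, assuming the PROJECT and
INSTANCE_PREFIX variables from the GCE cluster scripts (the exact
kubectl invocation is illustrative):

    # Scope the context by project so identically named clusters in
    # different projects get distinct .kubeconfig entries.
    CONTEXT="${PROJECT}_${INSTANCE_PREFIX}"
    kubectl config set-context "${CONTEXT}" \
      --cluster="${CONTEXT}" --user="${CONTEXT}"
    kubectl config use-context "${CONTEXT}"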
In config-{default,test}.sh, make LOGGING_DESTINATION overridable from
the environment. This will make it possible for e.g. Jenkins to
override LOGGING_DESTINATION. Also reorder the parameters so they're
in the same order across files for easier scanning.
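A sketch of the override pattern (the default value shown is
illustrative):

    # Respect a caller-provided value (e.g. from Jenkins's
    # environment); otherwise fall back to the default.
    LOGGING_DESTINATION="${LOGGING_DESTINATION:-elasticsearch}"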
Adds labels to the services, waits for them to be created (which
should be instant, but just in case), and queries the forwarding rules
as we did before.
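A sketch of the wait-then-query step (the service label, rule, and
region names are illustrative):

    # Wait until the labeled service shows up (should be instant).
    while [[ -z "$(kubectl get services -l name=grafana -o name)" ]]; do
      sleep 1
    done
    # Then query the forwarding rule for its external IP, as before.
    gcloud compute forwarding-rules describe "${RULE}" --region "${REGION}"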
Fixes #3893
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.
This is such a wide commit because our existing initial config story
is special:
* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir
(see the first sketch after this list).
(Most of the difficulty is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now: these functions ONLY bring up the firewall
rules (and possibly relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Scope the rule names with
$NODE_INSTANCE_PREFIX (see the second sketch after this list). The
logging rule needed a $NETWORK specifier. The monitoring rule tried
gcloud describe first, but given the instancing, that now feels like
a waste of time.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.
(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)
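First, a sketch of what "/etc/init.d/kube-addons start" amounts to
(the find pattern and kubectl call are illustrative):

    # Create every object definition staged in the addons directory.
    for obj in $(find /etc/kubernetes/addons -name \*.yaml); do
      kubectl create -f "${obj}"
    done

Second, a sketch of the per-cluster firewall rule (the rule name and
ports are illustrative; the $NODE_INSTANCE_PREFIX scoping and the
--network specifier are the point):

    # Tie the rule's name to this cluster and pin it to its network,
    # instead of creating a global rule.
    gcloud compute firewall-rules create \
      "${NODE_INSTANCE_PREFIX}-fluentd-elasticsearch-logging" \
      --network "${NETWORK}" --allow "tcp:5601,tcp:9200"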
Huge caveats: I've done fairly thorough testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't somehow break GKE
bringup. But I haven't had a chance to test the other providers.
Removed auth for Grafana to facilitate usage via the service proxy on
the apiserver.
Added a Grafana service.
Removed the Elasticsearch dependency for monitoring, for faster
startup times.
Store master state on a persistent disk when running on GCE.
I'll follow up soon with a second PR that enables kube-push to
completely bring down the master VM and replace it with a new one.
Quoting is hard. When writing strings into YAML files, wrap them in
single quotes, and escape any embedded single quotes by doubling them
('').
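A sketch of the quoting rule as a shell helper (the helper name is
illustrative):

    # Emit $1 as a YAML single-quoted scalar: double any embedded
    # single quotes, then wrap the result in single quotes.
    yaml_quote() {
      echo "'$(echo "${1}" | sed -e "s/'/''/g")'"
    }
    yaml_quote "it's fine"   # -> 'it''s fine'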
Configure apiserver to serve securely on port 6443.
Generate a token for kubelets during master VM startup.
Put the token into a file the apiserver can read and another file the
kubelets can read.
Added e2e test.
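A sketch of the token plumbing described above (the generation
pipeline is one common approach; both file paths and the CSV layout
are illustrative):

    # Generate a random bearer token at master startup.
    KUBELET_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null \
      | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    # One copy in a format the apiserver can load...
    echo "${KUBELET_TOKEN},kubelet,kubelet" > /srv/kubernetes/known_tokens.csv
    # ...and one the kubelets can fetch.
    echo "${KUBELET_TOKEN}" > /srv/kubelet/kubernetes_auth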
This change refactors the way Kubelet's DockerPuller handles the docker config credentials to utilize a new credentialprovider library.
The credentialprovider library is based on several of the files from the Kubelet's dockertools directory, but supports a new pluggable model for retrieving a .dockercfg-compatible JSON blob with credentials.
With this change, the Kubelet will lazily ask for the docker config from a set of DockerConfigProvider extensions each time it needs a credential.
This change provides common implementations of DockerConfigProvider for:
- "Default": load .dockercfg from disk
- "Caching": wraps another provider in a cache that expires after a pre-specified lifetime.
GCP-only:
- "google-dockercfg": reads a .dockercfg from a GCE instance's metadata
- "google-dockercfg-url": reads a .dockercfg from a URL specified in a GCE instance's metadata.
- "google-container-registry": reads an access token from GCE metadata into a password field.
Also make downloading more reliable, and run 'highstate' after install
for good measure. As part of this, we no longer use gsutil to
download, and have to make 'staged' binaries in GCS publicly readable
(see the sketch below).
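A sketch of the more reliable download path, assuming the staged
binaries live at a public GCS HTTP URL (bucket and object names are
illustrative):

    # Retry over plain HTTPS instead of shelling out to gsutil; this
    # only works because the staged binaries are publicly readable.
    curl --fail --silent --show-error --retry 5 \
      -o /tmp/kubernetes-server-linux-amd64.tar.gz \
      "https://storage.googleapis.com/${STAGING_BUCKET}/kubernetes-server-linux-amd64.tar.gz"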
This lets us blow away salt files and replace them with a new version,
while keeping a tree of "overlay" files that are cluster-specific and
generated at cluster-up time.
Fixes #1783
Adds GCEPersistentDisk volume struct
Adds gce-utils to attach disk to kubelet's VM.
Updates config to give compute-rw to every minion.
Adds GCEPersistentDisk to API
Adds ability to mount attached disks
Generalizes PD and adds tests.
PD now uses a pluggable API interface.
Unit tests now more cleanly separate SetUp and TearDown.
Modify boilerplate hook to omit build tags
Adds Mounter interface; mount is now built by OS
TearDown() for PD now detaches disk on final refcount
Un-generalized PD; GCE calls moved to cloudprovider
Address comments.
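A sketch of the attach-and-mount flow the GCE utilities drive (disk,
instance, and mount-point names are illustrative; /dev/disk/by-id is
how GCE exposes attached PDs):

    # Attach the PD to the minion, then mount it via its stable id.
    gcloud compute instances attach-disk "${MINION}" --zone "${ZONE}" \
      --disk "${PD_NAME}" --device-name "${PD_NAME}"
    mkdir -p "/mnt/pd/${PD_NAME}"
    mount "/dev/disk/by-id/google-${PD_NAME}" "/mnt/pd/${PD_NAME}"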
Otherwise KUBE_MASTER_IP and KUBE_MINION_IP_ADDRESSES may contain the
literal string 'external-ip':

    $ detect-master
    Using master: kubernetes-master (external IP: external-ip)
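A sketch of querying the numeric IP directly, so the placeholder never
leaks into those variables (the --format expression is illustrative):

    # Ask for just the NAT IP instead of parsing tabular output.
    gcloud compute instances describe kubernetes-master --zone "${ZONE}" \
      --format='value(networkInterfaces[0].accessConfigs[0].natIP)'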