k3s/cluster/addons
Kubernetes Submit Queue fb340a4695
Merge pull request #57824 from thockin/gcr-vanity
Automatic merge from submit-queue (batch tested with PRs 57824, 58806, 59410, 59280). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

2nd try at using a vanity GCR name

The 2nd commit here contains the changes relative to the reverted PR.  Please focus review attention on that.

This is the 2nd attempt.  The previous try (#57573) was reverted while we
figured out the regional mirrors (oops).
    
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest regional
mirror.  To publish an image, push to staging-k8s.gcr.io and it will be
synced to the regionals automatically (similar to today).  For now the
staging repo is an alias to gcr.io/google_containers (the legacy URL).
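
As a minimal sketch of that flow (the image name and tag are made up, and pushing to the staging repo requires publisher access):

```sh
# Publish: push to the staging alias; it gets synced to the regional repos.
docker tag example-addon:v1 staging-k8s.gcr.io/example-addon:v1   # hypothetical image
docker push staging-k8s.gcr.io/example-addon:v1

# Consume: pull through the vanity name, which serves from the closest region.
docker pull k8s.gcr.io/example-addon:v1
```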
    
When we move off of Google-owned projects (working on it), we just do a
one-time sync and change the Google-internal config, and nobody outside
should notice.
    
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it.  Nice and
visible, easy to keep track of.

xref https://github.com/kubernetes/release/issues/281

TL;DR:
  * The new `staging-k8s.gcr.io` is where we push images.  It is literally an alias to `gcr.io/google_containers` (the existing repo) and is hosted in the US.
  * The contents of `staging-k8s.gcr.io` are automatically synced to `{asia,eu,us}-k8s.gcr.io`.
  * The new `k8s.gcr.io` will be a read-only alias to whichever regional repo is closest to you.
  * In the future, images will be promoted from `staging` to regional "prod" more explicitly and auditably.

```release-note
Use "k8s.gcr.io" for pulling container images rather than "gcr.io/google_containers".  Images are already synced, so this should not impact anyone materially.
    
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
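
For documentation and manifests that still reference the legacy name, a hedged conversion sketch (GNU sed assumed; the path is illustrative):

```sh
# Find manifests and docs that still use the legacy repository name...
grep -rl 'gcr.io/google_containers' cluster/addons/

# ...and rewrite them to the vanity name.
grep -rl 'gcr.io/google_containers' cluster/addons/ \
  | xargs sed -i 's|gcr\.io/google_containers|k8s.gcr.io|g'
```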
2018-02-08 03:29:32 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| addon-manager | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| calico-policy-controller | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| cluster-loadbalancing | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| cluster-monitoring | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| dashboard | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| device-plugins/nvidia-gpu | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| dns | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| dns-horizontal-autoscaler | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| etcd-empty-dir-cleanup | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| fluentd-elasticsearch | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| fluentd-gcp | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| ip-masq-agent | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| kube-proxy | Add wildcard tolerations to kube-proxy. | 2017-11-29 12:36:58 -08:00 |
| metadata-agent | Fix RBAC permissions for metadata agent. | 2018-02-06 13:47:37 +01:00 |
| metadata-proxy | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| metrics-server | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| node-problem-detector | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| python-image | Switch to k8s.gcr.io vanity domain | 2018-02-07 21:14:19 -08:00 |
| rbac | gce: split legacy kubelet node role binding and bootstrapper role binding | 2017-12-13 21:56:18 -05:00 |
| storage-class | [addon/storage-class] update storageclass groupversion in storage-class | 2017-10-22 19:50:47 +08:00 |
| BUILD | Use the pkg_tar wrapper from kubernetes/repo-infra | 2018-01-18 17:10:16 -08:00 |
| README.md | Updated cluster/addons readme to match and point to docs | 2017-10-18 10:36:24 -04:00 |

README.md

Legacy Cluster add-ons

For more information on add-ons see the documentation.

Overview

Cluster add-ons are resources like Services and Deployments (with pods) that are shipped with the Kubernetes binaries and are considered an inherent part of Kubernetes clusters.

There are currently two classes of add-ons:

  • Add-ons that will be reconciled.
  • Add-ons that will be created if they don't exist.

More details can be found in addon-manager/README.md.
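
As a hedged illustration of the two classes, assuming the addonmanager.kubernetes.io/mode label convention used by the addon-manager (the addon name and directory below are illustrative):

```sh
# An addon manifest declares its class via the addonmanager.kubernetes.io/mode
# label: "Reconcile" is periodically reset to match the on-disk config, while
# "EnsureExists" is only created if it does not already exist.
cat <<'EOF' > /etc/kubernetes/addons/example-addon-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config            # hypothetical addon
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists   # or: Reconcile
data:
  example.setting: "enabled"
EOF
```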

Cooperating Horizontal / Vertical Auto-Scaling with "reconcile class addons"

"Reconcile" class addons will be periodically reconciled to the original state given by the initial config. In order to make Horizontal / Vertical Auto-scaling functional, the related fields in config should be left unset. More specifically, leave replicas in ReplicationController / Deployment / ReplicaSet unset for Horizontal Scaling, leave resources for container unset for Vertical Scaling. The periodic reconcile won't clobbered these fields, hence they could be managed by Horizontal / Vertical Auto-scaler.

Add-on naming

The suggested naming for most resources is `<basename>` (with no version number). Resources like Pod, ReplicationController and DaemonSet are exceptions: it is hard to update a Pod because many of its fields are immutable, and for ReplicationController and DaemonSet an in-place update may not trigger the underlying pods to be re-created. You will probably need to change their names during an update to trigger a complete deletion and creation, as illustrated below.
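
A brief, hedged illustration of the convention (object names are made up):

```sh
# Most addon resources keep a stable, unversioned name so they can be updated
# in place -- e.g. a Service simply named "example-addon".
#
# A DaemonSet (or ReplicationController) instead carries a version suffix in
# its name; bumping the suffix on upgrade makes the addon-manager delete the
# old object and create the new one, which forces the pods to be re-created.
kubectl -n kube-system get daemonsets
# NAME                 DESIRED   CURRENT   ...
# example-addon-v1.0   3         3         ...
#
# The next release ships a manifest named "example-addon-v1.1" rather than
# editing example-addon-v1.0 in place.
```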
