| Name | Last commit message | Date |
|---|---|---|
| addon-manager | bump addon version in makefile | 2018-01-17 12:53:27 -05:00 |
| calico-policy-controller | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| cluster-loadbalancing | Update defaultbackend image to 1.4 and deployment apiVersion to apps/v1 | 2018-01-05 11:09:54 +08:00 |
| cluster-monitoring | Fix errors in Heapster deployment for google sink | 2018-01-05 17:37:56 +01:00 |
| dashboard | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| device-plugins/nvidia-gpu | Update nvidia-gpu-device-plugin addon. | 2017-12-12 20:53:27 -08:00 |
| dns | Update kube-dns to 1.14.8 | 2018-01-05 15:00:40 -08:00 |
| dns-horizontal-autoscaler | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| etcd-empty-dir-cleanup | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| fluentd-elasticsearch | Merge pull request #58419 from coffeepac/apps-api-stable | 2018-01-18 05:07:30 -08:00 |
| fluentd-gcp | Bump fluentd-gcp version | 2018-01-12 10:16:13 -08:00 |
| ip-masq-agent | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| kube-proxy | Add wildcard tolerations to kube-proxy. | 2017-11-29 12:36:58 -08:00 |
| metadata-agent | Fix configuration of Metadata Agent daemon set | 2017-11-29 15:30:36 +01:00 |
| metadata-proxy | Bump metadata proxy and test versions | 2018-01-02 11:40:10 -08:00 |
| metrics-server | Merge pull request #58391 from kawych/ms_reduction | 2018-01-18 06:06:41 -08:00 |
| node-problem-detector | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| python-image | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| rbac | gce: split legacy kubelet node role binding and bootstrapper role binding | 2017-12-13 21:56:18 -05:00 |
| registry | Revert k8s.gcr.io vanity domain | 2017-12-22 14:36:16 -08:00 |
| storage-class | [addon/storage-class] update storageclass groupversion in storage-class | 2017-10-22 19:50:47 +08:00 |
| BUILD | Run hack/update-bazel.sh to generate BUILD files | 2017-08-02 18:33:25 -07:00 |
| README.md | Updated cluster/addons readme to match and point to docs | 2017-10-18 10:36:24 -04:00 |


# Legacy Cluster add-ons

For more information on add-ons see the documentation.

## Overview

Cluster add-ons are resources, such as Services and Deployments (with pods), that are shipped with the Kubernetes binaries and are considered an inherent part of Kubernetes clusters.

There are currently two classes of add-ons:

  • Add-ons that will be reconciled.
  • Add-ons that will be created if they don't exist.

More details can be found in addon-manager/README.md.
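The add-on manager distinguishes the two classes by the `addonmanager.kubernetes.io/mode` label on each resource. A minimal sketch (the resource name here is hypothetical, chosen for illustration):

```yaml
# A "Reconcile" class add-on: periodically reset to the state in the manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-addon            # hypothetical name, for illustration only
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile  # or EnsureExists for the
                                                # "create if missing" class
```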

## Cooperating Horizontal / Vertical Auto-Scaling with "reconcile class addons"

"Reconcile" class add-ons are periodically reconciled back to the original state given by the initial config. To keep Horizontal / Vertical Auto-scaling functional, the related fields in the config should be left unset. More specifically, leave `replicas` in a ReplicationController / Deployment / ReplicaSet unset for Horizontal Scaling, and leave `resources` for containers unset for Vertical Scaling. The periodic reconciliation won't clobber these fields, so they can be managed by the Horizontal / Vertical Auto-scalers.
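As a sketch, a hypothetical "Reconcile" class Deployment might look like the following; all names and the image are placeholders, and the point is which fields are deliberately omitted:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-addon                 # hypothetical name
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # 'replicas' is intentionally unset: a Horizontal Pod Autoscaler can
  # manage it, and periodic reconciliation will not reset it.
  selector:
    matchLabels:
      app: example-addon
  template:
    metadata:
      labels:
        app: example-addon
    spec:
      containers:
      - name: example
        image: registry.example.com/example:1.0   # placeholder image
        # 'resources' is intentionally unset: a Vertical Autoscaler can
        # manage requests/limits without being clobbered.
```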

## Add-on naming

The suggested naming for most resources is `<basename>` (with no version number). Resources like Pod, ReplicationController and DaemonSet are exceptions, however: a Pod is hard to update because many of its fields are immutable, and for a ReplicationController or DaemonSet an in-place update may not trigger the underlying pods to be re-created. For those, you probably need to change the name during an update to trigger a complete deletion and re-creation.
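For example (names hypothetical), such a resource can carry a version suffix in its name that is bumped on each update, so the old object is deleted and a fresh one, with new pods, is created:

```yaml
# Hypothetical DaemonSet named with a version suffix. On update, change the
# name to example-agent-v2 (and delete example-agent-v1) so the pods are
# fully re-created instead of relying on an in-place update.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent-v1
  namespace: kube-system
```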

## Analytics