Kubernetes Submit Queue 4f91113075
Merge pull request #54826 from mindprince/addon-manager
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Run nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached

- Instead of the old `Accelerators` feature, which added the `alpha.kubernetes.io/nvidia-gpu` resource, use the new `DevicePlugins` feature, which adds vendor-specific resources. (In the case of NVIDIA GPUs it adds the `nvidia.com/gpu` resource; see the first sketch after this list.)

- Add a node label to GCE nodes that have accelerators attached. This is the same label GKE attaches to node pools with accelerators attached. (For example, for an nvidia-tesla-p100 GPU the label would be `cloud.google.com/gke-accelerator=nvidia-tesla-p100`.) This will help us target accelerator-specific daemonsets and similar workloads to these nodes; see the second sketch after this list.

- Run nvidia-gpu device-plugin daemonset as an addon on GCE nodes that have nvidia GPUs attached.

- Some minor documentation improvements in the addon manager.
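
The first sketch below is illustrative only and not part of this PR: it assumes the device-plugin daemonset is already running on a GPU node, and the pod name and image are made up. It shows what consuming the new `nvidia.com/gpu` resource looks like.

```sh
# Illustrative: GPU nodes should now advertise nvidia.com/gpu
# (rather than alpha.kubernetes.io/nvidia-gpu) in their capacity.
kubectl describe nodes | grep 'nvidia.com/gpu'

# Request the resource from a pod; pod name and image are examples only.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvidia/cuda:9.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```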
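A second illustrative sketch, assuming a GCE node with an nvidia-tesla-p100 attached: the new node label makes it easy to find such nodes and to pin accelerator-specific workloads to them with a plain nodeSelector.

```sh
# Illustrative: list nodes carrying the GKE-style accelerator label.
kubectl get nodes -l cloud.google.com/gke-accelerator=nvidia-tesla-p100

# An accelerator-specific daemonset or pod can target those nodes by
# adding a matching selector to its spec, e.g.:
#   nodeSelector:
#     cloud.google.com/gke-accelerator: nvidia-tesla-p100
```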

**Release note**:
```release-note
GCE nodes with NVIDIA GPUs attached now expose `nvidia.com/gpu` as a resource instead of `alpha.kubernetes.io/nvidia-gpu`.
```

/sig cluster-lifecycle
/sig scheduling
/area hw-accelerators

https://github.com/kubernetes/features/issues/368
2017-11-13 14:46:55 -08:00
| Name | Last commit | Date |
| --- | --- | --- |
| addons | Merge pull request #54826 from mindprince/addon-manager | 2017-11-13 14:46:55 -08:00 |
| aws | | |
| centos | Merge pull request #54356 from zouyee/centos-1 | 2017-10-24 19:02:25 -07:00 |
| gce | Merge pull request #54826 from mindprince/addon-manager | 2017-11-13 14:46:55 -08:00 |
| images | Merge pull request #54250 from ixdy/debian-hyperkube-base-ssh | 2017-10-27 14:38:23 -07:00 |
| juju | Add extra-args configs to kubernetes-worker charm | 2017-11-08 12:49:37 -06:00 |
| kubemark | Merge pull request #53974 from shyamjvs/auto-calculate-kubemark-disk | 2017-10-16 07:35:32 -07:00 |
| kubernetes-anywhere | Fix log collection for kubeadm-gce tests | 2017-10-26 07:57:42 -04:00 |
| lib | | |
| libvirt-coreos | fix kubemark, juju, and libvirt-coreos README.md (from minion to node) | 2017-10-10 06:45:15 +00:00 |
| local | | |
| log-dump | Support collecting log for alternative container runtime in e2e test. | 2017-11-10 18:46:48 +00:00 |
| openstack-heat | | |
| photon-controller | fix typos: remove duplicated word in comments | 2017-09-16 14:38:10 +08:00 |
| pre-existing | | |
| saltbase | Bump Cluster Autoscaler version to 1.1.0-alpha1 | 2017-11-13 19:00:37 +01:00 |
| skeleton | Conditionally run detect-project in log-dump | 2017-09-21 13:41:30 +08:00 |
| vagrant | Remove all traces of federation | 2017-10-26 13:37:37 -07:00 |
| vsphere | | |
| windows | | |
| BUILD | Merge pull request #53034 from tallclair/gce-addons | 2017-10-31 09:12:55 -07:00 |
| OWNERS | | |
| README.md | | |
| clientbin.sh | | |
| common.sh | Merge pull request #54826 from mindprince/addon-manager | 2017-11-13 14:46:55 -08:00 |
| get-kube-binaries.sh | | |
| get-kube-local.sh | | |
| get-kube.sh | remove rackspace related code | 2017-09-22 18:06:50 +08:00 |
| kube-down.sh | | |
| kube-push.sh | | |
| kube-up.sh | | |
| kube-util.sh | Do not clobber KUBERNETES_PROVIDER - fix kubeadm/gce log collection | 2017-10-30 17:33:08 -04:00 |
| kubeadm.sh | | |
| kubectl.sh | | |
| options.md | | |
| restore-from-backup.sh | | |
| test-e2e.sh | | |
| test-network.sh | | |
| test-smoke.sh | | |
| update-storage-objects.sh | Change RBAC storage version to v1 for 1.9 | 2017-09-25 10:02:21 -04:00 |
| validate-cluster.sh | | |

README.md

Cluster Configuration

Deprecation Notice: This directory has entered maintenance mode and will not be accepting new providers. Please submit new automation deployments to kube-deploy. Deployments in this directory will continue to be maintained and supported at their current level of support.

The scripts and data in this directory automate creation and configuration of a Kubernetes cluster, including networking, DNS, nodes, and master components.

See the getting-started guides for examples of how to use the scripts.

The config-default.sh file under each cloud-provider directory (for example, gce/config-default.sh) contains a set of tweakable definitions/parameters for the cluster.
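A rough sketch of how those defaults are typically overridden, assuming the GCE provider; the variable names below (NUM_NODES, NODE_SIZE) are examples and differ between providers, so check the provider's config-default.sh for the authoritative list.

```sh
# Example only: pick a provider, override a few of its config-default.sh
# defaults via the environment, then bring the cluster up.
export KUBERNETES_PROVIDER=gce    # selects cluster/gce/config-default.sh
export NUM_NODES=3                # number of nodes to create
export NODE_SIZE=n1-standard-2    # machine type for the nodes
cluster/kube-up.sh
```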

The heavy lifting of configuring the VMs is done by SaltStack.
