k3s/cluster/saltbase/salt/top.sls

base:
  '*':
    - base
    - debian-auto-upgrades
    - salt-helpers
{% if grains.get('cloud') == 'aws' %}
    - ntp
{% endif %}
{% if pillar.get('e2e_storage_test_environment', '').lower() == 'true' %}
    - e2e
{% endif %}

  'roles:kubernetes-pool':
    - match: grain
    - docker
{% if pillar.get('network_provider', '').lower() == 'flannel' %}
    - flannel
{% endif %}
{% if pillar.get('policy_provider', '').lower() == 'calico' %}
    - cni
{% elif pillar.get('network_provider', '').lower() == 'kubenet' %}
    - cni
{% elif pillar.get('network_provider', '').lower() == 'cni' %}
    - cni
{% endif %}
    - helpers
    - kube-client-tools
    - kube-node-unpacker
    - kubelet
{% if pillar.get('network_provider', '').lower() == 'opencontrail' %}
    - opencontrail-networking-minion
{% else %}
    - kube-proxy
{% endif %}
{% if pillar.get('enable_node_logging', '').lower() == 'true' and pillar['logging_destination'] is defined %}
{% if pillar['logging_destination'] == 'elasticsearch' %}
    - fluentd-es
{% elif pillar['logging_destination'] == 'gcp' %}
    - fluentd-gcp
{% endif %}
{% endif %}
{% if pillar.get('enable_cluster_registry', '').lower() == 'true' %}
    - kube-registry-proxy
{% endif %}
{% if pillar['prepull_e2e_images'] is defined and pillar['prepull_e2e_images'].lower() == 'true' %}
    - e2e-image-puller
{% endif %}
    - logrotate
    - supervisor
{% if pillar.get('policy_provider', '').lower() == 'calico' %}
    - calico.node
{% endif %}

  'roles:kubernetes-master':
    - match: grain
    - generate-cert
    - etcd
{% if pillar.get('network_provider', '').lower() == 'flannel' %}
    - flannel-server
    - flannel
{% elif pillar.get('network_provider', '').lower() == 'kubenet' %}
    - cni
{% elif pillar.get('network_provider', '').lower() == 'cni' %}
    - cni
{% endif %}
{% if pillar.get('enable_l7_loadbalancing', '').lower() == 'glbc' %}
    - l7-gcp
{% endif %}
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
    - supervisor
    - kube-client-tools
    - kube-master-addons
    - kube-admission-controls
{% if pillar.get('enable_node_logging', '').lower() == 'true' and pillar['logging_destination'] is defined %}
{% if pillar['logging_destination'] == 'elasticsearch' %}
    - fluentd-es
{% elif pillar['logging_destination'] == 'gcp' %}
    - fluentd-gcp
{% endif %}
{% endif %}
{% if grains['cloud'] is defined and grains['cloud'] != 'vagrant' %}
    - logrotate
{% endif %}
    - kube-addons
{% if grains['cloud'] is defined and grains['cloud'] in ['vagrant', 'gce', 'aws', 'vsphere', 'photon-controller', 'openstack'] %}
    - docker
    - kubelet
{% endif %}
{% if pillar.get('network_provider', '').lower() == 'opencontrail' %}
    - opencontrail-networking-master
{% endif %}
{% if pillar.get('enable_node_autoscaler', '').lower() == 'true' %}
    - cluster-autoscaler
{% endif %}
{% if pillar.get('policy_provider', '').lower() == 'calico' %}
    - calico.master
{% endif %}
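
For context, the Jinja conditionals in this top file are driven entirely by pillar data supplied at provisioning time. A minimal pillar sketch (the values below are hypothetical examples, not taken from this file) that would pull the `cni`, `calico.node`, and `fluentd-gcp` states onto a node might look like:

```yaml
# Hypothetical pillar values; the keys are the ones this top.sls reads.
network_provider: cni        # matched by the elif branch that adds the cni state
policy_provider: calico      # adds cni and calico.node on pool, calico.master on master
enable_node_logging: "true"  # string "true", since the file compares .lower() == 'true'
logging_destination: gcp     # selects fluentd-gcp over fluentd-es
```

Note that the comparisons are string comparisons against lowercased pillar values, so booleans must be supplied as the strings `"true"`/`"false"` rather than YAML booleans.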