Update the existing salt.md, add the start of a Salt README tree

Starts to fix #6070
pull/6/head
Zach Loafman 2015-03-31 11:40:13 -07:00
parent e912d5204c
commit c292d2e8d6
6 changed files with 91 additions and 5 deletions

View File

@ -0,0 +1,17 @@
# SaltStack configuration
This is the root of the SaltStack configuration for Kubernetes. A high
level overview of the Kubernetes SaltStack configuration can be found [in the docs tree](../../docs/salt.md).
This SaltStack configuration currently applies to default
configurations for Debian-on-GCE, Fedora-on-Vagrant, Ubuntu-on-AWS and
Ubuntu-on-Azure. (That doesn't mean it can't be made to apply to an
arbitrary configuration, but those are only the in-tree OS/IaaS
combinations supported today.) As you peruse the configuration, these
are shorthanded as `gce`, `vagrant`, `aws`, `azure` in `grains.cloud`;
the documentation in this tree uses this same shorthand for convenience.
See more:
* [pillar](pillar/)
* [reactor](reactor/)
* [salt](salt/)
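
To make the `grains.cloud` shorthand concrete, here is a hedged sketch of how an `.sls` file might branch on it. The state and package names are hypothetical, not taken from this tree:

```
{% if grains.cloud == 'gce' %}
# Hypothetical GCE-only state; real states in this tree follow the same pattern.
example-gce-package:
  pkg.installed:
    - name: example-package
{% elif grains.cloud == 'vagrant' %}
example-vagrant-package:
  pkg.installed:
    - name: example-package
{% endif %}
```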

View File

@ -15,7 +15,7 @@
# limitations under the License.
# This script will set up the salt directory on the target server. It takes one
# argument that is a tarball with the pre-compiled kubernetes server binaries.
set -o errexit
set -o nounset

View File

@ -0,0 +1,19 @@
The
[SaltStack pillar](http://docs.saltstack.com/en/latest/topics/pillar/)
data is partially statically derived from the contents of this
directory. The bulk of the pillars are hard to perceive from browsing
this directory, though, because they are written into
[cluster-params.sls](cluster-params.sls) at cluster inception.
* [cluster-params.sls](cluster-params.sls) is generated entirely at cluster inception; see e.g. [configure-vm.sh](../../gce/configure-vm.sh#L226) and the sketch after this list
* [docker-images.sls](docker-images.sls) stores the Docker tags of the current Docker-wrapped server binaries, twiddled by the Salt install script
* [logging.sls](logging.sls) defines the cluster log level
* [mine.sls](mine.sls) defines the variables shared across machines in the Salt
mine. Its use is largely deprecated, and it is entirely unavailable on GCE,
which runs standalone.
* [privilege.sls](privilege.sls) defines whether privileged containers are allowed.
* [top.sls](top.sls) defines which pillars are active across the cluster.
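
As a hedged illustration of the inception-time generation described above, a generated `cluster-params.sls` might come out looking like the following. The keys and values here are placeholders, not the real generated set:

```
# Hypothetical generated pillar; the actual keys vary by provider and release.
instance_prefix: 'kubernetes'
node_instance_prefix: 'kubernetes-minion'
portal_net: '10.0.0.0/16'
enable_cluster_monitoring: 'true'
```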
## Future work
Document the current pillars across providers

View File

@ -0,0 +1,3 @@
[SaltStack reactor](http://docs.saltstack.com/en/latest/topics/reactor/) files, largely defining reactions to new nodes.
**Ignored for GCE, which runs standalone on each machine**
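
For orientation, reactions are wired up in the master configuration by mapping event tags to reactor SLS files. A minimal sketch, assuming stock Salt reactor syntax; the tag and file name here are illustrative:

```
# Illustrative /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion/*/start':       # fires when a new minion comes up
    - /srv/reactor/highstate-new.sls
```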

View File

@ -0,0 +1,34 @@
This directory forms the base of the main SaltStack configuration. The
place to start with any SaltStack configuration is
[top.sls](top.sls). However, unless you are particularly keen on
reading Jinja templates, the following tables break down what
configurations run on what providers. (NB: The [_states](_states/)
directory is a special directory included by Salt for `ensure` blocks,
and is only used for the [docker](docker/) config.)
Key: M = config applies to master, n = config applies to nodes
Config | GCE | Vagrant | AWS | Azure
----------------------------------------------------|-------|---------|-----|------
[cadvisor](cadvisor/) | M n | M n | M n | M n
[debian-auto-upgrades](debian-auto-upgrades/) | M n | M n | M n | M n
[docker](docker/) | M n | M n | M n | n
[etcd](etcd/) | M | M | M | M
[fluentd-es](fluentd-es/) (pillar conditional) | M n | M n | M n | M n
[fluentd-gcp](fluentd-gcp/) (pillar conditional) | M n | M n | M n | M n
[generate-cert](generate-cert/) | M | M | M | M
[kube-addons](kube-addons/) | M | M | M | M
[kube-apiserver](kube-apiserver/) | M | M | M | M
[kube-controller-manager](kube-controller-manager/) | M | M | M | M
[kube-proxy](kube-proxy/) | n | n | n | n
[kube-scheduler](kube-scheduler/) | M | M | M | M
[kubelet](kubelet/) | M n | M n | M n | n
[logrotate](logrotate/) | M n | n | M n | M n
[monit](monit/) | M n | M n | M n | M n
[nginx](nginx/) | M | M | M | M
[openvpn-client](openvpn-client/) | | | | n
[openvpn](openvpn/) | | | | M
[sdn](sdn/) (Vagrant only) | n | M n | n |
[static-routes](static-routes/) (vSphere only)       |       |         |     |
[base](base.sls) | M n | M n | M n | M n
[kube-client-tools](kube-client-tools.sls) | M | M | M | M
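
For readers who do want the Jinja, the pattern in [top.sls](top.sls) boils down to matching states against grains. A hedged sketch, with illustrative role names and a trimmed state list:

```
# Illustrative only; the real top.sls also branches on grains.cloud.
base:
  '*':
    - base
  'roles:kubernetes-master':
    - match: grain
    - generate-cert
    - etcd
    - kube-apiserver
  'roles:kubernetes-pool':
    - match: grain
    - docker
    - kubelet
    - kube-proxy
```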

View File

@ -6,11 +6,11 @@ The Salt scripts are shared across multiple hosting providers, so it's important
## Salt cluster setup
The **salt-master** service runs on the kubernetes-master node [(except on the default GCE setup)](#standalone-salt-configuration-on-gce).
The **salt-minion** service runs on the kubernetes-master node and each kubernetes-minion node in the cluster.
Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).
```
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
@ -20,8 +20,16 @@ The salt-master is contacted by each salt-minion and depending upon the machine
If you are running the Vagrant based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
## Standalone Salt Configuration on GCE
On GCE, the master and nodes are all configured as [standalone minions](http://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html). The configuration for each VM is derived from the VM's [instance metadata](https://cloud.google.com/compute/docs/metadata) and then stored in Salt grains (`/etc/salt/minion.d/grains.conf`) and pillars (`/srv/salt-overlay/pillar/cluster-params.sls`) that local Salt uses to enforce state.
All remaining sections that refer to master/minion setups should be ignored for GCE. One consequence of the GCE setup is that the Salt mine doesn't exist: there is no sharing of configuration amongst nodes.
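
What makes a minion standalone is pointing the Salt file client at the local filesystem instead of a master. A minimal sketch, assuming stock Salt options; the actual GCE configuration sets more than this:

```
# Illustrative /etc/salt/minion.d/local.conf for a masterless minion
file_client: local
file_roots:
  base:
    - /srv/salt
```

States are then applied locally, e.g. with `salt-call --local state.highstate`.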
## Salt security
*(Not applicable on default GCE setup.)*
Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt-master port isn't externally accessible and you trust everyone on your network.)
```
@ -29,6 +37,7 @@ Security is not enabled on the salt-master, and the salt-master is configured to
open_mode: True
auto_accept: True
```
## Salt minion configuration
Each minion in the salt cluster has an associated configuration that instructs the salt-master how to provision the required resources on the machine.
@ -53,8 +62,8 @@ Key | Value
`api_servers` | (Optional) The IP address / host name where a kubelet can get read-only access to kube-apiserver
`cbr-cidr` | (Optional) The minion IP address range used for the docker container bridge.
`cloud` | (Optional) Which IaaS platform is used to host kubernetes: *gce*, *azure*, *aws*, *vagrant*
`etcd_servers` | (Optional) Comma-delimited list of IP addresses the kube-apiserver and kubelet use to reach etcd. Uses the IP of the first machine in the kubernetes_master role, or 127.0.0.1 on GCE.
`hostnamef` | (Optional) The full host name of the machine, i.e. `hostname -f` (only used on Azure)
`node_ip` | (Optional) The IP address to use to address this node
`minion_ip` | (Optional) Mapped to the kubelet `hostname_override`; K8S TODO: change this name
`network_mode` | (Optional) Networking model to use among nodes: *openvswitch*
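
Putting a few of these keys together, a hedged sketch of what a node's `/etc/salt/minion.d/grains.conf` might contain (all values are placeholders):

```
# Placeholder values for illustration only.
grains:
  cloud: aws
  cbr-cidr: 10.244.1.0/24
  etcd_servers: 10.244.0.2
  node_ip: 10.244.0.5
```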
@ -83,3 +92,7 @@ In addition, a cluster may be running a Debian based operating system or Red Hat
Per-pod IP configuration is provider-specific, so when making networking changes, it's important to sandbox these, as not all providers use the same mechanisms (iptables, openvswitch, etc.)
We should define a grains.conf key that captures more specifically what network configuration environment is being used to avoid future confusion across providers.
## Further reading
The [cluster/saltbase](../cluster/saltbase) tree has more details on the current SaltStack configuration.