- [Inspect pods and services](#inspect-pods-and-services)
- [Try Examples](#try-examples)
- [Running the Conformance Test](#running-the-conformance-test)
- [Networking](#networking)
- [Getting Help](#getting-help)
## Designing and Preparing
### Learning
1. You should be familiar with using Kubernetes already. We suggest you set
up a temporary cluster by following one of the other Getting Started Guides.
This will help you become familiar with the CLI ([kubectl](../kubectl.md)) and concepts ([pods](../pods.md), [services](../services.md), etc.) first.
1. You should have `kubectl` installed on your desktop. This will happen as a side
effect of completing one of the other Getting Started Guides.
### Cloud Provider
Kubernetes has the concept of a Cloud Provider, a module that provides
an interface for managing TCP Load Balancers, Nodes (Instances) and Networking Routes.
The interface is defined in `pkg/cloudprovider/cloud.go`. It is possible to
create a custom cluster without implementing a cloud provider (for example if using
bare-metal), and not all parts of the interface need to be implemented, depending
on how flags are set on various components.
### Nodes
- You can use virtual or physical machines.
- While you can build a cluster with 1 machine, in order to run all the examples and tests you
need at least 4 nodes.
- Many of the Getting Started Guides make a distinction between the master node and regular nodes. This
is not strictly necessary.
- Nodes will need to run some version of Linux with the x86_64 architecture. It may be possible
to run on other OSes and Architectures, but this guide does not try to assist with that.
- Apiserver and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with 10s of nodes.
Larger or more active clusters may benefit from more cores.
- Other nodes can have any reasonable amount of memory and any number of cores. They need not
have identical configurations.
### Network
Kubernetes has a distinctive [networking model](../networking.md).
Kubernetes allocates an IP address to each pod and creates a virtual ethernet device for each
Pod. When creating a cluster, you need to allocate a block of IPs for Kubernetes to use
as Pod IPs. The normal approach is to allocate a different block to each node in the cluster
as the node is added. A process in one pod should be able to communicate with another pod
using the IP of the second pod. This connectivity can be accomplished in two ways:
- Configure the network to route Pod IPs
  - Harder to set up from scratch.
  - The Google Compute Engine ([GCE](gce.md)) and [AWS](aws.md) guides use this approach.
  - Need to make the Pod IPs routable by programming routers, switches, etc.
  - Can be configured external to Kubernetes, or can be implemented in the "Routes" interface of a Cloud Provider module.
- Create an Overlay network
  - Easier to set up.
  - Traffic is encapsulated, so per-pod IPs are routable.
  - Examples:
    - [Flannel](https://github.com/coreos/flannel)
    - [Weave](http://weave.works/)
    - [Open vSwitch (OVS)](http://openvswitch.org/)
  - Does not require the "Routes" portion of a Cloud Provider module.
You need to select an address range for the Pod IPs.
- Various approaches:
  - GCE: each project has its own `10.0.0.0/8`. Carve off a `/16` from that; there is room for several clusters in it.
  - AWS: use one VPC for the whole organization and carve off a chunk for each cluster, or use a different VPC for different clusters.
  - IPv6 is not supported yet.
- Allocate one CIDR for Pod IPs for each node, or a large CIDR from which
smaller CIDRs are automatically allocated to each node (if nodes are dynamically added).
  - You need max-pods-per-node * max-number-of-nodes-expected IPs in total. A `/24` per node supports 254 pods per machine and is a common choice. If IPs are scarce, a `/27` may be sufficient (30 pods per machine).
  - e.g. use `10.240.0.0/16` as the range for the cluster, with up to 256 nodes using `10.240.0.0/24` through `10.240.255.0/24`, respectively (see the worked sketch after this list).
  - Need to make these routable, or connect them with an overlay.
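As a worked sketch of that example (the cluster-wide range and the variable names below are only illustrative; compare `$NODE_X_POD_CIDR` later in this guide), each node gets one `/24` slice of the `/16`:
```
# Cluster-wide Pod IP range and the per-node slices carved out of it.
CLUSTER_POD_CIDR=10.240.0.0/16   # room for 256 nodes x 254 pods each
NODE_0_POD_CIDR=10.240.0.0/24    # pods on node 0 use 10.240.0.1 - 10.240.0.254
NODE_1_POD_CIDR=10.240.1.0/24    # pods on node 1 use 10.240.1.1 - 10.240.1.254
NODE_2_POD_CIDR=10.240.2.0/24    # ... and so on, up to 10.240.255.0/24
```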
Kubernetes also allocates an IP to each [service](../services.md). However, service IPs do not necessarily
need to be routable. The kube-proxy takes care of translating Service IPs to Pod IPs before traffic leaves
the node. You do need to allocate a block of IPs for services. Call this `SERVICE_CLUSTER_IP_RANGE`.
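For example, a `/16` leaves room for tens of thousands of services (the exact value here is only an illustration; it must not overlap the Pod IP range or your node and master IPs):
```
# Illustrative value; a block of virtual IPs reserved for services,
# distinct from the Pod CIDR chosen above.
SERVICE_CLUSTER_IP_RANGE=10.0.0.0/16
```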
For etcd, you have several choices:
- Use images hosted on Google Container Registry (GCR), such as `gcr.io/google_containers/etcd:2.0.12`
- Use images hosted on [Docker Hub](https://registry.hub.docker.com/u/coreos/etcd/) or [Quay.io](https://quay.io/repository/coreos/etcd)
- Use the etcd binary included in your OS distro.
- Build your own image
  - You can do: `cd kubernetes/cluster/images/etcd; make`
We recommend that you use the etcd version which is provided in the Kubernetes binary distribution. The Kubernetes binaries in the release
were tested extensively with this version of etcd and not with any other version.
The recommended version number can also be found as the value of `ETCD_VERSION` in `kubernetes/cluster/images/etcd/Makefile`.
The remainder of the document assumes that the image identifiers have been chosen and stored in corresponding env vars. Examples (replace with latest tags and appropriate registry):
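A minimal sketch, assuming you chose the GCR-hosted etcd image mentioned above (the variable name and tag are only illustrative):
```
# Substitute the tag recommended by ETCD_VERSION in cluster/images/etcd/Makefile.
ETCD_IMAGE=gcr.io/google_containers/etcd:2.0.12
```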
If following the HTTPS approach, you will need to prepare certs and credentials.
#### Preparing Certs
You need to prepare several certs:
- The master needs a cert to act as an HTTPS server.
- The kubelets optionally need certs to identify themselves as clients of the master, and
to serve their own API over HTTPS.
Unless you plan to have a real CA generate your certs, you will need to generate a root cert and use that to sign the master, kubelet, and kubectl certs. A rough sketch follows the pointers below.
- see the function `create-certs` in `cluster/gce/util.sh`
- see also `cluster/saltbase/salt/generate-cert/make-ca-cert.sh`
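If you are not reusing those scripts, a minimal self-signed setup with `openssl` might look roughly like the following (file names, subject fields, and lifetimes are assumptions; add subjectAltNames for all of the master's IPs and DNS names in a real setup):
```
# Root CA (hypothetical file names throughout).
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 3650 -subj "/CN=kubernetes-ca" -out ca.crt

# Master (apiserver) serving cert, signed by the CA above.
openssl genrsa -out master.key 2048
openssl req -new -key master.key -subj "/CN=${MASTER_IP}" -out master.csr
openssl x509 -req -in master.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 3650 -out master.crt

# Optional kubelet and kubectl client certs follow the same CSR-and-sign pattern.
```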
Put the kubeconfig(s) on every node. The examples later in this
guide assume that there are kubeconfigs in `/var/lib/kube-proxy/kubeconfig` and
`/var/lib/kubelet/kubeconfig`.
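One way to produce such a kubeconfig is with `kubectl config` (a sketch assuming the HTTPS approach; the cluster, user, and context names and the cert file names are arbitrary):
```
# Writes to the file named by --kubeconfig rather than your personal ~/.kube/config.
kubectl config set-cluster local --server=https://${MASTER_IP} \
  --certificate-authority=ca.crt --embed-certs=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-credentials kubelet --client-certificate=kubelet.crt \
  --client-key=kubelet.key --embed-certs=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config set-context local --cluster=local --user=kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig
kubectl config use-context local --kubeconfig=/var/lib/kubelet/kubeconfig
```
Repeat with the kube-proxy credentials to produce `/var/lib/kube-proxy/kubeconfig`.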
## Configuring and Installing Base Software on Nodes
This section discusses how to configure machines to be Kubernetes nodes.
You should run three daemons on every node:
- docker or rkt
- kubelet
- kube-proxy
You will also need to do assorted other configuration on top of a
base OS install.
Tip: One possible starting point is to set up a cluster using an existing Getting
Started Guide. After getting a cluster running, you can then copy the init.d scripts or systemd unit files from that
cluster, and then modify them for use on your custom cluster.
### Docker
The minimum required Docker version will vary as the kubelet version changes. The newest stable release is a good choice. Kubelet will log a warning and refuse to start pods if the version is too old, so pick a version and try it.
If you previously had Docker installed on a node without setting Kubernetes-specific
options, you may have a Docker-created bridge and iptables rules. You may want to remove these
as follows before proceeding to configure Docker for Kubernetes.
```
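# Flush Docker's NAT rules, then take down and delete the default docker0 bridge.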
iptables -t nat -F
ifconfig docker0 down
brctl delbr docker0
```
The way you configure docker will depend on whether you have chosen the routable-Pod-IPs or overlay-network approach for your network.
Some docker options you will want to think about (a sketch of how these might be collected on a node follows the list):
- create your own bridge for the per-node CIDR range, call it `cbr0`, and set the `--bridge=cbr0` option on docker, or let docker create its own bridge and point `--bip` at the node's Pod CIDR (see below).
- `--iptables=false` so docker will not manipulate iptables for host-ports (too coarse on older docker versions, may be fixed in newer versions) and so that kube-proxy can manage iptables instead of docker.
- `--ip-masq=false`
  - if you have set up Pod IPs to be routable, then you want this false; otherwise, docker will rewrite the Pod IP source-address to a node IP.
  - some environments (e.g. GCE) still need you to masquerade out-bound traffic when it leaves the cloud environment. This is very environment specific.
  - if you are using an overlay network, consult those instructions.
- `--bip=`
  - should be the CIDR range for pods for that specific node.
- `--mtu=`
  - may be required when using Flannel, because of the extra packet size due to UDP encapsulation.
- `--insecure-registry $CLUSTER_SUBNET`
  - to connect to a private registry, if you set one up, without using SSL.
You may want to increase the number of open files for docker:
- `DOCKER_NOFILE=1000000`
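As a rough sketch, on a Debian-style node these settings might be collected in `/etc/default/docker` (the path, the example Pod CIDR, and the use of a pre-created `cbr0` bridge are assumptions; systemd-based distros use a drop-in unit instead):
```
# Hypothetical /etc/default/docker for a node whose Pod CIDR is 10.240.3.0/24
# and whose cbr0 bridge has already been created (see Networking below).
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
DOCKER_NOFILE=1000000
```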
Ensure docker is working correctly on your system before proceeding with the rest of the
installation by following the examples given in the Docker documentation.
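For example, a quick sanity check (any trivial container will do):
```
# Pulls a small image and runs a one-off container that is removed on exit.
docker run --rm busybox echo "docker is working"
```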
### rkt
[rkt](https://github.com/coreos/rkt) is an alternative to Docker. You only need to install one of Docker or rkt.
*TODO*: how to install and configure rkt.
### kubelet
All nodes should run kubelet. See [Selecting Binaries](#selecting-binaries).
Arguments to consider (a sketch of a kubelet invocation follows this list):
- If following the HTTPS security approach:
  - `--api-servers=https://$MASTER_IP`
  - `--kubeconfig=/var/lib/kubelet/kubeconfig`
- Otherwise, if taking the firewall-based security approach:
  - `--api-servers=http://$MASTER_IP`
- `--config=/etc/kubernetes/manifests`
- `--cluster-dns=` to the address of the DNS server you will set up (see [Starting Addons](#starting-addons)).
- `--cluster-domain=` to the DNS domain prefix to use for cluster DNS addresses.
- `--docker-root=`
- `--root-dir=`
- `--configure-cbr0=` (described in [Networking](#networking) below)
- `--register-node` (described in the [Node](../node.md) documentation).
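A minimal sketch of the resulting invocation under the HTTPS approach (the DNS server address `10.0.0.10` and domain `cluster.local` are assumptions, and flag names can vary between Kubernetes releases):
```
# Normally wrapped in an init.d script or systemd unit rather than run by hand.
kubelet \
  --api-servers=https://${MASTER_IP} \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/etc/kubernetes/manifests \
  --cluster-dns=10.0.0.10 \
  --cluster-domain=cluster.local \
  --configure-cbr0=true
```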
### kube-proxy
All nodes should run kube-proxy. (Running kube-proxy on a "master" node is not
strictly required, but being consistent is easier.) Obtain a binary as described for
kubelet.
Arguments to consider (a sketch follows this list):
- If following the HTTPS security approach:
  - `--api-servers=https://$MASTER_IP`
  - `--kubeconfig=/var/lib/kube-proxy/kubeconfig`
- Otherwise, if taking the firewall-based security approach:
  - `--api-servers=http://$MASTER_IP`
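A corresponding sketch for kube-proxy under the HTTPS approach (again, flag names can vary between releases):
```
# Normally managed by an init.d script or systemd unit on every node.
kube-proxy \
  --api-servers=https://${MASTER_IP} \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig
```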
### Networking
Each node needs to be allocated its own CIDR range for pod networking.
Call this $NODE_X_POD_CIDR.
A bridge called `cbr0` needs to be created on each node. The bridge is explained
further in the [networking documentation](../networking.md).
- Recommended, automatic approach:
  1. Set the `--configure-cbr0=true` option in the kubelet init script and restart the kubelet service. Kubelet will configure cbr0 automatically.
     It will wait to do this until the node controller has set `Node.Spec.PodCIDR`. Since you have not set up the apiserver and node controller
     yet, the bridge will not be set up immediately.
- Alternate, manual approach (a consolidated sketch follows this list):
  1. Set `--configure-cbr0=false` on kubelet and restart.
  1. Create a bridge
     - e.g. `brctl addbr cbr0`.
  1. Set appropriate MTU
     - e.g. `ip link set dev cbr0 mtu 1460` (the right value depends on your network environment)
  1. Add the node's Pod CIDR to the bridge (docker will go on the other side of the bridge).
     - e.g. `ip addr add $NODE_X_POD_CIDR dev cbr0`
  1. Turn it on
     - e.g. `ip link set dev cbr0 up`
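Put together, the manual steps for one node might look like this (the Pod CIDR `10.240.3.0/24` and the MTU are illustrative; here the bridge is given the first address of the node's range):
```
# Manual cbr0 setup for a hypothetical node allocated 10.240.3.0/24.
brctl addbr cbr0
ip link set dev cbr0 mtu 1460
ip addr add 10.240.3.1/24 dev cbr0
ip link set dev cbr0 up
```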
If you have turned off docker ip masquerading to allow pods to talk to each
other, then you may need to do masquerading just for destination IPs outside