If you are using monit, you should also install the monit daemon (```apt-get install monit```) and the accompanying monit configs, such as [high-availability/monit-kubelet](high-availability/monit-kubelet).
You can also validate that the cluster is working by running ```etcdctl set foo bar``` on one node, and ```etcdctl get foo```
on a different node.
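For example, a minimal check might look like the following (assuming the etcd v2 ```etcdctl``` client is installed on each master):

```
# On the first master, write a test key:
etcdctl set foo bar

# On a different master, the same key should be readable:
etcdctl get foo   # prints: bar

# Clean up the test key:
etcdctl rm foo
```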
### Even more reliable storage
Of course, if you are interested in increased data reliability, there are further options that make the location where etcd
stores its data even more reliable than regular disks (belts *and* suspenders, ftw!).
If you use a cloud provider, it usually offers this
for you, for example [Persistent Disk](https://cloud.google.com/compute/docs/disks/persistent-disks) on the Google Cloud Platform. This is
block-device persistent storage that can be mounted onto your virtual machine. Other cloud providers offer similar solutions.
If you are running on physical machines, you can also use network attached redundant storage using an iSCSI or NFS interface.
Alternatively, you can run a clustered file system like Gluster or Ceph. Finally, you can also run a RAID array on each physical machine.
If you choose to use one of these options, make sure that the storage is mounted on each machine, regardless of how you implement it. If the storage is shared between the three masters in your cluster, you should create a different directory on the storage
for each node. Throughout these instructions, we assume that this storage is mounted on each machine at ```/var/etcd/data```.
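As a rough sketch, preparing a dedicated disk for the etcd data on a Linux machine might look like the following. The device path ```/dev/sdb``` and the NFS export are assumptions; substitute whatever your cloud provider or storage system actually exposes:

```
# Format the dedicated device (this destroys any existing data on it):
mkfs.ext4 /dev/sdb

# Mount it where these instructions expect the etcd data to live:
mkdir -p /var/etcd/data
mount /dev/sdb /var/etcd/data

# For storage shared between the masters (NFS, Gluster, etc.), mount the share
# elsewhere and bind-mount a per-node subdirectory into place instead, e.g.:
#   mount -t nfs storage-server:/exports/etcd /mnt/etcd
#   mkdir -p /mnt/etcd/$(hostname)
#   mount --bind /mnt/etcd/$(hostname) /var/etcd/data
```

Remember to add the mounts to ```/etc/fstab``` (or an equivalent) so that they survive a reboot.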
## Replicated API Servers
Once you have replicated etcd set up correctly, the next step is to install the apiserver, again using the kubelet to run it.
### Installing configuration files
First you need to create the initial log file, so that Docker mounts a file instead of a directory:
```
touch /var/log/kube-apiserver.log
```
Next, you need to create a ```/srv/kubernetes/``` directory on each node. This directory should contain:
* basic_auth.csv - basic auth user and password
* ca.crt - Certificate Authority cert
* known_tokens.csv - tokens that entities (e.g. the kubelet) can use to talk to the apiserver
* kubecfg.crt - Client certificate, public key
* kubecfg.key - Client certificate, private key
* server.cert - Server certificate, public key
* server.key - Server certificate, private key
The easiest way to create this directory may be to copy it from the master node of an existing, working cluster; alternatively, you can generate these files manually.
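If you do generate them by hand, the following is a minimal, purely illustrative ```openssl``` sketch. The subject names, key sizes, and validity periods are assumptions to adapt, and the certificates here carry no IP subjectAltNames (see the load balancing note below):

```
cd /srv/kubernetes

# Certificate Authority:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 3650 -subj "/CN=kubernetes-ca" -out ca.crt

# Server key and certificate, signed by the CA:
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -out server.cert

# Client certificate for kubecfg:
openssl genrsa -out kubecfg.key 2048
openssl req -new -key kubecfg.key -subj "/CN=kubecfg" -out kubecfg.csr
openssl x509 -req -in kubecfg.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 3650 -out kubecfg.crt

# basic_auth.csv and known_tokens.csv are plain CSV credential files,
# e.g. one line per user of the form password,user,uid / token,user,uid.
```

To embed IP addresses (such as the load balancer address discussed below) in the server certificate, pass ```-extfile``` with a ```subjectAltName``` extension to the ```openssl x509 -req``` command.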
Once these files exist, copy the [kube-apiserver.yaml](high-availability/kube-apiserver.yaml) into ```/etc/kubernetes/manifests/``` on each master node.
The kubelet monitors this directory, and will automatically create an instance of the ```kube-apiserver``` container using the pod definition specified
in the file.
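For example, assuming your masters are reachable over SSH as ```master-1```, ```master-2``` and ```master-3``` (hypothetical names), you could push the manifest from your workstation like this:

```
for node in master-1 master-2 master-3; do
  scp high-availability/kube-apiserver.yaml ${node}:/etc/kubernetes/manifests/
done
```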
### Load balancing
At this point, you should have 3 apiservers all working correctly. If you set up a network load balancer, you should
be able to access your cluster via that load balancer, and see traffic balancing between the apiserver instances. Setting
up a load balancer will depend on the specifics of your platform; for example, instructions for the Google Cloud
Platform can be found [here](https://cloud.google.com/compute/docs/load-balancing/).
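As an illustrative sketch only (the linked documentation is authoritative), a basic TCP load balancer on the Google Cloud Platform might be created roughly like this; the pool name, region, zone, and instance names are assumptions:

```
# Group the three master instances into a target pool:
gcloud compute target-pools create kube-apiserver-pool --region us-central1
gcloud compute target-pools add-instances kube-apiserver-pool \
  --instances master-1,master-2,master-3 --instances-zone us-central1-a

# Forward apiserver (HTTPS) traffic to the pool:
gcloud compute forwarding-rules create kube-apiserver-rule \
  --region us-central1 --ports 443 --target-pool kube-apiserver-pool
```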
Note that if you are using authentication, you may need to regenerate your certificates to include the IP address of the load balancer
in addition to the IP addresses of the individual nodes.
For pods that you deploy into the cluster, the ```kubernetes``` service/DNS name should automatically provide a load-balanced endpoint for the master.
For external users of the API (e.g. the ```kubectl``` command line interface, continuous build pipelines, or other clients), you will want to configure
them to talk to the external load balancer's IP address.
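For ```kubectl``` in particular, that typically means pointing a kubeconfig cluster entry at the balancer. A sketch, where the address ```10.0.0.1```, the entry names, and the credential paths are placeholders:

```
kubectl config set-cluster ha-cluster --server=https://10.0.0.1 \
  --certificate-authority=/srv/kubernetes/ca.crt
kubectl config set-credentials admin \
  --client-certificate=/srv/kubernetes/kubecfg.crt \
  --client-key=/srv/kubernetes/kubecfg.key
kubectl config set-context ha-cluster --cluster=ha-cluster --user=admin
kubectl config use-context ha-cluster
```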
## Master elected components
So far we have set up state storage and the API server, but we haven't run anything that actually modifies
cluster state, such as the controller manager and scheduler. To do this reliably, we want only one actor modifying state at a time, but we also want replicated
instances of these actors in case a machine dies. To achieve this, we are going to use a lease-lock in etcd to perform
master election. On each of the three apiserver nodes, we run a small utility application named ```podmaster```. Its job is to implement a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
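To make the mechanism concrete, here is a loose shell approximation of that election loop. It is only a sketch: the real podmaster talks to etcd's compare-and-swap API directly, and the key name, TTL, and manifest paths below (which follow the file layout described in the next steps) are assumptions:

```
KEY=/registry/podmaster/scheduler
WHOAMI=$(hostname)

while true; do
  # Renew the lease if we already hold it (an atomic compare-and-swap),
  # otherwise try to acquire it (mk only succeeds if the key does not exist).
  if etcdctl set ${KEY} ${WHOAMI} --ttl 30 --swap-with-value ${WHOAMI} >/dev/null 2>&1 || \
     etcdctl mk ${KEY} ${WHOAMI} --ttl 30 >/dev/null 2>&1; then
    # We hold the lease: make sure the scheduler manifest is in place,
    # so the kubelet starts (or keeps running) the scheduler on this node.
    cp -n /srv/kubernetes/kube-scheduler.yaml /etc/kubernetes/manifests/ 2>/dev/null
  else
    # Someone else holds the lease: make sure the scheduler is not running here.
    rm -f /etc/kubernetes/manifests/kube-scheduler.yaml
  fi
  sleep 10
done
```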
In the future, we expect to integrate this lease-locking more tightly into the scheduler and controller-manager binaries, as described in the [high availability design proposal](../proposals/high-availability.md).
For now, set up the scheduler and controller manager pod descriptions on each node by copying [kube-scheduler.yaml](high-availability/kube-scheduler.yaml) and [kube-controller-manager.yaml](high-availability/kube-controller-manager.yaml) into the ```/srv/kubernetes/``` directory.
Now that the configuration files are in place, copy the [podmaster.yaml](high-availability/podmaster.yaml) config file into ```/etc/kubernetes/manifests/``` on each master node.
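Continuing the earlier example with the hypothetical host names ```master-1```, ```master-2``` and ```master-3```, distributing these files might look like:

```
for node in master-1 master-2 master-3; do
  scp high-availability/kube-scheduler.yaml high-availability/kube-controller-manager.yaml \
    ${node}:/srv/kubernetes/
  scp high-availability/podmaster.yaml ${node}:/etc/kubernetes/manifests/
done
```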
As before, the kubelet on the node monitors this directory, and will start an instance of the podmaster using the pod specification provided in ```podmaster.yaml```.
It implements the major concepts of the podmaster HA implementation (with a few minor reductions for simplicity), alongside a quick smoke test using k8petstore.