This document describes how to deploy Kubernetes with Calico networking on _bare metal_ Ubuntu. For more information on Project Calico, visit [projectcalico.org](http://projectcalico.org) and the [calico-docker repository](https://github.com/projectcalico/calico-docker).
To install Calico on an existing Kubernetes cluster, or for more information on deploying Calico with Kubernetes in a number of other environments, take a look at our supported [deployment guides](https://github.com/projectcalico/calico-docker/tree/master/docs/kubernetes).
This guide will set up a simple Kubernetes cluster with a single Kubernetes master and two Kubernetes nodes. We will start the following processes with systemd:
You should see the `apiserver`, `controller-manager` and `scheduler` containers running. It may take some time to download the Docker images; you can check whether the containers are running using `docker ps`.
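For example, the following filters the `docker ps` output down to the three containers (assuming the Docker daemon on your host requires `sudo`):

```shell
# Re-run until all three containers appear; the image pulls can take a few minutes.
sudo docker ps | grep -E 'apiserver|controller-manager|scheduler'
```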
### Install Calico on Master
We need to install Calico on the master so that the master can route to the pods in our Kubernetes cluster.
First, start the etcd instance used by Calico. We'll run this as a static Kubernetes pod. Before we install it, we'll need to edit the manifest. Open `calico-kubernetes-master/config/master/calico-etcd.manifest` and replace all instances of `<PRIVATE_IPV4>` with your master's IP address. Then, copy the file to the `/etc/kubernetes/manifests` directory.
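As a sketch, the substitution and copy might look like the following, where `10.0.0.1` stands in for your master's actual IP address:

```shell
# Replace the placeholder with the master's private IPv4 address (example value shown).
sed -i 's/<PRIVATE_IPV4>/10.0.0.1/g' calico-kubernetes-master/config/master/calico-etcd.manifest

# The kubelet watches this directory and will start the pod automatically.
sudo cp calico-kubernetes-master/config/master/calico-etcd.manifest /etc/kubernetes/manifests/
```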
> Note: For simplicity, in this demonstration we are using a single instance of etcd. In a production deployment, a distributed etcd cluster is recommended for redundancy.
Most Kubernetes deployments will require the DNS addon for service discovery. For more on DNS service discovery, check [here](../../cluster/addons/dns/).
The config repository for this guide comes with manifest files to start the DNS addon. To install DNS, do the following on your Master node.
Replace `<MASTER_IP>` in `calico-kubernetes-master/config/master/dns/skydns-rc.yaml` with your Master's IP address. Then, create `skydns-rc.yaml` and `skydns-svc.yaml` using `kubectl create -f <FILE>`.
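Concretely, that might look like the sketch below, assuming both manifests live in the same `dns` directory and `10.0.0.1` stands in for your master's IP:

```shell
# Substitute the master's IP address into the replication controller manifest.
sed -i 's/<MASTER_IP>/10.0.0.1/g' calico-kubernetes-master/config/master/dns/skydns-rc.yaml

# Create the DNS replication controller and service.
kubectl create -f calico-kubernetes-master/config/master/dns/skydns-rc.yaml
kubectl create -f calico-kubernetes-master/config/master/dns/skydns-svc.yaml
```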
At this point, you have a fully functioning cluster running on Kubernetes with a master and two nodes networked with Calico. You can now follow any of the [standard documentation](../../examples/) to set up other services on your cluster.
Because containers in this guide have private `192.168.0.0/16` IPs, you will need NAT to allow connectivity between containers and the internet. However, in a production data center deployment, NAT is not always necessary, since Calico can peer with the data center's border routers over BGP.
The simplest method for enabling connectivity from containers to the internet is to use an `iptables` masquerade rule. This is the standard mechanism recommended in the [Kubernetes GCE environment](../../docs/admin/networking.md#google-compute-engine-gce).
We need to NAT traffic whose destination is outside of the cluster. Cluster-internal traffic includes traffic to the Kubernetes master and nodes, as well as traffic within the container IP subnet. A suitable masquerade chain follows the pattern sketched below, replacing the following variables:
- `CONTAINER_SUBNET`: The cluster-wide subnet from which container IPs are chosen. Run `ETCD_AUTHORITY=127.0.0.1:6666 calicoctl pool show` on the Kubernetes master to find your configured container subnet.
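A minimal sketch of such a chain is below. The chain name `KUBE-OUTBOUND-NAT` and the `<HOST_SUBNET>` placeholder (the subnet containing your master and nodes) are illustrative, not part of the guide's configuration:

```shell
# Create a dedicated chain so the NAT rules are easy to audit and flush.
sudo iptables -t nat -N KUBE-OUTBOUND-NAT

# Leave cluster-internal destinations untouched: other containers...
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <CONTAINER_SUBNET> -j RETURN
# ...and the Kubernetes master/nodes (<HOST_SUBNET> is a hypothetical placeholder).
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -d <HOST_SUBNET> -j RETURN

# Masquerade everything else on its way out of the cluster.
sudo iptables -t nat -A KUBE-OUTBOUND-NAT -j MASQUERADE

# Send traffic sourced from container IPs through the new chain.
sudo iptables -t nat -A POSTROUTING -s <CONTAINER_SUBNET> -j KUBE-OUTBOUND-NAT
```

Scoping the `POSTROUTING` rule to `-s <CONTAINER_SUBNET>` keeps host-originated traffic out of the chain entirely, so only container traffic is ever masqueraded.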
In a data center environment, it is recommended to configure Calico to peer with the border routers over BGP. This means that the container IPs will be routable anywhere in the data center, and so NAT is not needed on the nodes (though it may be enabled at the data center edge to allow outbound-only internet connectivity).