## Cloud Native Deployments of Cassandra using Kubernetes
The following document describes the development of a _cloud native_ [Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. By _cloud native_ we mean an application that understands it is running within a cluster manager, and uses that cluster-management infrastructure to help implement itself. In particular, in this instance, a custom Cassandra ```SeedProvider``` is used to enable Cassandra to dynamically discover new Cassandra nodes as they join the cluster.
This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Replication Controllers_.
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides) for installation instructions for your platform.
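As a quick sanity check, you can confirm that ```kubectl``` is on your path and can reach your cluster (the exact output varies by release and platform):
```sh
$ kubectl version
```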
This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.
### Simple Single Pod Cassandra Node
In Kubernetes, the atomic unit of an application is a [_Pod_](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes. In this simple case, we define a single container running Cassandra for our pod:
```yaml
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: cassandra
    containers:
      - name: cassandra
        image: kubernetes/cassandra
        command:
          - /run.sh
        cpu: 1000
        ports:
          - name: cql
            containerPort: 9042
          - name: thrift
            containerPort: 9160
        env:
          - key: MAX_HEAP_SIZE
            value: 512M
          - key: HEAP_NEWSIZE
            value: 100M
labels:
  name: cassandra
```
There are a few things to note in this description. First is that we are running the ```kubernetes/cassandra``` image. This is a standard Cassandra installation on top of Debian. However, it also adds a custom [```SeedProvider```](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra uses to find other nodes. The ```KubernetesSeedProvider``` discovers the Kubernetes API server using the built-in Kubernetes discovery service, and then uses the Kubernetes API to find new nodes (more on this later).
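To make the mechanism concrete, here is a minimal sketch of what such a provider could look like. This is illustrative only, not the actual shipped class: the environment variable names and the shape of the ```v1beta1``` endpoints response are assumptions, the only external dependency is the ```SeedProvider``` interface from the Cassandra jar, and a real implementation would use a proper JSON parser:
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetAddress;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.cassandra.locator.SeedProvider;

public class KubernetesSeedProvider implements SeedProvider {

    // Cassandra instantiates seed providers reflectively, passing the
    // parameters block from cassandra.yaml, so this constructor shape is
    // required even though we ignore the arguments here.
    public KubernetesSeedProvider(Map<String, String> params) {
    }

    @Override
    public List<InetAddress> getSeeds() {
        List<InetAddress> seeds = new ArrayList<InetAddress>();
        try {
            // The API server's address is injected into every container as
            // service environment variables (variable names assumed here).
            String host = System.getenv("KUBERNETES_RO_SERVICE_HOST");
            String port = System.getenv("KUBERNETES_RO_SERVICE_PORT");

            // Ask the API server for the endpoints of the "cassandra"
            // service, i.e. the pods it currently selects.
            URL url = new URL("http://" + host + ":" + port
                    + "/api/v1beta1/endpoints/cassandra");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            in.close();

            // Crudely pull "ip:port" strings out of the JSON response; a
            // real implementation would use a JSON parser instead.
            Matcher m = Pattern
                    .compile("\"(\\d+\\.\\d+\\.\\d+\\.\\d+):\\d+\"")
                    .matcher(body);
            while (m.find()) {
                seeds.add(InetAddress.getByName(m.group(1)));
            }
        } catch (Exception e) {
            // On any failure we fall back to an empty seed list; production
            // code should log the error and fall back to configured seeds.
        }
        return seeds;
    }
}
```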
You may also note that we are setting some Cassandra parameters (```MAX_HEAP_SIZE``` and ```HEAP_NEWSIZE```). We also tell Kubernetes that the container exposes both the ```CQL``` and ```Thrift``` API ports. Finally, we tell the cluster manager that we need 1000 milli-cpus (1 core).
Given this configuration, we can create the pod as follows:
```sh
$ kubectl create -f cassandra.yaml
```
After a few moments, you should be able to see the pod running:
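The exact columns vary by release, but the status should eventually show ```Running```:
```sh
$ kubectl get pods cassandra
```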
### Adding a Cassandra Service
In Kubernetes a _Service_ describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API. This is the way that we initially use Services with Cassandra.
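The following is a sketch of such a service in the ```v1beta1``` API used throughout this example; the exact field set is a best-effort reconstruction, so compare it against the API version your cluster serves:
```yaml
id: cassandra
kind: Service
apiVersion: v1beta1
port: 9042
containerPort: 9042
selector:
  name: cassandra
labels:
  name: cassandra
```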
The important thing to note here is the ```selector```. It is a query over labels that identifies the set of _Pods_ contained by the _Service_. In this case the selector is ```name=cassandra```. If you look back at the Pod specification above, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
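Assuming the definition above is saved as ```cassandra-service.yaml``` (the file name is our choice), create the service and then list its endpoints:
```sh
$ kubectl create -f cassandra-service.yaml
$ kubectl get endpoints cassandra
```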
You can see that the _Service_ has found the pod we created earlier.
### Adding replicated nodes
Of course, a single node cluster isn't particularly interesting. The real power of Kubernetes and Cassandra lies in easily building a replicated, resizable Cassandra cluster.
In Kubernetes a _Replication Controller_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Cassandra Pod.
```yaml
id: cassandra
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 1
  replicaSelector:
    name: cassandra
  # This is identical to the pod config above
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: cassandra
        containers:
          - name: cassandra
            image: kubernetes/cassandra
            command:
              - /run.sh
            cpu: 1000
            ports:
              - name: cql
                containerPort: 9042
              - name: thrift
                containerPort: 9160
            env:
              - key: MAX_HEAP_SIZE
                value: 512M
              - key: HEAP_NEWSIZE
                value: 100M
    labels:
      name: cassandra
```
The bulk of the replication controller config is actually identical to the Cassandra pod declaration above; it simply gives the controller a recipe to use when creating new pods. The other parts are the ```replicaSelector```, which contains the controller's selector query, and the ```replicas``` parameter, which specifies the desired number of replicas, in this case 1.
Create this controller:
```sh
$ kubectl create -f cassandra-controller.yaml
```
By itself this isn't very interesting, since we haven't actually done anything new: the controller has simply adopted the existing pod. Things get more interesting when we resize the cluster to two replicas, as shown below.
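With a recent client the commands look like this; note that older ```kubectl``` releases called the resizing verb ```resize``` rather than ```scale```:
```sh
$ kubectl scale --replicas=2 replicationcontrollers cassandra
$ kubectl get pods
```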
Notice that one of the pods has the human-readable name ```cassandra``` that you specified in your config before, and one has a random string, since it was named by the replication controller.
To prove that this all works, you can use the ```nodetool``` command to examine the status of the cluster, for example:
```sh
$ ssh 1.2.3.4
$ docker exec <cassandra-container-id> nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load      Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.3.29  72.07 KB  256     100.0%            f736f0b5-bd1f-46f1-9b9d-7e8f22f37c9e  rack1
UN  10.244.1.10  41.14 KB  256     100.0%            42617acd-b16e-4ee3-9486-68a6743657b1  rack1
```