phase 2 of cassandra example overhaul

pull/6/head
Amy Unruh 2016-03-10 09:26:39 -08:00
parent d800dca7f8
commit 8846b313dc
9 changed files with 250 additions and 294 deletions


@ -1,3 +1,4 @@
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
<!-- BEGIN STRIP_FOR_RELEASE -->
@ -32,7 +33,19 @@ Documentation for other releases can be found at
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cloud Native Deployments of Cassandra using Kubernetes

## Table of Contents
- [Prerequisites](#prerequisites)
- [tl;dr Quickstart](#tldr-quickstart)
- [Step 1: Create a Cassandra Service](#step-1-create-a-cassandra-service)
- [Step 2: Use a Replication Controller to create Cassandra node pods](#step-2-use-a-replication-controller-to-create-cassandra-node-pods)
- [Step 3: Scale up the Cassandra cluster](#step-3-scale-up-the-cassandra-cluster)
- [Step 4: Delete the Replication Controller](#step-4-delete-the-replication-controller)
- [Step 5: Use a DaemonSet instead of a Replication Controller](#step-5-use-a-daemonset-instead-of-a-replication-controller)
- [Step 6: Resource Cleanup](#step-6-resource-cleanup)
- [Seed Provider Source](#seed-provider-source)
The following document describes the development of a _cloud native_
[Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say
@ -46,114 +59,63 @@ This example also uses some of the core components of Kubernetes:
- [_Pods_](../../docs/user-guide/pods.md)
- [_Services_](../../docs/user-guide/services.md)
- [_Replication Controllers_](../../docs/user-guide/replication-controller.md)
- [_Daemon Sets_](../../docs/admin/daemons.md)
## Prerequisites
This example assumes that you have a Kubernetes cluster (version 1.2 or later) installed and running,
and that you have installed the [`kubectl`](../../docs/user-guide/kubectl/kubectl.md)
command line tool somewhere in your path. Please see the
[getting started guides](../../docs/getting-started-guides/)
for installation instructions for your platform.
This example also requires a few code and configuration files. To avoid
typing these out, you can `git clone` the Kubernetes repository to your local
computer.
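For example, you can confirm the tools and fetch the files like this (the clone URL and target directory are just one way to do it):

```console
# confirm kubectl is on your PATH and can reach the cluster
$ kubectl version
$ kubectl cluster-info

# grab the example files by cloning the Kubernetes repository
$ git clone https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
```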
## tl;dr Quickstart

If you want to jump straight to the commands we will run,
here are the steps:
```sh
# create a service to track all cassandra nodes
kubectl create -f examples/cassandra/cassandra-service.yaml

# create a replication controller to replicate cassandra nodes
kubectl create -f examples/cassandra/cassandra-controller.yaml

# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-xxxxx -- nodetool status

# scale up the Cassandra cluster
kubectl scale rc cassandra --replicas=4

# delete the replication controller
kubectl delete rc cassandra

# then, create a daemonset to place a cassandra node on each kubernetes node
kubectl create -f examples/cassandra/cassandra-daemonset.yaml --validate=false

# resource cleanup
kubectl delete service -l app=cassandra
kubectl delete daemonset cassandra
```
## Step 1: Create a Cassandra Service

A Kubernetes _[Service](../../docs/user-guide/services.md)_ describes a set of
[_Pods_](../../docs/user-guide/pods.md) that perform the same task. In
Kubernetes, the atomic unit of an application is a Pod: one or more containers
that _must_ be scheduled onto the same host.

In theory, we could start creating Cassandra pods right now, but since the
`KubernetesSeedProvider` needs to learn what nodes are in the Cassandra
deployment, we create the service first.

An important use for a Service is to create a load balancer which
distributes traffic across members of the set of Pods. But a Service can also
be used as a standing query which makes a dynamically changing set of Pods
available via the Kubernetes API. We'll show that in this example.
Here is the service description:
@ -176,78 +138,38 @@ spec:
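A minimal sketch of a service of this shape, assuming the `app=cassandra` selector and the CQL port 9042 used throughout this example, looks like the following; the linked `cassandra-service.yaml` is the authoritative version:

```yaml
# Sketch only -- see cassandra-service.yaml in this directory for the real file.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  ports:
    - port: 9042
  selector:
    app: cassandra
```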
[Download example](cassandra-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-service.yaml -->
An important thing to note here is the `selector`. It is a query over labels
that identifies the set of Pods contained by this Service. In this case the
selector is `app=cassandra`. If there are any pods with that label, they will be
selected for membership in this service. We'll see that in action shortly.
Create the Cassandra service as follows:
```console
$ kubectl create -f examples/cassandra/cassandra-service.yaml
```
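You can confirm that the service exists; the exact columns and values below are illustrative and will vary with your cluster and `kubectl` version:

```console
$ kubectl get svc cassandra
NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   10.0.0.45    <none>        9042/TCP   10s
```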
## Step 2: Use a Replication Controller to create Cassandra node pods

As we noted above, in Kubernetes, the atomic unit of an application is a
[_Pod_](../../docs/user-guide/pods.md).
A Pod is one or more containers that _must_ be scheduled onto
the same host. All containers in a pod share a network namespace, and may
optionally share mounted volumes.

A Kubernetes
_[Replication Controller](../../docs/user-guide/replication-controller.md)_
is responsible for replicating sets of identical pods. Like a
Service, it has a selector query which identifies the members of its set.
Unlike a Service, it also has a desired number of replicas, and it will create
or delete Pods to ensure that the number of Pods matches up with its
desired state.

The Replication Controller, in conjunction with the Service we just defined,
will let us easily build a replicated, scalable Cassandra cluster.

Let's create a replication controller with two initial replicas.
<!-- BEGIN MUNGE: EXAMPLE cassandra-controller.yaml -->
@ -255,13 +177,17 @@ our existing Cassandra pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
  # The labels will be applied automatically
  # from the labels in the pod template, if not set
  # labels:
  #   app: cassandra
spec:
  replicas: 2
  # The selector will be applied automatically
  # from the labels in the pod template, if not set.
  # selector:
  #   app: cassandra
  template:
    metadata:
      labels:
@ -300,47 +226,104 @@ spec:
[Download example](cassandra-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->
There are a few things to note in this description.
The `selector` attribute contains the controller's selector query. It can be
explicitly specified, or applied automatically from the labels in the pod
template if not set, as is done here.
The pod template's label, `app: cassandra`, matches the Service selector
from Step 1. This is how pods created by this replication controller are picked up
by the Service.
The `replicas` attribute specifies the desired number of replicas, in this
case 2 initially. We'll scale up to more shortly.
The controller's pods use the [```gcr.io/google-samples/cassandra:v8```](image/Dockerfile)
image from Google's [container registry](https://cloud.google.com/container-registry/docs/).
This is a standard Cassandra installation on top of Debian. However, it also
adds a custom
[`SeedProvider`](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java) to Cassandra. In
Cassandra, a ```SeedProvider``` bootstraps the gossip protocol that Cassandra
uses to find other nodes.
The [`KubernetesSeedProvider`](java/src/io/k8s/cassandra/KubernetesSeedProvider.java)
discovers the Kubernetes API Server using the built-in Kubernetes
discovery service, and then uses the Kubernetes API to find new nodes.
See the [image](image/) directory of this example for specifics on
how the container image was built and what it contains.
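You can look at the same information the seed provider relies on by querying the service endpoints yourself. For example (the IP addresses will differ in your cluster):

```console
# list the pod IPs currently behind the cassandra service -- roughly the data
# that the KubernetesSeedProvider turns into Cassandra seed addresses
$ kubectl get endpoints cassandra -o jsonpath='{.subsets[*].addresses[*].ip}'
10.244.1.5 10.244.2.3
```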
You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE`
and `HEAP_NEWSIZE`), and adding information about the
[namespace](../../docs/user-guide/namespaces.md).
We also tell Kubernetes that the container exposes
both the `CQL` and `Thrift` API ports. Finally, we tell the cluster
manager that we need 0.1 cpu (0.1 core).
Create the Replication Controller:
```console
$ kubectl create -f examples/cassandra/cassandra-controller.yaml
```
You can list the new controller:

```console
$ kubectl get rc -o wide
NAME        DESIRED   CURRENT   AGE       CONTAINER(S)   IMAGE(S)                             SELECTOR
cassandra   2         2         11s       cassandra      gcr.io/google-samples/cassandra:v8   app=cassandra
```
Now if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see two Cassandra pods. (The `wide` argument lets
you see which Kubernetes nodes the pods were scheduled onto.)
```console
$ kubectl get pods -l="app=cassandra"
NAME READY STATUS RESTARTS AGE
cassandra 1/1 Running 0 3m
cassandra-af6h5 1/1 Running 0 28s
$ kubectl get pods -l="app=cassandra" -o wide
NAME READY STATUS RESTARTS AGE NODE
cassandra-21qyy 1/1 Running 0 1m kubernetes-minion-b286
cassandra-q6sz7 1/1 Running 0 1m kubernetes-minion-9ye5
```
Because these pods have the label `app=cassandra`, they map to the service we
defined in Step 1.

You can check that the Pods are visible to the Service using the following
service endpoints query:

```console
$ kubectl get endpoints cassandra -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: 2015-06-21T22:34:12Z
  labels:
    app: cassandra
  name: cassandra
  namespace: default
  resourceVersion: "944373"
  selfLink: /api/v1/namespaces/default/endpoints/cassandra
  uid: a3d6c25f-1865-11e5-a34e-42010af01bcc
subsets:
- addresses:
  - ip: 10.244.3.15
    targetRef:
      kind: Pod
      name: cassandra
      namespace: default
      resourceVersion: "944372"
      uid: 9ef9895d-1865-11e5-a34e-42010af01bcc
  ports:
  - port: 9042
    protocol: TCP
```
To show that the `SeedProvider` logic is working as intended, you can use the
`nodetool` command to examine the status of the Cassandra cluster. To do this,
use the `kubectl exec` command, which lets you run `nodetool` in one of your
Cassandra pods. Again, substitute `cassandra-xxxxx` with the actual name of one
of your pods.
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.0.5   74.09 KB   256     100.0%            86feda0f-f070-4a5b-bda1-2eeb0ad08b77  rack1
UN  10.244.3.3   51.28 KB   256     100.0%            dafe3154-1d67-42e1-ac1d-78e7e80dce2b  rack1
```
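If you'd like to go one step further, you can also open a CQL session in one of the pods with `cqlsh`, which ships with standard Cassandra installations (again, substitute a real pod name; this is optional and not part of the example's files):

```console
$ kubectl exec -ti cassandra-xxxxx -- cqlsh
cqlsh> DESCRIBE KEYSPACES;
```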
## Step 3: Scale up the Cassandra cluster

Now let's scale our Cassandra cluster to 4 pods. We do this by telling the
Replication Controller that we now want 4 replicas.
```sh
$ kubectl scale rc cassandra --replicas=4
```
You can see the new pods listed:

```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME              READY     STATUS    RESTARTS   AGE       NODE
cassandra-21qyy   1/1       Running   0          6m        kubernetes-minion-b286
cassandra-81m2l   1/1       Running   0          47s       kubernetes-minion-b286
cassandra-8qoyp   1/1       Running   0          47s       kubernetes-minion-9ye5
cassandra-q6sz7   1/1       Running   0          6m        kubernetes-minion-9ye5
```
In a few moments, you can examine the Cassandra cluster status again, and see
that the new pods have been detected by the custom `SeedProvider`:
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.0.6   51.67 KB   256     48.9%             d07b23a5-56a1-4b0b-952d-68ab95869163  rack1
UN  10.244.1.5   84.71 KB   256     50.7%             e060df1f-faa2-470c-923d-ca049b0f3f38  rack1
UN  10.244.1.6   84.71 KB   256     47.0%             83ca1580-4f3c-4ec5-9b38-75036b7a297f  rack1
UN  10.244.0.5   68.2 KB    256     53.4%             72ca27e2-c72c-402a-9313-1e4b61c2f839  rack1
```
## Step 4: Delete the Replication Controller
Before you start Step 5, __delete the replication controller__ you created above:
```sh
$ kubectl delete rc cassandra
```
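You can verify that the controller and the pods it was managing are gone before moving on:

```console
# both of these should come back empty / "not found" once deletion completes
$ kubectl get rc cassandra
$ kubectl get pods -l="app=cassandra"
```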
## Step 5: Use a DaemonSet instead of a Replication Controller

In Kubernetes, a [_Daemon Set_](../../docs/admin/daemons.md) can distribute pods
onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a
selector query which identifies the members of its set. Unlike a
_ReplicationController_, it has a node selector to limit which nodes are
@ -393,7 +393,7 @@ case that an instance dies, the data stored on the instance does not, and the
cluster can react by re-replicating the data to other running nodes.
`DaemonSet` is designed to place a single pod on each node in the Kubernetes
cluster. That will give us data redundancy. Let's create a
daemonset to start our storage cluster:
<!-- BEGIN MUNGE: EXAMPLE cassandra-daemonset.yaml -->
@ -447,12 +447,13 @@ spec:
[Download example](cassandra-daemonset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->
Most of this DaemonSet definition is identical to the ReplicationController
definition above; it simply gives the DaemonSet a recipe to use when it creates
new Cassandra pods, and targets all Cassandra nodes in the cluster.

Differentiating aspects are the `nodeSelector` attribute, which allows the
DaemonSet to target a specific subset of nodes (you can label nodes just like
other resources), and the lack of a `replicas` attribute due to the 1-to-1 node-
pod relationship.
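For instance, if you only wanted Cassandra pods on a labeled subset of nodes, you could label those nodes and then put a matching `nodeSelector` in the DaemonSet's pod template. The label key and value below are made up for illustration (the node name is one from the listings above):

```console
# label a node; a DaemonSet whose pod template sets
#   nodeSelector: { cassandra-storage: ssd }
# would then only place pods on nodes carrying this label
$ kubectl label nodes kubernetes-minion-b286 cassandra-storage=ssd
```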
Create this daemonset:
@ -467,24 +468,32 @@ You may need to disable config file validation, like so:
$ kubectl create -f examples/cassandra/cassandra-daemonset.yaml --validate=false
```
You can see the daemonset running:

```console
$ kubectl get daemonset
NAME        DESIRED   CURRENT   NODE-SELECTOR
cassandra   3         3         <none>
```
Now, if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see one (and only one) new Cassandra pod for each
node in your network.

```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME              READY     STATUS    RESTARTS   AGE       NODE
cassandra-ico4r   1/1       Running   0          4s        kubernetes-minion-rpo1
cassandra-kitfh   1/1       Running   0          1s        kubernetes-minion-9ye5
cassandra-tzw89   1/1       Running   0          2s        kubernetes-minion-b286
```
To prove that this all worked as intended, you can again use the `nodetool`
command to examine the status of the cluster. To do this, use the `kubectl
exec` command to run `nodetool` in one of your newly-launched cassandra pods.
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.244.4.2   32.45 KB   256     100.0%            0b1be71a-6ffb-4895-ac3e-b97           rack1
UN  10.244.3.3   51.28 KB   256     100.0%            dafe3154-1d67-42e1-ac1d-78e7e80dce2b  rack1
```
**Note**: This example had you delete the Cassandra Replication Controller before
you created the DaemonSet. This is because, to keep this example simple, the
RC and the DaemonSet use the same `app=cassandra` label (so that their pods map to the
service we created, and so that the SeedProvider can identify them).
If we didn't delete the RC first, the two resources would conflict with
respect to how many pods they wanted to have running. If we wanted, we could support running
both together by using additional labels and selectors.
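A hypothetical sketch of that approach (not something this example ships): give the Replication Controller's pods an extra distinguishing label and include it in the RC's selector, while a DaemonSet would use a different value and the Service would keep selecting only `app=cassandra`:

```yaml
# Hypothetical sketch: the extra "mode" label keeps this RC from adopting
# pods created by a DaemonSet (whose pods would carry mode: daemon instead).
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra-rc
spec:
  replicas: 2
  selector:
    app: cassandra
    mode: rc
  template:
    metadata:
      labels:
        app: cassandra
        mode: rc
    spec:
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v8
        args:
        - /run.sh
        ports:
        - containerPort: 9042
          name: cql
```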
## Step 6: Resource Cleanup

When you are ready to take down your resources, do the following:
```console
$ kubectl delete service -l app=cassandra
$ kubectl delete daemonset cassandra
```
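If you want to double-check, the following should show no remaining Cassandra resources once the cleanup has finished:

```console
$ kubectl get daemonset
$ kubectl get pods,svc -l app=cassandra
```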
## Seed Provider Source

The Seed Provider source is
[here](java/src/io/k8s/cassandra/KubernetesSeedProvider.java).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@ -1,13 +1,17 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: cassandra
  # The labels will be applied automatically
  # from the labels in the pod template, if not set
  # labels:
  #   app: cassandra
spec:
  replicas: 2
  # The selector will be applied automatically
  # from the labels in the pod template, if not set.
  # selector:
  #   app: cassandra
  template:
    metadata:
      labels:


@ -1,35 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  containers:
  - args:
    - /run.sh
    resources:
      limits:
        cpu: "0.1"
    image: gcr.io/google-samples/cassandra:v8
    name: cassandra
    ports:
    - name: cql
      containerPort: 9042
    - name: thrift
      containerPort: 9160
    volumeMounts:
    - name: data
      mountPath: /cassandra_data
    env:
    - name: MAX_HEAP_SIZE
      value: 512M
    - name: HEAP_NEWSIZE
      value: 100M
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  volumes:
  - name: data
    emptyDir: {}


@ -262,7 +262,6 @@ func TestExampleObjectSchemas(t *testing.T) {
"cassandra-daemonset": &extensions.DaemonSet{},
"cassandra-controller": &api.ReplicationController{},
"cassandra-service": &api.Service{},
"cassandra": &api.Pod{},
},
"../examples/celery-rabbitmq": {
"celery-controller": &api.ReplicationController{},


@ -1674,9 +1674,9 @@ __EOF__
#####################
kube::log::status "Testing resource aliasing"
kubectl create -f examples/cassandra/cassandra-controller.yaml "${kube_flags[@]}"
kubectl create -f examples/cassandra/cassandra-service.yaml "${kube_flags[@]}"
kube::test::get_object_assert "all -l'app=cassandra'" "{{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}" 'cassandra:cassandra:cassandra:cassandra:'
kubectl delete all -l app=cassandra "${kube_flags[@]}"


@ -477,7 +477,7 @@ func TestAnnotateObjectFromFile(t *testing.T) {
switch req.Method {
case "GET":
switch req.URL.Path {
case "/namespaces/test/pods/cassandra":
case "/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: objBody(codec, &pods.Items[0])}, nil
default:
t.Fatalf("unexpected request: %#v\n%#v", req.URL, req)
@ -485,7 +485,7 @@ func TestAnnotateObjectFromFile(t *testing.T) {
}
case "PATCH":
switch req.URL.Path {
case "/namespaces/test/pods/cassandra":
case "/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: objBody(codec, &pods.Items[0])}, nil
default:
t.Fatalf("unexpected request: %#v\n%#v", req.URL, req)
@ -504,7 +504,7 @@ func TestAnnotateObjectFromFile(t *testing.T) {
cmd := NewCmdAnnotate(f, buf)
cmd.SetOutput(buf)
options := &AnnotateOptions{}
options.filenames = []string{"../../../examples/cassandra/cassandra-controller.yaml"}
args := []string{"a=b", "c-"}
if err := options.Complete(f, buf, cmd, args); err != nil {
t.Fatalf("unexpected error: %v", err)


@ -337,7 +337,7 @@ func TestGetObjectsIdentifiedByFile(t *testing.T) {
cmd := NewCmdGet(f, buf)
cmd.SetOutput(buf)
cmd.Flags().Set("filename", "../../../examples/cassandra/cassandra.yaml")
cmd.Flags().Set("filename", "../../../examples/cassandra/cassandra-controller.yaml")
cmd.Run(cmd, []string{})
expected := []runtime.Object{&pods.Items[0]}
@ -789,9 +789,9 @@ func TestWatchResourceIdentifiedByFile(t *testing.T) {
Codec: codec,
Client: fake.CreateHTTPClient(func(req *http.Request) (*http.Response, error) {
switch req.URL.Path {
case "/namespaces/test/pods/cassandra":
case "/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: objBody(codec, &pods[0])}, nil
case "/watch/namespaces/test/pods/cassandra":
case "/watch/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: watchBody(codec, events)}, nil
default:
t.Fatalf("unexpected request: %#v\n%#v", req.URL, req)
@ -805,7 +805,7 @@ func TestWatchResourceIdentifiedByFile(t *testing.T) {
cmd.SetOutput(buf)
cmd.Flags().Set("watch", "true")
cmd.Flags().Set("filename", "../../../examples/cassandra/cassandra.yaml")
cmd.Flags().Set("filename", "../../../examples/cassandra/cassandra-controller.yaml")
cmd.Run(cmd, []string{})
expected := []runtime.Object{&pods[0], events[0].Object, events[1].Object}


@ -333,7 +333,7 @@ func TestLabelForResourceFromFile(t *testing.T) {
switch req.Method {
case "GET":
switch req.URL.Path {
case "/namespaces/test/pods/cassandra":
case "/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: objBody(codec, &pods.Items[0])}, nil
default:
t.Fatalf("unexpected request: %#v\n%#v", req.URL, req)
@ -341,7 +341,7 @@ func TestLabelForResourceFromFile(t *testing.T) {
}
case "PATCH":
switch req.URL.Path {
case "/namespaces/test/pods/cassandra":
case "/namespaces/test/replicationcontrollers/cassandra":
return &http.Response{StatusCode: 200, Body: objBody(codec, &pods.Items[0])}, nil
default:
t.Fatalf("unexpected request: %#v\n%#v", req.URL, req)
@ -359,7 +359,7 @@ func TestLabelForResourceFromFile(t *testing.T) {
buf := bytes.NewBuffer([]byte{})
cmd := NewCmdLabel(f, buf)
options := &LabelOptions{
Filenames: []string{"../../../examples/cassandra/cassandra-controller.yaml"},
}
err := RunLabel(f, buf, cmd, []string{"a=b"}, options)


@ -208,24 +208,14 @@ var _ = framework.KubeDescribe("[Feature:Example]", func() {
return filepath.Join(framework.TestContext.RepoRoot, "examples", "cassandra", file)
}
serviceYaml := mkpath("cassandra-service.yaml")
podYaml := mkpath("cassandra.yaml")
controllerYaml := mkpath("cassandra-controller.yaml")
nsFlag := fmt.Sprintf("--namespace=%v", ns)
By("Starting the cassandra service and pod")
By("Starting the cassandra service")
framework.RunKubectlOrDie("create", "-f", serviceYaml, nsFlag)
framework.RunKubectlOrDie("create", "-f", podYaml, nsFlag)
framework.Logf("waiting for first cassandra pod")
err := framework.WaitForPodRunningInNamespace(c, "cassandra", ns)
Expect(err).NotTo(HaveOccurred())
framework.Logf("waiting for thrift listener online")
_, err = framework.LookForStringInLog(ns, "cassandra", "cassandra", "Listening for thrift clients", serverStartTimeout)
Expect(err).NotTo(HaveOccurred())
framework.Logf("wait for service")
err = framework.WaitForEndpoint(c, ns, "cassandra")
err := framework.WaitForEndpoint(c, ns, "cassandra")
Expect(err).NotTo(HaveOccurred())
// Create an RC with n nodes in it. Each node will then be verified.