Merge pull request #46362 from sebgoa/examplesmove

Automatic merge from submit-queue

Redirect all files in /examples folder to kubernetes/examples repo

**What this PR does / why we need it**:

Examples are being moved to their own repository: https://github.com/kubernetes/examples

We need to remove them from the main repo, but first we need to keep a redirect in place.

This is a *big* organizational change, but nothing changes technically (aside from the e2e tests).

**Which issue this PR fixes** 

fixes part of #24343 

**Special notes for your reviewer**:

WIP, I still need to figure out what to do with the BUILD script and tests, plus take care of the e2e tests that use some of these examples.

**release notes**
```release-note
Redirect all examples READMEs to the kubernetes/examples repo
```
Kubernetes Submit Queue 2017-07-14 09:03:25 -07:00 committed by GitHub
commit cb712e41d4
59 changed files with 59 additions and 10682 deletions


@@ -1,29 +1 @@
# Kubernetes Examples: releases.k8s.io/HEAD
This directory contains a number of examples of how to run
real applications with Kubernetes.
Demonstrations of how to use specific Kubernetes features can be found in our [documents](https://kubernetes.io/docs/).
### Maintained Examples
Maintained Examples are expected to be updated with every Kubernetes
release, to use the latest and greatest features, current guidelines
and best practices, and to refresh command syntax, output, and changed
prerequisites as needed.
| Name | Description | Notable Features Used | Complexity Level |
| ------------- | ------------- | ------------ | ------------ |
| [Guestbook](guestbook/) | PHP app with Redis | Replication Controller, Service | Beginner |
| [WordPress](mysql-wordpress-pd/) | WordPress with MySQL | Deployment, Persistent Volume with Claim | Beginner |
| [Cassandra](storage/cassandra/) | Cloud Native Cassandra | Daemon Set | Intermediate |
* Note: Please add examples that are maintained to the list above.
See [Example Guidelines](guidelines.md) for a description of what goes
in this directory, and what examples should contain.
This file has moved to [https://github.com/kubernetes/examples/blob/master/README.md](https://github.com/kubernetes/examples/blob/master/README.md)


@@ -1,182 +1 @@
## Kubernetes DNS example
This is a toy example demonstrating how to use Kubernetes DNS.
### Step Zero: Prerequisites
This example assumes that you have forked the repository and [turned up a Kubernetes cluster](https://kubernetes.io/docs/getting-started-guides/). Make sure DNS is enabled in your setup, see [DNS doc](https://github.com/kubernetes/dns).
```sh
$ cd kubernetes
$ hack/dev-build-and-up.sh
```
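One quick way to confirm that cluster DNS is running (assuming the standard kube-dns add-on and its usual `k8s-app=kube-dns` label) is to look for its pods in the `kube-system` namespace:
```sh
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```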
### Step One: Create two namespaces
We'll see how cluster DNS works across multiple [namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/). First, we need to create two namespaces:
```sh
$ kubectl create -f examples/cluster-dns/namespace-dev.yaml
$ kubectl create -f examples/cluster-dns/namespace-prod.yaml
```
Now list all namespaces:
```sh
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
development name=development Active
production name=production Active
```
For the kubectl client to work with each namespace, we define two contexts:
```sh
$ kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
$ kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}
```
You can view your cluster name and user name in the Kubernetes config at `~/.kube/config`.
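For example, assuming a single cluster and user entry in that file, you can read the values directly with a JSONPath query:
```sh
$ CLUSTER_NAME=$(kubectl config view -o jsonpath='{.clusters[0].name}')
$ USER_NAME=$(kubectl config view -o jsonpath='{.users[0].name}')
```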
### Step Two: Create backend replication controller in each namespace
Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/) in each namespace.
```sh
$ kubectl config use-context dev
$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
```
Once that's up, you can list the replication controller in the cluster:
```sh
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
```
Now repeat the above commands to create a replication controller in the prod namespace:
```sh
$ kubectl config use-context prod
$ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
```
### Step Three: Create backend service
Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-service.yaml) to create
a [service](https://kubernetes.io/docs/concepts/services-networking/service/) for the backend server.
```sh
$ kubectl config use-context dev
$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
```
Once that's up you can list the service in the cluster:
```sh
$ kubectl get service dns-backend
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
dns-backend 10.0.2.3 <none> 8000/TCP name=dns-backend 1d
```
Again, repeat the same process for the prod namespace:
```sh
$ kubectl config use-context prod
$ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
$ kubectl get service dns-backend
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
dns-backend 10.0.2.4 <none> 8000/TCP name=dns-backend 1d
```
### Step Four: Create client pod in one namespace
Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) in the dev namespace. The client pod makes a connection to the backend and exits. Specifically, it tries to connect to the address `http://dns-backend.development.cluster.local:8000`.
```sh
$ kubectl config use-context dev
$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
```
Once that's up you can list the pod in the cluster:
```sh
$ kubectl get pods dns-frontend
NAME READY STATUS RESTARTS AGE
dns-frontend 0/1 ExitCode:0 0 1m
```
Wait until the pod succeeds, then we can see the output from the client pod:
```sh
$ kubectl logs dns-frontend
2015-05-07T20:13:54.147664936Z 10.0.236.129
2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.cluster.local:8000
2015-05-07T20:13:54.147733438Z <Response [200]>
2015-05-07T20:13:54.147738295Z Hello World!
```
Please refer to the [source code](images/frontend/client.py) for details on this log output. The first line prints the IP address associated with the service in the dev namespace; the remaining lines print our request and the server's response.
If we switch to the prod namespace with the same pod config, we'll see the same result, i.e. DNS resolves across namespaces.
```sh
$ kubectl config use-context prod
$ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
$ kubectl logs dns-frontend
2015-05-07T20:13:54.147664936Z 10.0.236.129
2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.cluster.local:8000
2015-05-07T20:13:54.147733438Z <Response [200]>
2015-05-07T20:13:54.147738295Z Hello World!
```
#### Note about default namespace
If you prefer not to use namespaces, all your services can be addressed using the `default` namespace, e.g. `http://dns-backend.default.svc.cluster.local:8000`, or the shorthand version `http://dns-backend:8000`.
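As an illustration, if `dns-backend` had been created in the `default` namespace, you could check both forms from a throwaway pod; the curl-capable test image below is only an assumption:
```sh
$ kubectl run -it --rm curl-test --image=radial/busyboxplus:curl --restart=Never -- \
    curl http://dns-backend.default.svc.cluster.local:8000
$ kubectl run -it --rm curl-test --image=radial/busyboxplus:curl --restart=Never -- \
    curl http://dns-backend:8000
```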
### tl; dr;
For those of you who are impatient, here is a summary of the commands we ran in this tutorial. Remember to first set `$CLUSTER_NAME` and `$USER_NAME` to the values found in `~/.kube/config`.
```sh
# create dev and prod namespaces
kubectl create -f examples/cluster-dns/namespace-dev.yaml
kubectl create -f examples/cluster-dns/namespace-prod.yaml
# create two contexts
kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}
# create two backend replication controllers
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
# create backend services
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-backend-service.yaml
kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-backend-service.yaml
# create a pod in each namespace and get its output
kubectl config use-context dev
kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
kubectl logs dns-frontend
kubectl config use-context prod
kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
kubectl logs dns-frontend
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/cluster-dns/README.md](https://github.com/kubernetes/examples/blob/master/staging/cluster-dns/README.md)


@@ -1,125 +1 @@
# CockroachDB on Kubernetes as a StatefulSet
This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
a StatefulSet. CockroachDB is a distributed, scalable NewSQL database. Please see
[the homepage](https://cockroachlabs.com) and the
[documentation](https://www.cockroachlabs.com/docs/) for details.
## Limitations
### StatefulSet limitations
Standard StatefulSet limitations apply: There is currently no possibility to use
node-local storage (outside of single-node tests), and so there is likely
a performance hit associated with running CockroachDB on some external storage.
Note that CockroachDB already does replication and thus it is unnecessary to
deploy it onto persistent volumes which already replicate internally.
For this reason, high-performance use cases on a private Kubernetes cluster
may want to consider a DaemonSet deployment until StatefulSets support node-local
storage (see #7562).
### Recovery after persistent storage failure
A persistent storage failure (e.g. losing the hard drive) is gracefully handled
by CockroachDB as long as enough replicas survive (two out of three by
default). Due to the bootstrapping in this deployment, a storage failure of the
first node is special in that the administrator must manually prepopulate the
"new" storage medium by running an instance of CockroachDB with the `--join`
parameter. If this is not done, the first node will bootstrap a new cluster,
which will lead to a lot of trouble.
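A rough sketch of that manual step, assuming this example's pod names (`cockroachdb-0`, `cockroachdb-1`, ...), a headless service named `cockroachdb`, and the default store path, might look like this; treat it as an illustration rather than an exact recipe:
```shell
# Run one instance of CockroachDB against the replacement storage for the first node,
# joining the surviving nodes instead of bootstrapping a brand-new cluster.
cockroach start --insecure \
  --store=/cockroach/cockroach-data \
  --join=cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb
```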
### Dynamic volume provisioning
The deployment is written for a use case in which dynamic volume provisioning is
available. When that is not the case, the persistent volume claims need
to be created manually. See [minikube.sh](minikube.sh) for the necessary
steps. If you're on GCE or AWS, where dynamic provisioning is supported, no
manual work is needed to create the persistent volumes.
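For reference, manually creating one such claim might look roughly like the following; the claim name and size here are assumptions, and [minikube.sh](minikube.sh) remains the authoritative source:
```shell
kubectl create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-cockroachdb-0   # assumed to match the StatefulSet's volume claim template
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
```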
## Testing locally on minikube
Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).
## Testing in the cloud on GCE or AWS
Once you have a Kubernetes cluster running, just run
`kubectl create -f cockroachdb-statefulset.yaml` to create your cockroachdb cluster.
This works because GCE and AWS support dynamic volume provisioning by default,
so persistent volumes will be created for the CockroachDB pods as needed.
## Accessing the database
Along with our StatefulSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP for clients to access the database
with. In our example, we've called this service `cockroachdb-public`.
Start up a client pod and open up an interactive, (mostly) Postgres-flavor
SQL shell using:
```console
$ kubectl run -it --rm cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- ./cockroach sql --host cockroachdb-public --insecure
```
You can see example SQL statements for inserting and querying data in the
included [demo script](demo.sh), but can use almost any Postgres-style SQL
commands. Some more basic examples can be found within
[CockroachDB's documentation](https://www.cockroachlabs.com/docs/learn-cockroachdb-sql.html).
## Accessing the admin UI
If you want to see information about how the cluster is doing, you can try
pulling up the CockroachDB admin UI by port-forwarding from your local machine
to one of the pods:
```shell
kubectl port-forward cockroachdb-0 8080
```
Once you've done that, you should be able to access the admin UI by visiting
http://localhost:8080/ in your web browser.
## Simulating failures
When all (or enough) nodes are up, simulate a failure like this:
```shell
kubectl exec cockroachdb-0 -- /bin/bash -c "while true; do kill 1; done"
```
You can then reconnect to the database as demonstrated above and verify
that no data was lost. The example runs with three-fold replication, so
it can tolerate one failure of any given node at a time. Note also that
there is a brief period of time immediately after the creation of the
cluster during which the three-fold replication is established, and during
which killing a node may lead to unavailability.
The [demo script](demo.sh) gives an example of killing one instance of the
database and ensuring the other replicas have all data that was written.
## Scaling up or down
Scale the StatefulSet by running
```shell
kubectl scale statefulset cockroachdb --replicas=4
```
Note that you may need to create a new persistent volume claim first. If you
ran `minikube.sh`, there's a spare volume so you can immediately scale up by
one. If you're running on GCE or AWS, you can scale up by as many as you want
because new volumes will automatically be created for you. Convince yourself
that the new node immediately serves reads and writes.
## Cleaning up when you're done
Because all of the resources in this example have been tagged with the label `app=cockroachdb`,
we can clean up everything that we created in one quick command using a selector on that label:
```shell
kubectl delete statefulsets,persistentvolumes,persistentvolumeclaims,services,poddisruptionbudget -l app=cockroachdb
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/cockroachdb/README.md](https://github.com/kubernetes/examples/blob/master/staging/cockroachdb/README.md)


@@ -1,163 +1 @@
# Elasticsearch for Kubernetes
Kubernetes makes it trivial for anyone to build and scale [Elasticsearch](http://www.elasticsearch.org/) clusters. Here, you'll find out how to do so.
The current Elasticsearch version is `1.7.1`.
[A more robust example that follows Elasticsearch best practices of separating node concerns is also available](production_cluster/README.md).
<img src="http://kubernetes.io/kubernetes/img/warning.png" alt="WARNING" width="25" height="25"> Current pod descriptors use an `emptyDir` for storing data in each data node container. This is done for the sake of simplicity and [should be adapted according to your storage needs](https://kubernetes.io/docs/design/persistent-storage.md).
## Docker image
The [pre-built image](https://github.com/pires/docker-elasticsearch-kubernetes) used in this example will not be supported. Feel free to fork to fit your own needs, but keep in mind that you will need to change Kubernetes descriptors accordingly.
## Deploy
Let's kickstart our cluster with 1 instance of Elasticsearch.
```
kubectl create -f examples/elasticsearch/service-account.yaml
kubectl create -f examples/elasticsearch/es-svc.yaml
kubectl create -f examples/elasticsearch/es-rc.yaml
```
Let's see if it worked:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-kfymw 1/1 Running 0 7m
kube-dns-p3v1u 3/3 Running 0 19m
```
```
$ kubectl logs es-kfymw
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] version[1.7.1], pid[7], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] initializing ...
[2015-08-30 10:01:32,110][INFO ][plugins ] [Hammerhead] loaded [cloud-kubernetes], sites []
[2015-08-30 10:01:32,153][INFO ][env ] [Hammerhead] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-30 10:01:37,188][INFO ][node ] [Hammerhead] initialized
[2015-08-30 10:01:37,189][INFO ][node ] [Hammerhead] starting ...
[2015-08-30 10:01:37,499][INFO ][transport ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.48.2:9300]}
[2015-08-30 10:01:37,550][INFO ][discovery ] [Hammerhead] myesdb/n2-6uu_UT3W5XNrjyqBPiA
[2015-08-30 10:01:43,966][INFO ][cluster.service ] [Hammerhead] new_master [Hammerhead][n2-6uu_UT3W5XNrjyqBPiA][es-kfymw][inet[/10.244.48.2:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-30 10:01:44,010][INFO ][http ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.244.48.2:9200]}
[2015-08-30 10:01:44,011][INFO ][node ] [Hammerhead] started
[2015-08-30 10:01:44,042][INFO ][gateway ] [Hammerhead] recovered [0] indices into cluster_state
```
So we have a 1-node Elasticsearch cluster ready to handle some work.
## Scale
Scaling is as easy as:
```
kubectl scale --replicas=3 rc es
```
Did it work?
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-78e0s 1/1 Running 0 8m
es-kfymw 1/1 Running 0 17m
es-rjmer 1/1 Running 0 8m
kube-dns-p3v1u 3/3 Running 0 30m
```
Let's take a look at logs:
```
$ kubectl logs es-kfymw
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] version[1.7.1], pid[7], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-30 10:01:31,946][INFO ][node ] [Hammerhead] initializing ...
[2015-08-30 10:01:32,110][INFO ][plugins ] [Hammerhead] loaded [cloud-kubernetes], sites []
[2015-08-30 10:01:32,153][INFO ][env ] [Hammerhead] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-30 10:01:37,188][INFO ][node ] [Hammerhead] initialized
[2015-08-30 10:01:37,189][INFO ][node ] [Hammerhead] starting ...
[2015-08-30 10:01:37,499][INFO ][transport ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.48.2:9300]}
[2015-08-30 10:01:37,550][INFO ][discovery ] [Hammerhead] myesdb/n2-6uu_UT3W5XNrjyqBPiA
[2015-08-30 10:01:43,966][INFO ][cluster.service ] [Hammerhead] new_master [Hammerhead][n2-6uu_UT3W5XNrjyqBPiA][es-kfymw][inet[/10.244.48.2:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-30 10:01:44,010][INFO ][http ] [Hammerhead] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.244.48.2:9200]}
[2015-08-30 10:01:44,011][INFO ][node ] [Hammerhead] started
[2015-08-30 10:01:44,042][INFO ][gateway ] [Hammerhead] recovered [0] indices into cluster_state
[2015-08-30 10:08:02,517][INFO ][cluster.service ] [Hammerhead] added {[Tenpin][2gv5MiwhRiOSsrTOF3DhuA][es-78e0s][inet[/10.244.54.4:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Tenpin][2gv5MiwhRiOSsrTOF3DhuA][es-78e0s][inet[/10.244.54.4:9300]]{master=true}])
[2015-08-30 10:10:10,645][INFO ][cluster.service ] [Hammerhead] added {[Evilhawk][ziTq2PzYRJys43rNL2tbyg][es-rjmer][inet[/10.244.33.3:9300]]{master=true},}, reason: zen-disco-receive(join from node[[Evilhawk][ziTq2PzYRJys43rNL2tbyg][es-rjmer][inet[/10.244.33.3:9300]]{master=true}])
```
So we have a 3-node Elasticsearch cluster ready to handle more work.
## Access the service
*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior, you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example's service descriptor, its usage is out of the scope of this document, for now.
```
$ kubectl get service elasticsearch
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch component=elasticsearch component=elasticsearch 10.100.108.94 9200/TCP
9300/TCP
```
From any host on your cluster (that's running `kube-proxy`), run:
```
$ curl 10.100.108.94:9200
```
You should see something similar to the following:
```json
{
"status" : 200,
"name" : "Hammerhead",
"cluster_name" : "myesdb",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
```
Or if you want to check cluster information:
```
curl 10.100.108.94:9200/_cluster/health?pretty
```
You should see something similar to the following:
```json
{
"cluster_name" : "myesdb",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 3,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/elasticsearch/README.md](https://github.com/kubernetes/examples/blob/master/staging/elasticsearch/README.md)


@@ -1,189 +1 @@
# Elasticsearch for Kubernetes
Kubernetes makes it trivial for anyone to build and scale [Elasticsearch](http://www.elasticsearch.org/) clusters. Here, you'll find out how to do so.
The current Elasticsearch version is `1.7.1`.
Before we start, one needs to know that Elasticsearch best practices recommend separating nodes into three roles:
* `Master` nodes - intended for clustering management only, no data, no HTTP API
* `Client` nodes - intended for client usage, no data, with HTTP API
* `Data` nodes - intended for storing and indexing your data, no HTTP API
This is enforced throughout this document.
<img src="http://kubernetes.io/kubernetes/img/warning.png" alt="WARNING" width="25" height="25"> Current pod descriptors use an `emptyDir` for storing data in each data node container. This is done for the sake of simplicity and [should be adapted according to your storage needs](https://kubernetes.io/docs/design/persistent-storage.md).
## Docker image
This example uses [this pre-built image](https://github.com/pires/docker-elasticsearch-kubernetes). Feel free to fork and update it to fit your own needs, but keep in mind that you will need to change Kubernetes descriptors accordingly.
## Deploy
```
kubectl create -f examples/elasticsearch/production_cluster/service-account.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-discovery-svc.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-svc.yaml
kubectl create -f examples/elasticsearch/production_cluster/es-master-rc.yaml
```
Wait until `es-master` is provisioned, and
```
kubectl create -f examples/elasticsearch/production_cluster/es-client-rc.yaml
```
Wait until `es-client` is provisioned, and
```
kubectl create -f examples/elasticsearch/production_cluster/es-data-rc.yaml
```
Wait until `es-data` is provisioned.
Now, I leave it up to you how to validate the cluster, but as a first step, wait for the containers to be in the `Running` state and check the Elasticsearch master logs:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-client-2ep9o 1/1 Running 0 2m
es-data-r9tgv 1/1 Running 0 1m
es-master-vxl6c 1/1 Running 0 6m
```
```
$ kubectl logs es-master-vxl6c
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-21 10:58:51,324][INFO ][node ] [Arc] version[1.7.1], pid[8], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-21 10:58:51,328][INFO ][node ] [Arc] initializing ...
[2015-08-21 10:58:51,542][INFO ][plugins ] [Arc] loaded [cloud-kubernetes], sites []
[2015-08-21 10:58:51,624][INFO ][env ] [Arc] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] initialized
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] starting ...
[2015-08-21 10:58:57,782][INFO ][transport ] [Arc] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.15.2:9300]}
[2015-08-21 10:58:57,847][INFO ][discovery ] [Arc] myesdb/-x16XFUzTCC8xYqWoeEOYQ
[2015-08-21 10:59:05,167][INFO ][cluster.service ] [Arc] new_master [Arc][-x16XFUzTCC8xYqWoeEOYQ][es-master-vxl6c][inet[/10.244.15.2:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-21 10:59:05,202][INFO ][node ] [Arc] started
[2015-08-21 10:59:05,238][INFO ][gateway ] [Arc] recovered [0] indices into cluster_state
[2015-08-21 11:02:28,797][INFO ][cluster.service ] [Arc] added {[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false}])
[2015-08-21 11:03:16,822][INFO ][cluster.service ] [Arc] added {[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false}])
```
As you can see, the cluster is up and running. Easy, wasn't it?
## Scale
Scaling each type of node to handle your cluster is as easy as:
```
kubectl scale --replicas=3 rc es-master
kubectl scale --replicas=2 rc es-client
kubectl scale --replicas=2 rc es-data
```
Did it work?
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
es-client-2ep9o 1/1 Running 0 4m
es-client-ye5s1 1/1 Running 0 50s
es-data-8az22 1/1 Running 0 47s
es-data-r9tgv 1/1 Running 0 3m
es-master-57h7k 1/1 Running 0 52s
es-master-kuwse 1/1 Running 0 52s
es-master-vxl6c 1/1 Running 0 8m
```
Let's take another look at the Elasticsearch master logs:
```
$ kubectl logs es-master-vxl6c
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2015-08-21 10:58:51,324][INFO ][node ] [Arc] version[1.7.1], pid[8], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-21 10:58:51,328][INFO ][node ] [Arc] initializing ...
[2015-08-21 10:58:51,542][INFO ][plugins ] [Arc] loaded [cloud-kubernetes], sites []
[2015-08-21 10:58:51,624][INFO ][env ] [Arc] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [14.4gb], net total_space [15.5gb], types [ext4]
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] initialized
[2015-08-21 10:58:57,439][INFO ][node ] [Arc] starting ...
[2015-08-21 10:58:57,782][INFO ][transport ] [Arc] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.244.15.2:9300]}
[2015-08-21 10:58:57,847][INFO ][discovery ] [Arc] myesdb/-x16XFUzTCC8xYqWoeEOYQ
[2015-08-21 10:59:05,167][INFO ][cluster.service ] [Arc] new_master [Arc][-x16XFUzTCC8xYqWoeEOYQ][es-master-vxl6c][inet[/10.244.15.2:9300]]{data=false, master=true}, reason: zen-disco-join (elected_as_master)
[2015-08-21 10:59:05,202][INFO ][node ] [Arc] started
[2015-08-21 10:59:05,238][INFO ][gateway ] [Arc] recovered [0] indices into cluster_state
[2015-08-21 11:02:28,797][INFO ][cluster.service ] [Arc] added {[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Gideon][4EfhWSqaTqikbK4tI7bODA][es-data-r9tgv][inet[/10.244.59.4:9300]]{master=false}])
[2015-08-21 11:03:16,822][INFO ][cluster.service ] [Arc] added {[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Venomm][tFYxwgqGSpOejHLG4umRqg][es-client-2ep9o][inet[/10.244.53.2:9300]]{data=false, master=false}])
[2015-08-21 11:04:40,781][INFO ][cluster.service ] [Arc] added {[Erik Josten][QUJlahfLTi-MsxzM6_Da0g][es-master-kuwse][inet[/10.244.59.5:9300]]{data=false, master=true},}, reason: zen-disco-receive(join from node[[Erik Josten][QUJlahfLTi-MsxzM6_Da0g][es-master-kuwse][inet[/10.244.59.5:9300]]{data=false, master=true}])
[2015-08-21 11:04:41,076][INFO ][cluster.service ] [Arc] added {[Power Princess][V4qnR-6jQOS5ovXQsPgo7g][es-master-57h7k][inet[/10.244.53.3:9300]]{data=false, master=true},}, reason: zen-disco-receive(join from node[[Power Princess][V4qnR-6jQOS5ovXQsPgo7g][es-master-57h7k][inet[/10.244.53.3:9300]]{data=false, master=true}])
[2015-08-21 11:04:53,966][INFO ][cluster.service ] [Arc] added {[Cagliostro][Wpfx5fkBRiG2qCEWd8laaQ][es-client-ye5s1][inet[/10.244.15.3:9300]]{data=false, master=false},}, reason: zen-disco-receive(join from node[[Cagliostro][Wpfx5fkBRiG2qCEWd8laaQ][es-client-ye5s1][inet[/10.244.15.3:9300]]{data=false, master=false}])
[2015-08-21 11:04:56,803][INFO ][cluster.service ] [Arc] added {[Thog][vkdEtX3ESfWmhXXf-Wi0_Q][es-data-8az22][inet[/10.244.15.4:9300]]{master=false},}, reason: zen-disco-receive(join from node[[Thog][vkdEtX3ESfWmhXXf-Wi0_Q][es-data-8az22][inet[/10.244.15.4:9300]]{master=false}])
```
## Access the service
*Don't forget* that services in Kubernetes are only accessible from containers in the cluster. For different behavior, you should [configure the creation of an external load-balancer](http://kubernetes.io/v1.0/docs/user-guide/services.html#type-loadbalancer). While it's supported within this example's service descriptor, its usage is out of the scope of this document, for now.
```
$ kubectl get service elasticsearch
NAME LABELS SELECTOR IP(S) PORT(S)
elasticsearch component=elasticsearch,role=client component=elasticsearch,role=client 10.100.134.2 9200/TCP
```
From any host on your cluster (that's running `kube-proxy`), run:
```
curl http://10.100.134.2:9200
```
You should see something similar to the following:
```json
{
"status" : 200,
"name" : "Cagliostro",
"cluster_name" : "myesdb",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
```
Or if you want to check cluster information:
```
curl http://10.100.134.2:9200/_cluster/health?pretty
```
You should see something similar to the following:
```json
{
"cluster_name" : "myesdb",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 7,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/elasticsearch/production_cluster/README.md](https://github.com/kubernetes/examples/blob/master/staging/elasticsearch/production_cluster/README.md)


@@ -1,133 +1 @@
### explorer
Explorer is a little container for examining the runtime environment Kubernetes produces for your pods.
The intended use is to substitute gcr.io/google_containers/explorer for your intended container, and then visit it via the proxy.
Currently, you can look at:
* The environment variables to make sure Kubernetes is doing what you expect.
* The filesystem to make sure the mounted volumes and files are also what you expect.
* DNS lookups, to see how DNS works.
`pod.yaml` is supplied as an example. You can control the port it serves on with the `-port` flag.
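A minimal sketch of such a pod, assuming the port is passed as a container argument (the bundled [pod.yaml](pod.yaml) is the authoritative version):
```console
$ kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: explorer
spec:
  containers:
  - name: explorer
    image: gcr.io/google_containers/explorer
    args: ["-port=8080"]
    ports:
    - containerPort: 8080
EOF
```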
Example from command line (the DNS lookup looks better from a web browser):
```console
$ kubectl create -f examples/explorer/pod.yaml
$ kubectl proxy &
Starting to serve on localhost:8001
$ curl localhost:8001/api/v1/proxy/namespaces/default/pods/explorer:8080/vars/
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=explorer
KIBANA_LOGGING_PORT_5601_TCP_PORT=5601
KUBERNETES_SERVICE_HOST=10.0.0.2
MONITORING_GRAFANA_PORT_80_TCP_PROTO=tcp
MONITORING_INFLUXDB_UI_PORT_80_TCP_PROTO=tcp
KIBANA_LOGGING_SERVICE_PORT=5601
MONITORING_HEAPSTER_PORT_80_TCP_PORT=80
MONITORING_INFLUXDB_UI_PORT_80_TCP_PORT=80
KIBANA_LOGGING_SERVICE_HOST=10.0.204.206
KIBANA_LOGGING_PORT_5601_TCP=tcp://10.0.204.206:5601
KUBERNETES_PORT=tcp://10.0.0.2:443
MONITORING_INFLUXDB_PORT=tcp://10.0.2.30:80
MONITORING_INFLUXDB_PORT_80_TCP_PROTO=tcp
MONITORING_INFLUXDB_UI_PORT=tcp://10.0.36.78:80
KUBE_DNS_PORT_53_UDP=udp://10.0.0.10:53
MONITORING_INFLUXDB_SERVICE_HOST=10.0.2.30
ELASTICSEARCH_LOGGING_PORT=tcp://10.0.48.200:9200
ELASTICSEARCH_LOGGING_PORT_9200_TCP_PORT=9200
KUBERNETES_PORT_443_TCP=tcp://10.0.0.2:443
ELASTICSEARCH_LOGGING_PORT_9200_TCP_PROTO=tcp
KIBANA_LOGGING_PORT_5601_TCP_ADDR=10.0.204.206
KUBE_DNS_PORT_53_UDP_ADDR=10.0.0.10
MONITORING_HEAPSTER_PORT_80_TCP_PROTO=tcp
MONITORING_INFLUXDB_PORT_80_TCP_ADDR=10.0.2.30
KIBANA_LOGGING_PORT=tcp://10.0.204.206:5601
MONITORING_GRAFANA_SERVICE_PORT=80
MONITORING_HEAPSTER_SERVICE_PORT=80
MONITORING_HEAPSTER_PORT_80_TCP=tcp://10.0.150.238:80
ELASTICSEARCH_LOGGING_PORT_9200_TCP=tcp://10.0.48.200:9200
ELASTICSEARCH_LOGGING_PORT_9200_TCP_ADDR=10.0.48.200
MONITORING_GRAFANA_PORT_80_TCP_PORT=80
MONITORING_HEAPSTER_PORT=tcp://10.0.150.238:80
MONITORING_INFLUXDB_PORT_80_TCP=tcp://10.0.2.30:80
KUBE_DNS_SERVICE_PORT=53
KUBE_DNS_PORT_53_UDP_PORT=53
MONITORING_GRAFANA_PORT_80_TCP_ADDR=10.0.100.174
MONITORING_INFLUXDB_UI_SERVICE_HOST=10.0.36.78
KIBANA_LOGGING_PORT_5601_TCP_PROTO=tcp
MONITORING_GRAFANA_PORT=tcp://10.0.100.174:80
MONITORING_INFLUXDB_UI_PORT_80_TCP_ADDR=10.0.36.78
KUBE_DNS_SERVICE_HOST=10.0.0.10
KUBERNETES_PORT_443_TCP_PORT=443
MONITORING_HEAPSTER_PORT_80_TCP_ADDR=10.0.150.238
MONITORING_INFLUXDB_UI_SERVICE_PORT=80
KUBE_DNS_PORT=udp://10.0.0.10:53
ELASTICSEARCH_LOGGING_SERVICE_HOST=10.0.48.200
KUBERNETES_SERVICE_PORT=443
MONITORING_HEAPSTER_SERVICE_HOST=10.0.150.238
MONITORING_INFLUXDB_SERVICE_PORT=80
MONITORING_INFLUXDB_PORT_80_TCP_PORT=80
KUBE_DNS_PORT_53_UDP_PROTO=udp
MONITORING_GRAFANA_PORT_80_TCP=tcp://10.0.100.174:80
ELASTICSEARCH_LOGGING_SERVICE_PORT=9200
MONITORING_GRAFANA_SERVICE_HOST=10.0.100.174
MONITORING_INFLUXDB_UI_PORT_80_TCP=tcp://10.0.36.78:80
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.0.0.2
HOME=/
$ curl localhost:8001/api/v1/proxy/namespaces/default/pods/explorer:8080/fs/
mount/
var/
.dockerenv
etc/
dev/
proc/
.dockerinit
sys/
README.md
explorer
$ curl localhost:8001/api/v1/proxy/namespaces/default/pods/explorer:8080/dns?q=elasticsearch-logging
<html><head></head><body>
<form action="/api/v1/proxy/namespaces/default/pods/explorer:8080/dns">
<input name="q" type="text" value="elasticsearch-logging"/>
<button type="submit">Lookup</button>
</form>
<br/><br/><pre>LookupNS(elasticsearch-logging):
Result: ([]*net.NS)<nil>
Error: &lt;*&gt;lookup elasticsearch-logging: no such host
LookupTXT(elasticsearch-logging):
Result: ([]string)<nil>
Error: &lt;*&gt;lookup elasticsearch-logging: no such host
LookupSRV(&#34;&#34;, &#34;&#34;, elasticsearch-logging):
cname: elasticsearch-logging.default.svc.cluster.local.
Result: ([]*net.SRV)[&lt;*&gt;{Target:(string)elasticsearch-logging.default.svc.cluster.local. Port:(uint16)9200 Priority:(uint16)10 Weight:(uint16)100}]
Error: <nil>
LookupHost(elasticsearch-logging):
Result: ([]string)[10.0.60.245]
Error: <nil>
LookupIP(elasticsearch-logging):
Result: ([]net.IP)[10.0.60.245]
Error: <nil>
LookupMX(elasticsearch-logging):
Result: ([]*net.MX)<nil>
Error: &lt;*&gt;lookup elasticsearch-logging: no such host
</nil></nil></nil></nil></nil></nil></pre>
</body></html>
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/explorer/README.md](https://github.com/kubernetes/examples/blob/master/staging/explorer/README.md)


@@ -1,271 +1 @@
## Guestbook Example
This example shows how to build a simple multi-tier web application using Kubernetes and Docker. The application consists of a web front-end, a Redis master for storage, and a replicated set of Redis slaves, for all of which we will create Kubernetes replication controllers, pods, and services.
If you are running a cluster in Google Container Engine (GKE), instead see the [Guestbook Example for Google Container Engine](https://cloud.google.com/container-engine/docs/tutorials/guestbook).
##### Table of Contents
* [Step Zero: Prerequisites](#step-zero)
* [Step One: Create the Redis master pod](#step-one)
* [Step Two: Create the Redis master service](#step-two)
* [Step Three: Create the Redis slave pods](#step-three)
* [Step Four: Create the Redis slave service](#step-four)
* [Step Five: Create the guestbook pods](#step-five)
* [Step Six: Create the guestbook service](#step-six)
* [Step Seven: View the guestbook](#step-seven)
* [Step Eight: Cleanup](#step-eight)
### Step Zero: Prerequisites <a id="step-zero"></a>
This example assumes that you have a working cluster. See the [Getting Started Guides](https://kubernetes.io/docs/getting-started-guides/) for details about creating a cluster.
**Tip:** View all the `kubectl` commands, including their options and descriptions in the [kubectl CLI reference](https://kubernetes.io/docs/user-guide/kubectl/kubectl.md).
### Step One: Create the Redis master pod<a id="step-one"></a>
Use the `examples/guestbook-go/redis-master-controller.json` file to create a [replication controller](https://kubernetes.io/docs/user-guide/replication-controller.md) and Redis master [pod](https://kubernetes.io/docs/user-guide/pods.md). The pod runs a Redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for 1 replica, so that the pod benefits from the self-healing mechanism in Kubernetes (keeps the pods alive).
1. Use the [redis-master-controller.json](redis-master-controller.json) file to create the Redis master replication controller in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/redis-master-controller.json
replicationcontrollers/redis-master
```
2. To verify that the redis-master controller is up, list the replication controllers you created in the cluster with the `kubectl get rc` command (if you don't specify a `--namespace`, the `default` namespace will be used; the same applies below):
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
redis-master redis-master gurpartap/redis app=redis,role=master 1
...
```
Result: The replication controller then creates the single Redis master pod.
3. To verify that the redis-master pod is running, list the pods you created in the cluster with the `kubectl get pods` command:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-xx4uv 1/1 Running 0 1m
...
```
Result: You'll see a single Redis master pod and the machine where the pod is running after the pod gets placed (may take up to thirty seconds).
4. To verify what containers are running in the redis-master pod, you can SSH to that machine with `gcloud compute ssh --zone` *`zone_name`* *`host_name`* and then run `docker ps`:
```console
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-bz1p
me@kubernetes-node-3:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
d5c458dabe50 redis "/entrypoint.sh redis" 5 minutes ago Up 5 minutes
```
Note: The initial `docker pull` can take a few minutes, depending on network conditions.
### Step Two: Create the Redis master service <a id="step-two"></a>
A Kubernetes [service](https://kubernetes.io/docs/user-guide/services.md) is a named load balancer that proxies traffic to one or more pods. The services in a Kubernetes cluster are discoverable inside other pods via environment variables or DNS.
Services find the pods to load balance based on pod labels. The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service.
1. Use the [redis-master-service.json](redis-master-service.json) file to create the service in your Kubernetes cluster by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/redis-master-service.json
services/redis-master
```
2. To verify that the redis-master service is up, list the services you created in the cluster with the `kubectl get services` command:
```console
$ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
redis-master 10.0.136.3 <none> 6379/TCP app=redis,role=master 1h
...
```
Result: All new pods will see the `redis-master` service running on the host (`$REDIS_MASTER_SERVICE_HOST` environment variable) at port 6379, or running on `redis-master:6379`. After the service is created, the service proxy on each node is configured to set up a proxy on the specified port (in our example, that's port 6379).
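For example, once the Redis slave pods from Step Three exist (they are created after this service, so the variables are injected into them), you could inspect one of them with something like the following, using a pod name from the later listings:
```console
$ kubectl exec redis-slave-b6wj4 -- env | grep REDIS_MASTER
```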
### Step Three: Create the Redis slave pods <a id="step-three"></a>
The Redis master we created earlier is a single pod (REPLICAS = 1), while the Redis read slaves we are creating here are 'replicated' pods. In Kubernetes, a replication controller is responsible for managing the multiple instances of a replicated pod.
1. Use the file [redis-slave-controller.json](redis-slave-controller.json) to create the replication controller by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
replicationcontrollers/redis-slave
```
2. To verify that the redis-slave controller is running, run the `kubectl get rc` command:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
redis-master redis-master redis app=redis,role=master 1
redis-slave redis-slave kubernetes/redis-slave:v2 app=redis,role=slave 2
...
```
Result: The replication controller creates and configures the Redis slave pods through the redis-master service (name:port pair, in our example that's `redis-master:6379`).
Example:
The Redis slaves get started by the replication controller with the following command:
```console
redis-server --slaveof redis-master 6379
```
3. To verify that the Redis master and slave pods are running, run the `kubectl get pods` command:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-xx4uv 1/1 Running 0 18m
redis-slave-b6wj4 1/1 Running 0 1m
redis-slave-iai40 1/1 Running 0 1m
...
```
Result: You see the single Redis master and two Redis slave pods.
### Step Four: Create the Redis slave service <a id="step-four"></a>
Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the Redis slave service provides transparent load balancing to clients.
1. Use the [redis-slave-service.json](redis-slave-service.json) file to create the Redis slave service by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/redis-slave-service.json
services/redis-slave
```
2. To verify that the redis-slave service is up, list the services you created in the cluster with the `kubectl get services` command:
```console
$ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
redis-master 10.0.136.3 <none> 6379/TCP app=redis,role=master 1h
redis-slave 10.0.21.92 <none> 6379/TCP app=redis,role=slave 1h
...
```
Result: The service is created with labels `app=redis` and `role=slave` to identify that the pods are running the Redis slaves.
Tip: It is helpful to set labels on your services themselves--as we've done here--to make it easy to locate them later.
### Step Five: Create the guestbook pods <a id="step-five"></a>
This is a simple Go `net/http` ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. The pods we are creating expose a simple JSON interface and serve a jQuery-Ajax-based UI. Like the Redis read slaves, these pods are also managed by a replication controller.
1. Use the [guestbook-controller.json](guestbook-controller.json) file to create the guestbook replication controller by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/guestbook-controller.json
replicationcontrollers/guestbook
```
Tip: If you want to modify the guestbook code, open the `_src` folder of this example and read the README.md and the Makefile. If you have pushed your custom image, be sure to update the `image` accordingly in guestbook-controller.json.
2. To verify that the guestbook replication controller is running, run the `kubectl get rc` command:
```console
$ kubectl get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
guestbook guestbook gcr.io/google_containers/guestbook:v3 app=guestbook 3
redis-master redis-master redis app=redis,role=master 1
redis-slave redis-slave kubernetes/redis-slave:v2 app=redis,role=slave 2
...
```
3. To verify that the guestbook pods are running (it might take up to thirty seconds to create the pods), list the pods you created in the cluster with the `kubectl get pods` command:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-3crgn 1/1 Running 0 2m
guestbook-gv7i6 1/1 Running 0 2m
guestbook-x405a 1/1 Running 0 2m
redis-master-xx4uv 1/1 Running 0 23m
redis-slave-b6wj4 1/1 Running 0 6m
redis-slave-iai40 1/1 Running 0 6m
...
```
Result: You see a single Redis master, two Redis slaves, and three guestbook pods.
### Step Six: Create the guestbook service <a id="step-six"></a>
Just like the others, we create a service to group the guestbook pods but this time, to make the guestbook front-end externally visible, we specify `"type": "LoadBalancer"`.
1. Use the [guestbook-service.json](guestbook-service.json) file to create the guestbook service by running the `kubectl create -f` *`filename`* command:
```console
$ kubectl create -f examples/guestbook-go/guestbook-service.json
```
2. To verify that the guestbook service is up, list the services you created in the cluster with the `kubectl get services` command:
```console
$ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
guestbook 10.0.217.218 146.148.81.8 3000/TCP app=guestbook 1h
redis-master 10.0.136.3 <none> 6379/TCP app=redis,role=master 1h
redis-slave 10.0.21.92 <none> 6379/TCP app=redis,role=slave 1h
...
```
Result: The service is created with label `app=guestbook`.
### Step Seven: View the guestbook <a id="step-seven"></a>
You can now play with the guestbook that you just created by opening it in a browser (it might take a few moments for the guestbook to come up).
* **Local Host:**
If you are running Kubernetes locally, to view the guestbook, navigate to `http://localhost:3000` in your browser.
* **Remote Host:**
1. To view the guestbook on a remote host, locate the external IP of the load balancer in the **IP** column of the `kubectl get services` output. In our example, the internal IP address is `10.0.217.218` and the external IP address is `146.148.81.8` (*Note: you might need to scroll to see the IP column*).
2. Append port `3000` to the IP address (for example `http://146.148.81.8:3000`), and then navigate to that address in your browser.
Result: The guestbook displays in your browser:
![Guestbook](guestbook-page.png)
**Further Reading:**
If you're using Google Compute Engine, see the details about limiting traffic to specific sources at [Google Compute Engine firewall documentation][gce-firewall-docs].
[cloud-console]: https://console.developer.google.com
[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls
### Step Eight: Cleanup <a id="step-eight"></a>
After you're done playing with the guestbook, you can clean up by deleting the guestbook service and removing the associated resources that were created, including load balancers, forwarding rules, target pools, and Kubernetes replication controllers and services.
Delete all the resources by running the following `kubectl delete -f` *`filename`* command:
```console
$ kubectl delete -f examples/guestbook-go
guestbook-controller
guestbook
redis-master-controller
redis-master
redis-slave-controller
redis-slave
```
Tip: To turn down your Kubernetes cluster, follow the corresponding instructions in the version of the
[Getting Started Guides](https://kubernetes.io/docs/getting-started-guides/) that you previously used to create your cluster.
This file has moved to [https://github.com/kubernetes/examples/blob/master/guestbook-go/README.md](https://github.com/kubernetes/examples/blob/master/guestbook-go/README.md)


@@ -1,702 +1 @@
## Guestbook Example
This example shows how to build a simple, multi-tier web application using Kubernetes and [Docker](https://www.docker.com/).
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Guestbook Example](#guestbook-example)
- [Prerequisites](#prerequisites)
- [Quick Start](#quick-start)
- [Step One: Start up the redis master](#step-one-start-up-the-redis-master)
- [Define a Deployment](#define-a-deployment)
- [Define a Service](#define-a-service)
- [Create a Service](#create-a-service)
- [Finding a Service](#finding-a-service)
- [Environment variables](#environment-variables)
- [DNS service](#dns-service)
- [Create a Deployment](#create-a-deployment)
- [Optional Interlude](#optional-interlude)
- [Step Two: Start up the redis slave](#step-two-start-up-the-redis-slave)
- [Step Three: Start up the guestbook frontend](#step-three-start-up-the-guestbook-frontend)
- [Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)](#using-type-loadbalancer-for-the-frontend-service-cloud-provider-specific)
- [Step Four: Cleanup](#step-four-cleanup)
- [Troubleshooting](#troubleshooting)
- [Appendix: Accessing the guestbook site externally](#appendix-accessing-the-guestbook-site-externally)
- [Google Compute Engine External Load Balancer Specifics](#google-compute-engine-external-load-balancer-specifics)
<!-- END MUNGE: GENERATED_TOC -->
The example consists of:
- A web frontend
- A [redis](http://redis.io/) master (for storage), and a replicated set of redis 'slaves'.
The web frontend interacts with the redis master via JavaScript Redis API calls.
**Note**: If you are running this example on a [Google Container Engine](https://cloud.google.com/container-engine/) installation, see [this Google Container Engine guestbook walkthrough](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead. The basic concepts are the same, but the walkthrough is tailored to a Container Engine setup.
### Prerequisites
This example requires a running Kubernetes cluster. First, check that kubectl is properly configured by getting the cluster state:
```console
$ kubectl cluster-info
```
If you see a URL response, you are ready to go. If not, read the [Getting Started guides](http://kubernetes.io/docs/getting-started-guides/) for how to get started, and follow the [prerequisites](http://kubernetes.io/docs/user-guide/prereqs/) to install and configure `kubectl`. As noted above, if you have a Google Container Engine cluster set up, read [this example](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.
All the files referenced in this example can be found in the [current folder](./).
### Quick Start
This section shows the simplest way to get the example working. If you want to know the details, you should skip this and read [the rest of the example](#step-one-start-up-the-redis-master).
Start the guestbook with one command:
```console
$ kubectl create -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
service "redis-master" created
deployment "redis-master" created
service "redis-slave" created
deployment "redis-slave" created
service "frontend" created
deployment "frontend" created
```
Alternatively, you can start the guestbook by running:
```console
$ kubectl create -f examples/guestbook/
```
Then, list all your Services:
```console
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.0.117 <none> 80/TCP 20s
redis-master 10.0.0.170 <none> 6379/TCP 20s
redis-slave 10.0.0.201 <none> 6379/TCP 20s
```
Now you can access the guestbook on each node with the frontend Service's `<Cluster-IP>:<PORT>`, e.g. `10.0.0.117:80` in this guide. `<Cluster-IP>` is a cluster-internal IP. If you want to access the guestbook from outside of the cluster, add `type: NodePort` to the frontend Service `spec` field. Then you can access the guestbook at `<NodeIP>:<NodePort>` from outside of the cluster. On cloud providers which support external load balancers, adding `type: LoadBalancer` to the frontend Service `spec` field will provision a load balancer for your Service. There are several ways to access the guestbook; see [Accessing services running on the cluster](https://kubernetes.io/docs/concepts/cluster-administration/access-cluster/#accessing-services-running-on-the-cluster) for more.
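For example, one quick, illustrative way to switch the existing frontend Service to `NodePort` without editing the YAML is to patch it in place and then read back the allocated node port:
```console
$ kubectl patch service frontend -p '{"spec": {"type": "NodePort"}}'
$ kubectl get service frontend -o jsonpath='{.spec.ports[0].nodePort}'
```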
Clean up the guestbook:
```console
$ kubectl delete -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
```
or
```console
$ kubectl delete -f examples/guestbook/
```
### Step One: Start up the redis master
Before continuing to the gory details, we also recommend you read the Kubernetes [concepts and user guide](http://kubernetes.io/docs/user-guide/).
**Note**: The redis master in this example is *not* highly available. Making it highly available would be an interesting, but intricate exercise — redis doesn't actually support multi-master Deployments at this point in time, so high availability would be a somewhat tricky thing to implement, and might involve periodic serialization to disk, and so on.
#### Define a Deployment
To start the redis master, use the file [redis-master-deployment.yaml](redis-master-deployment.yaml), which describes a single [pod](http://kubernetes.io/docs/user-guide/pods/) running a redis key-value server in a container.
Although we have a single instance of our redis master, we are using a [Deployment](http://kubernetes.io/docs/user-guide/deployments/) to enforce that exactly one pod keeps running. E.g., if the node were to go down, the Deployment will ensure that the redis master gets restarted on a healthy node. (In our simplified example, this could result in data loss.)
The file [redis-master-deployment.yaml](redis-master-deployment.yaml) defines the redis master Deployment:
<!-- BEGIN MUNGE: EXAMPLE redis-master-deployment.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  # labels:
  #   app: redis
  #   role: master
  #   tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   matchLabels:
  #     app: guestbook
  #     role: master
  #     tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
```
[Download example](redis-master-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE redis-master-deployment.yaml -->
#### Define a Service
A Kubernetes [Service](http://kubernetes.io/docs/user-guide/services/) is a named load balancer that proxies traffic to one or more containers. This is done using the [labels](http://kubernetes.io/docs/user-guide/labels/) metadata that we defined in the `redis-master` pod above. As mentioned, we have only one redis master, but we nevertheless want to create a Service for it. Why? Because it gives us a deterministic way to route to the single master using a stable, cluster-internal IP.
Services find the pods to load balance based on the pods' labels.
The `selector` field of the Service description determines which pods receive the traffic sent to the Service, and the `port` and `targetPort` fields define which port the Service proxy listens on.
The file [redis-master-service.yaml](redis-master-service.yaml) defines the redis master Service:
<!-- BEGIN MUNGE: EXAMPLE redis-master-service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
name: redis-master
labels:
app: redis
role: master
tier: backend
spec:
ports:
# the port that this service should serve on
- port: 6379
targetPort: 6379
selector:
app: redis
role: master
tier: backend
```
[Download example](redis-master-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE redis-master-service.yaml -->
#### Create a Service
According to the [config best practices](http://kubernetes.io/docs/user-guide/config-best-practices/), create a Service before its corresponding Deployments so that the scheduler can spread the pods comprising the Service. We therefore create the Service first:
```console
$ kubectl create -f examples/guestbook/redis-master-service.yaml
service "redis-master" created
```
Then check the list of services, which should include the redis-master:
```console
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master 10.0.76.248 <none> 6379/TCP 1s
```
This will cause all pods to see the redis master as running on `<CLUSTER-IP>:<PORT>`. A Service can map an incoming port to any `targetPort` in the backend pod. Once created, the Service proxy on each node is configured to forward traffic on the specified port (in this case, port `6379`).
`targetPort` defaults to `port` if it is omitted in the configuration. `targetPort` is the port the container accepts traffic on, and `port` is the abstracted Service port, which other pods use to reach the Service. For simplicity's sake, we omit `targetPort` in the following configurations; a short sketch of the mapping follows the list below.
The traffic flow from slaves to masters can be described in two steps:
- A *redis slave* will connect to `port` on the *redis master Service*
- Traffic will be forwarded from the Service `port` (on the Service node) to the `targetPort` on the pod that the Service listens to.
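As a sketch of that mapping, a Service is free to expose a `port` that differs from the container's `targetPort`; the values below are purely illustrative, since the guestbook keeps both at `6379`:
```yaml
# illustrative only -- the guestbook example keeps port and targetPort both at 6379
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  ports:
  - port: 10000        # the port other pods use to reach the Service
    targetPort: 6379   # the port the redis container actually listens on
  selector:
    app: redis
    role: master
    tier: backend
```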
For more details, please see [Connecting applications](http://kubernetes.io/docs/user-guide/connecting-applications/).
#### Finding a Service
Kubernetes supports two primary modes of finding a Service — environment variables and DNS.
##### Environment variables
The services in a Kubernetes cluster are discoverable inside other containers via [environment variables](https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables).
##### DNS service
An alternative is to use the [cluster's DNS service](https://kubernetes.io/docs/concepts/services-networking/service/#dns), if it has been enabled for the cluster. This lets all pods do name resolution of services automatically, based on the Service name.
This example has been configured to use the DNS service by default.
If your cluster does not have the DNS service enabled, then you can use environment variables instead by changing the
`GET_HOSTS_FROM` environment variable in both
[redis-slave-deployment.yaml](redis-slave-deployment.yaml) and [frontend-deployment.yaml](frontend-deployment.yaml)
from `dns` to `env` before you start up the app.
(However, this is unlikely to be necessary. You can check for the DNS service in the list of the cluster's services by
running `kubectl --namespace=kube-system get rc -l k8s-app=kube-dns`.)
Note that switching to `env` introduces creation-order dependencies, since a Service must be created before the client pods that rely on its environment variables.
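For reference, the change amounts to flipping a single value in the container `env` section of each of those two files; a sketch of the edited snippet:
```yaml
# excerpt from redis-slave-deployment.yaml / frontend-deployment.yaml
# after switching from DNS-based to environment-variable-based discovery
env:
- name: GET_HOSTS_FROM
  value: env   # was: dns
```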
#### Create a Deployment
Second, create the redis master pod in your Kubernetes cluster by running:
```console
$ kubectl create -f examples/guestbook/redis-master-deployment.yaml
deployment "redis-master" created
```
You can see the Deployment for your cluster by running:
```console
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
redis-master 1 1 1 1 27s
```
Then, you can list the pods in the cluster, to verify that the master is running:
```console
$ kubectl get pods
```
You'll see all pods in the cluster, including the redis master pod, and the status of each pod.
The name of the redis master will look similar to that in the following list:
```console
NAME READY STATUS RESTARTS AGE
redis-master-2353460263-1ecey 1/1 Running 0 1m
...
```
(Note that an initial `docker pull` to grab a container image may take a few minutes, depending on network conditions. A pod will be reported as `Pending` while its image is being downloaded.)
`kubectl get pods` will show only the pods in the default [namespace](http://kubernetes.io/docs/user-guide/namespaces/). To see pods in all namespaces, run:
```
kubectl get pods --all-namespaces
```
For more details, please see [Configuring containers](http://kubernetes.io/docs/user-guide/configuring-containers/) and [Deploying applications](http://kubernetes.io/docs/user-guide/deploying-applications/).
#### Optional Interlude
You can get information about a pod, including the machine that it is running on, via `kubectl describe pods/<POD-NAME>`. E.g., for the redis master, you should see something like the following (your pod name will be different):
```console
$ kubectl describe pods redis-master-2353460263-1ecey
Name: redis-master-2353460263-1ecey
Node: kubernetes-node-m0k7/10.240.0.5
...
Labels: app=redis,pod-template-hash=2353460263,role=master,tier=backend
Status: Running
IP: 10.244.2.3
Controllers: ReplicaSet/redis-master-2353460263
Containers:
master:
Container ID: docker://76cf8115485966131587958ea3cbe363e2e1dcce129e2e624883f393ce256f6c
Image: gcr.io/google_containers/redis:e2e
Image ID: docker://e5f6c5a2b5646828f51e8e0d30a2987df7e8183ab2c3ed0ca19eaa03cc5db08c
Port: 6379/TCP
...
```
The `Node` is the name and IP of the machine, e.g. `kubernetes-node-m0k7` in the example above. You can find more details about this node with `kubectl describe nodes kubernetes-node-m0k7`.
If you want to view the container logs for a given pod, you can run:
```console
$ kubectl logs <POD-NAME>
```
These logs will usually give you enough information to troubleshoot.
However, if you should want to SSH to the listed host machine, you can inspect various logs there directly as well. For example, with Google Compute Engine, using `gcloud`, you can SSH like this:
```console
me@workstation$ gcloud compute ssh <NODE-NAME>
```
Then, you can look at the Docker containers on the remote machine. You should see something like this (the specifics of the IDs will be different):
```console
me@kubernetes-node-krxw:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
0ffef9649265 redis:latest "/entrypoint.sh redi" About a minute ago Up About a minute k8s_master.869d22f3_redis-master-dz33o_default_1449a58a-5ead-11e5-a104-688f84ef8ef6_d74cb2b5
```
If you want to see the logs for a given container, you can run:
```console
$ docker logs <container_id>
```
### Step Two: Start up the redis slave
Now that the redis master is running, we can start up its 'read slaves'.
We'll define these as replicated pods as well, though this time — unlike for the redis master — we'll define the number of replicas to be 2.
In Kubernetes, a Deployment is responsible for managing multiple instances of a replicated pod. The Deployment will automatically launch new pods if the number of replicas falls below the specified number.
(This particular replicated pod is a great one to test this with -- you can try killing the Docker processes for your pods directly, then watch them come back online on a new node shortly thereafter.)
Just like the master, we want to have a Service to proxy connections to the redis slaves. In this case, in addition to discovery, the slave Service will provide transparent load balancing to web app clients.
This time we put the Service and Deployment into one [file](http://kubernetes.io/docs/user-guide/managing-deployments/#organizing-resource-configurations). Grouping related objects together in a single file is often better than having separate files.
The specification for the slaves is in [all-in-one/redis-slave.yaml](all-in-one/redis-slave.yaml):
<!-- BEGIN MUNGE: EXAMPLE all-in-one/redis-slave.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
name: redis-slave
labels:
app: redis
role: slave
tier: backend
spec:
ports:
# the port that this service should serve on
- port: 6379
selector:
app: redis
role: slave
tier: backend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redis-slave
# these labels can be applied automatically
# from the labels in the pod template if not set
# labels:
# app: redis
# role: slave
# tier: backend
spec:
# this replicas value is default
# modify it according to your case
replicas: 2
# selector can be applied automatically
# from the labels in the pod template if not set
# selector:
# matchLabels:
# app: guestbook
# role: slave
# tier: backend
template:
metadata:
labels:
app: redis
role: slave
tier: backend
spec:
containers:
- name: slave
image: gcr.io/google_samples/gb-redisslave:v1
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access an environment variable to find the master
# service's host, comment out the 'value: dns' line above, and
# uncomment the line below.
# value: env
ports:
- containerPort: 6379
```
[Download example](all-in-one/redis-slave.yaml?raw=true)
<!-- END MUNGE: EXAMPLE all-in-one/redis-slave.yaml -->
This time the selector for the Service is `app=redis,role=slave,tier=backend`, because that identifies the pods running redis slaves. It is generally helpful to set labels on the Service itself, as we've done here, so that you can easily locate it with the `kubectl get services -l "app=redis,role=slave,tier=backend"` command. For more information on the usage of labels, see [using-labels-effectively](http://kubernetes.io/docs/user-guide/managing-deployments/#using-labels-effectively).
Now that you have created the specification, create the Service in your cluster by running:
```console
$ kubectl create -f examples/guestbook/all-in-one/redis-slave.yaml
service "redis-slave" created
deployment "redis-slave" created
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master 10.0.76.248 <none> 6379/TCP 20m
redis-slave 10.0.112.188 <none> 6379/TCP 16s
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
redis-master 1 1 1 1 22m
redis-slave 2 2 2 2 2m
```
Once the Deployment is up, you can list the pods in the cluster, to verify that the master and slaves are running. You should see a list that includes something like the following:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-2353460263-1ecey 1/1 Running 0 35m
redis-slave-1691881626-dlf5f 1/1 Running 0 15m
redis-slave-1691881626-sfn8t 1/1 Running 0 15m
```
You should see a single redis master pod and two redis slave pods. As mentioned above, you can get more information about any pod with `kubectl describe pods/<POD_NAME>`. You can also view the resources in the [Kubernetes UI](http://kubernetes.io/docs/user-guide/ui/).
### Step Three: Start up the guestbook frontend
A frontend pod is a simple PHP server that is configured to talk to either the slave or master services, depending on whether the client request is a read or a write. It exposes a simple AJAX interface, and serves an Angular-based UX.
Again we'll create a set of replicated frontend pods instantiated by a Deployment — this time, with three replicas.
As with the other pods, we now want to create a Service to group the frontend pods.
The Deployment and Service are described in the file [all-in-one/frontend.yaml](all-in-one/frontend.yaml):
<!-- BEGIN MUNGE: EXAMPLE all-in-one/frontend.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# if your cluster supports it, uncomment the following to automatically create
# an external load-balanced IP for the frontend service.
# type: LoadBalancer
ports:
# the port that this service should serve on
- port: 80
selector:
app: guestbook
tier: frontend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: frontend
# these labels can be applied automatically
# from the labels in the pod template if not set
# labels:
# app: guestbook
# tier: frontend
spec:
# this replicas value is default
# modify it according to your case
replicas: 3
# selector can be applied automatically
# from the labels in the pod template if not set
# selector:
# matchLabels:
# app: guestbook
# tier: frontend
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google-samples/gb-frontend:v4
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
# If your cluster config does not include a dns service, then to
# instead access environment variables to find service host
# info, comment out the 'value: dns' line above, and uncomment the
# line below.
# value: env
ports:
- containerPort: 80
```
[Download example](all-in-one/frontend.yaml?raw=true)
<!-- END MUNGE: EXAMPLE all-in-one/frontend.yaml -->
#### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)
For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can request an external load balancer
in the Service `spec` to expose the Service on an external load balancer IP.
To do this, uncomment the `type: LoadBalancer` line in the [all-in-one/frontend.yaml](all-in-one/frontend.yaml) file before you start the service.
[See the appendix below](#appendix-accessing-the-guestbook-site-externally) on accessing the guestbook site externally for more details.
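With that line uncommented, the Service portion of the file reads roughly as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # ask the cloud provider for an external load balancer
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```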
Create the service and Deployment like this:
```console
$ kubectl create -f examples/guestbook/all-in-one/frontend.yaml
service "frontend" created
deployment "frontend" created
```
Then, list all your services again:
```console
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.63.63 <none> 80/TCP 1m
redis-master 10.0.76.248 <none> 6379/TCP 39m
redis-slave 10.0.112.188 <none> 6379/TCP 19m
```
Also list all your Deployments:
```console
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
frontend 3 3 3 3 2m
redis-master 1 1 1 1 39m
redis-slave 2 2 2 2 20m
```
Once it's up, i.e. when the desired replica count matches the current count (again, it may take up to thirty seconds to create the pods), you can list the pods with the specified labels in the cluster to verify that the master, slaves and frontends are all running. You should see a list of pods with the `tier` label like the following:
```console
$ kubectl get pods -L tier
NAME READY STATUS RESTARTS AGE TIER
frontend-1211764471-4e1j2 1/1 Running 0 4m frontend
frontend-1211764471-gkbkv 1/1 Running 0 4m frontend
frontend-1211764471-rk1cf 1/1 Running 0 4m frontend
redis-master-2353460263-1ecey 1/1 Running 0 42m backend
redis-slave-1691881626-dlf5f 1/1 Running 0 22m backend
redis-slave-1691881626-sfn8t 1/1 Running 0 22m backend
```
You should see a single redis master pod, two redis slaves, and three frontend pods.
The code for the PHP server that the frontends are running is in `examples/guestbook/php-redis/guestbook.php`. It looks like this:
```php
<?
set_include_path('.:/usr/local/lib/php');
error_reporting(E_ALL);
ini_set('display_errors', 1);
require 'Predis/Autoloader.php';
Predis\Autoloader::register();
if (isset($_GET['cmd']) === true) {
$host = 'redis-master';
if (getenv('GET_HOSTS_FROM') == 'env') {
$host = getenv('REDIS_MASTER_SERVICE_HOST');
}
header('Content-Type: application/json');
if ($_GET['cmd'] == 'set') {
$client = new Predis\Client([
'scheme' => 'tcp',
'host' => $host,
'port' => 6379,
]);
$client->set($_GET['key'], $_GET['value']);
print('{"message": "Updated"}');
} else {
$host = 'redis-slave';
if (getenv('GET_HOSTS_FROM') == 'env') {
$host = getenv('REDIS_SLAVE_SERVICE_HOST');
}
$client = new Predis\Client([
'scheme' => 'tcp',
'host' => $host,
'port' => 6379,
]);
$value = $client->get($_GET['key']);
print('{"data": "' . $value . '"}');
}
} else {
phpinfo();
} ?>
```
Note the use of the `redis-master` and `redis-slave` host names -- we're finding those Services via the Kubernetes cluster's DNS service, as discussed above. All the frontend replicas write to the `redis-master` Service and read from the load-balancing `redis-slave` Service, whose backing pods can be highly replicated as well.
### Step Four: Cleanup
If you are in a live Kubernetes cluster, you can just kill the pods by deleting the Deployments and Services. Using labels to select the resources to delete is an easy way to do this in one command.
```console
$ kubectl delete deployments,services -l "app in (redis, guestbook)"
```
To completely tear down a Kubernetes cluster, if you ran this from source, you can use:
```console
$ <kubernetes>/cluster/kube-down.sh
```
### Troubleshooting
If you are having trouble bringing up your guestbook app, double check that your external IP is properly defined for your frontend Service, and that the firewall for your cluster nodes is open to port 80.
Then, see the [troubleshooting documentation](http://kubernetes.io/docs/troubleshooting/) for a further list of common issues and how you can diagnose them.
### Appendix: Accessing the guestbook site externally
You'll want to set up your guestbook Service so that it can be accessed from outside of the internal Kubernetes network. Above, we introduced one way to do that, by adding `type: LoadBalancer` to the frontend Service `spec`.
More generally, Kubernetes supports two ways of exposing a Service onto an external IP address: `NodePort`s and `LoadBalancer`s, as described [here](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types).
If the `LoadBalancer` specification is used, it can take a short period for an external IP to show up in `kubectl get services` output, but you should then see it listed as well, e.g. like this:
```console
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend 10.0.63.63 23.236.59.54 80/TCP 1m
redis-master 10.0.76.248 <none> 6379/TCP 39m
redis-slave 10.0.112.188 <none> 6379/TCP 19m
```
Once you've exposed the service to an external IP, visit the IP to see your guestbook in action, i.e. `http://<EXTERNAL-IP>:<PORT>`.
You should see a web page that looks something like this (without the messages). Try adding some entries to it!
<img width="50%" src="http://amy-jo.storage.googleapis.com/images/gb_k8s_ex1.png">
If you are more experienced on the ops side, you can also get the service IP manually from the output of `kubectl get pods,services` and modify your firewall using standard tools and services (firewalld, iptables, SELinux) you are already familiar with.
#### Google Compute Engine External Load Balancer Specifics
In Google Compute Engine, Kubernetes automatically creates forwarding rules for services with `LoadBalancer`.
You can list the forwarding rules like this (the forwarding rule also indicates the external IP):
```console
$ gcloud compute forwarding-rules list
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend
```
In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node` (replace with your tags as appropriate):
```console
$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-node kubernetes-node-80
```
For GCE Kubernetes startup details, see the [Getting started on Google Compute Engine](http://kubernetes.io/docs/getting-started-guides/gce/) guide.
For Google Compute Engine details about limiting traffic to specific sources, see the [Google Compute Engine firewall documentation][gce-firewall-docs].
[cloud-console]: https://console.developer.google.com
[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/guestbook/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/guestbook/README.md](https://github.com/kubernetes/examples/blob/master/guestbook/README.md)
View File
@ -1,129 +1 @@
# Nginx https service
This example creates a basic nginx https service, useful for verifying proof of concept, keys, secrets, configmaps, and end-to-end https service creation in Kubernetes.
It uses an [nginx server block](http://wiki.nginx.org/ServerBlockExample) to serve the index page over both http and https. It will detect changes to nginx's configuration file, `default.conf`, mounted as a configmap volume, and reload nginx automatically.
### Generate certificates
First, generate a self-signed RSA key and certificate that the server can use for TLS. This step invokes the make_secret.go script in the same directory, which uses the Kubernetes API to generate a secret JSON config in /tmp/secret.json.
```sh
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
```
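The generated `/tmp/secret.json` is simply a Kubernetes Secret holding the key and certificate. Expressed in YAML, it looks roughly like the sketch below (the exact data key names and type come from the make_secret.go script, so treat them as assumptions here):
```yaml
# rough YAML equivalent of the generated /tmp/secret.json (key names assumed)
apiVersion: v1
kind: Secret
metadata:
  name: nginxsecret
type: Opaque
data:
  nginx.crt: <base64-encoded certificate>
  nginx.key: <base64-encoded private key>
```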
### Create a https nginx application running in a kubernetes cluster
You need a [running kubernetes cluster](https://kubernetes.io/docs/setup/pick-right-solution/) for this to work.
Create a secret and a configmap.
```sh
$ kubectl create -f /tmp/secret.json
secret "nginxsecret" created
$ kubectl create configmap nginxconfigmap --from-file=examples/https-nginx/default.conf
configmap "nginxconfigmap" created
```
Create a service and a replication controller using the configuration in nginx-app.yaml.
```sh
$ kubectl create -f examples/https-nginx/nginx-app.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32211,tcp:30028) to serve traffic.
...
service "nginxsvc" created
replicationcontroller "my-nginx" created
```
Then, find the node port that Kubernetes is using for http and https traffic.
```sh
$ kubectl get service nginxsvc -o json
...
{
"name": "http",
"protocol": "TCP",
"port": 80,
"targetPort": 80,
"nodePort": 32211
},
{
"name": "https",
"protocol": "TCP",
"port": 443,
"targetPort": 443,
"nodePort": 30028
}
...
```
If you are using Kubernetes on a cloud provider, you may need to create cloud firewall rules to serve traffic.
If you are using GCE or GKE, you can use the following commands to add firewall rules.
```sh
$ gcloud compute firewall-rules create allow-nginx-http --allow tcp:32211 --description "Incoming http allowed."
Created [https://www.googleapis.com/compute/v1/projects/hello-world-job/global/firewalls/allow-nginx-http].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
allow-nginx-http default 0.0.0.0/0 tcp:32211
$ gcloud compute firewall-rules create allow-nginx-https --allow tcp:30028 --description "Incoming https allowed."
Created [https://www.googleapis.com/compute/v1/projects/hello-world-job/global/firewalls/allow-nginx-https].
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
allow-nginx-https default 0.0.0.0/0 tcp:30028
```
Find your nodes' IPs.
```sh
$ kubectl get nodes -o json | grep ExternalIP -A 2
"type": "ExternalIP",
"address": "104.198.1.26"
}
--
"type": "ExternalIP",
"address": "104.198.12.158"
}
--
"type": "ExternalIP",
"address": "104.198.11.137"
}
```
Now your service is up. You can either use your browser or type the following commands.
```sh
$ curl https://<your-node-ip>:<your-port> -k
$ curl https://104.198.1.26:30028 -k
...
<title>Welcome to nginx!</title>
...
```
Then we will update the configmap by changing `index.html` to `index2.html`.
```sh
kubectl create configmap nginxconfigmap --from-file=examples/https-nginx/default.conf -o yaml --dry-run\
| sed 's/index.html/index2.html/g' | kubectl apply -f -
configmap "nginxconfigmap" configured
```
Wait a few seconds to let the change propagate. Now you should be able to either use your browser or type the following commands to verify that nginx has been reloaded with the new configuration.
```sh
$ curl https://<your-node-ip>:<your-port> -k
$ curl https://104.198.1.26:30028 -k
...
<title>Nginx reloaded!</title>
...
```
For more information on how to run this in a kubernetes cluster, please see the [user-guide](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/https-nginx/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md](https://github.com/kubernetes/examples/blob/master/staging/https-nginx/README.md)
View File
@ -1,134 +1 @@
## Java EE Application using WildFly and MySQL
The following document describes the deployment of a Java EE application using [WildFly](http://wildfly.org) application server and MySQL database server on Kubernetes. The sample application source code is at: https://github.com/javaee-samples/javaee7-simple-sample.
### Prerequisites
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/prereqs.md
### Start MySQL Pod
In Kubernetes a [_Pod_](https://kubernetes.io/docs/user-guide/pods.md) is the smallest deployable unit that can be created, scheduled, and managed. It's a co-located group of containers that share an IP address and storage volumes.
Here is the config for MySQL pod: [mysql-pod.yaml](mysql-pod.yaml)
<!-- BEGIN MUNGE: mysql-pod.yaml -->
<!-- END MUNGE: EXAMPLE -->
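The munged example block above is empty in this snapshot. As a rough orientation, a MySQL Pod matching the labels used later in this guide might look like the following sketch (the image tag and credentials are assumptions, for illustration only):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
    context: docker-k8s-lab
spec:
  containers:
  - name: mysql
    image: mysql:latest          # image tag assumed
    env:
    - name: MYSQL_USER
      value: mysql
    - name: MYSQL_PASSWORD
      value: mysql
    - name: MYSQL_DATABASE
      value: sample
    - name: MYSQL_ROOT_PASSWORD
      value: supersecret         # for illustration only
    ports:
    - containerPort: 3306
```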
Create the MySQL pod:
```sh
kubectl create -f examples/javaee/mysql-pod.yaml
```
Check status of the pod:
```sh
kubectl get -w po
NAME READY STATUS RESTARTS AGE
mysql-pod 0/1 Pending 0 4s
NAME READY STATUS RESTARTS AGE
mysql-pod 0/1 Running 0 44s
mysql-pod 1/1 Running 0 44s
```
Wait for the status to reach `1/1` and `Running`.
### Start MySQL Service
We are creating a [_Service_](https://kubernetes.io/docs/user-guide/services.md) to expose the TCP port of the MySQL server. A Service distributes traffic across a set of Pods. The creation order of a Service and its target Pods does not matter; however, a Service needs to be created before any other Pods that consume it via environment variables are started.
In this application, we use a Kubernetes Service to provide a discoverable endpoint for the MySQL server in the cluster. The MySQL Service targets pods with the labels `name: mysql-pod` and `context: docker-k8s-lab`.
Here is the definition of the MySQL Service: [mysql-service.yaml](mysql-service.yaml)
<!-- BEGIN MUNGE: mysql-service.yaml -->
<!-- END MUNGE: EXAMPLE -->
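This munge block is empty here as well. A sketch of a Service consistent with the selector labels and port shown in this guide:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    name: mysql-pod
    context: docker-k8s-lab
spec:
  ports:
  - port: 3306
  selector:
    name: mysql-pod
    context: docker-k8s-lab
```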
Create this service:
```sh
kubectl create -f examples/javaee/mysql-service.yaml
```
Get status of the service:
```sh
kubectl get -w svc
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.247.0.1 443/TCP
mysql-service context=docker-k8s-lab,name=mysql-pod context=docker-k8s-lab,name=mysql-pod 10.247.63.43 3306/TCP
```
If multiple services are running, then it can be narrowed by specifying labels:
```sh
kubectl get -w po -l context=docker-k8s-lab,name=mysql-pod
NAME READY STATUS RESTARTS AGE
mysql-pod 1/1 Running 0 4m
```
These are also the selector labels used by the Service to target pods.
When a Service is run on a node, the kubelet adds a set of environment variables for each active Service. It supports both Docker links compatible variables and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables, where the Service name is upper-cased and dashes are converted to underscores.
Our Service name is `mysql-service`, so the `MYSQL_SERVICE_SERVICE_HOST` and `MYSQL_SERVICE_SERVICE_PORT` variables are available to other pods. These host and port variables are then used to create the JDBC resource in WildFly.
### Start WildFly Replication Controller
WildFly is a lightweight Java EE 7 compliant application server. It is wrapped in a Replication Controller and used as the Java EE runtime.
In Kubernetes a [_Replication Controller_](https://kubernetes.io/docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a Service, it also has a desired number of replicas, and it will create or delete Pods to ensure that the number of Pods matches its desired state.
Here is the definition of the WildFly Replication Controller: [wildfly-rc.yaml](wildfly-rc.yaml).
<!-- BEGIN MUNGE: wildfly-rc.yaml -->
<!-- END MUNGE: EXAMPLE -->
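Again the munge block is empty in this snapshot. A minimal sketch of such a Replication Controller, with the image and labels assumed for illustration:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
  labels:
    name: wildfly
spec:
  replicas: 1
  selector:
    name: wildfly
  template:
    metadata:
      labels:
        name: wildfly
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly:latest   # image assumed
        ports:
        - containerPort: 8080
```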
Create this controller:
```sh
kubectl create -f examples/javaee/wildfly-rc.yaml
```
Check status of the pod inside replication controller:
```sh
kubectl get po
NAME READY STATUS RESTARTS AGE
mysql-pod 1/1 Running 0 1h
wildfly-rc-w2kk5 1/1 Running 0 6m
```
### Access the application
Get IP address of the pod:
```sh
kubectl get -o template po wildfly-rc-w2kk5 --template={{.status.podIP}}
10.246.1.23
```
Log in to node and access the application:
```sh
vagrant ssh node-1
Last login: Thu Jul 16 00:24:36 2015 from 10.0.2.2
[vagrant@kubernetes-node-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
```
### Delete resources
All resources created in this application can be deleted:
```sh
kubectl delete -f examples/javaee/mysql-pod.yaml
kubectl delete -f examples/javaee/mysql-service.yaml
kubectl delete -f examples/javaee/wildfly-rc.yaml
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/javaee/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/javaee/README.md](https://github.com/kubernetes/examples/blob/master/staging/javaee/README.md)
View File
@ -1,185 +1 @@
## Java Web Application with Tomcat and Sidecar Container
The following document describes the deployment of a Java web application using Tomcat. Instead of packaging the `war` file inside the Tomcat image or mounting the `war` as a volume, we use a sidecar container as the `war` file provider.
### Prerequisites
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/prereqs.md
### Overview
This sidecar mode brings a new workflow for Java users:
![](workflow.png?raw=true "Workflow")
As you can see, the user can create a `sample:v2` container as a sidecar that "provides" the war file to Tomcat by copying it to the shared `emptyDir` volume. The Pod ensures the two containers form an "atomic" scheduling unit, which is perfect for this case. Thus, your application version management is fully separated from web server management.
For example, if you are going to change the configuration of your Tomcat:
```console
$ docker exec -it <tomcat_container_id> /bin/bash
# make some change, and then commit it to a new image
$ docker commit <tomcat_container_id> mytomcat:7.0-dev
```
Done! The new Tomcat image **will not** interfere with your `sample.war` file. You can re-use your Tomcat image with many different war container images for many different apps, without having to build lots of different images.
This also means that rolling out a new Tomcat, to patch security issues or anything else, doesn't require rebuilding N different images.
**Why not put my `sample.war` in a host dir and mount it into the tomcat container?**
You would have to **manage the volumes** yourself in this case: for example, when the pod is restarted or rescheduled on another node, your content is not available on that host.
Generally, we would have to set up a distributed file system volume (NFS at least) to solve this (if we do not have a GCE PD volume), which is usually unnecessary.
### How To Set this Up
In Kubernetes a [_Pod_](https://kubernetes.io/docs/user-guide/pods.md) is the smallest deployable unit that can be created, scheduled, and managed. It's a co-located group of containers that share an IP address and storage volumes.
Here is the config [javaweb.yaml](javaweb.yaml) for Java Web pod:
NOTE: you should define the `war` container **first**, as it is the "provider".
<!-- BEGIN MUNGE: javaweb.yaml -->
```
apiVersion: v1
kind: Pod
metadata:
name: javaweb
spec:
containers:
- image: resouer/sample:v1
name: war
volumeMounts:
- mountPath: /app
name: app-volume
- image: resouer/mytomcat:7.0
name: tomcat
command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"]
volumeMounts:
- mountPath: /root/apache-tomcat-7.0.42-v2/webapps
name: app-volume
ports:
- containerPort: 8080
hostPort: 8001
volumes:
- name: app-volume
emptyDir: {}
```
<!-- END MUNGE: EXAMPLE -->
The only magic here is the `resouer/sample:v1` image:
```
FROM busybox:latest
ADD sample.war sample.war
CMD "sh" "mv.sh"
```
And the contents of `mv.sh` is:
```sh
cp /sample.war /app
tail -f /dev/null
```
#### Explanation
1. 'war' container only contains the `war` file of your app
2. 'war' container's CMD tries to copy `sample.war` to the `emptyDir` volume path
3. The last line of `tail -f` is just used to hold the container, as Replication Controller does not support one-off tasks
4. 'tomcat' container will load the `sample.war` from volume path
What's more, if you don't want to include a built-in `mv.sh` script in the `war` container, you can use a Pod lifecycle handler to do the copy work; here's an example, [javaweb-2.yaml](javaweb-2.yaml):
<!-- BEGIN MUNGE: javaweb-2.yaml -->
```
apiVersion: v1
kind: Pod
metadata:
name: javaweb-2
spec:
containers:
- image: resouer/sample:v2
name: war
lifecycle:
postStart:
exec:
command:
- "cp"
- "/sample.war"
- "/app"
volumeMounts:
- mountPath: /app
name: app-volume
- image: resouer/mytomcat:7.0
name: tomcat
command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"]
volumeMounts:
- mountPath: /root/apache-tomcat-7.0.42-v2/webapps
name: app-volume
ports:
- containerPort: 8080
hostPort: 8001
volumes:
- name: app-volume
emptyDir: {}
```
<!-- END MUNGE: EXAMPLE -->
And the `resouer/sample:v2` Dockerfile is quite simple:
```
FROM busybox:latest
ADD sample.war sample.war
CMD "tail" "-f" "/dev/null"
```
#### Explanation
1. 'war' container only contains the `war` file of your app
2. 'war' container's CMD uses `tail -f` to hold the container, nothing more
3. The `postStart` lifecycle handler will do `cp` after the `war` container is started
4. Again 'tomcat' container will load the `sample.war` from volume path
Done! Now your `war` container contains nothing except `sample.war`, clean enough.
### Test It Out
Create the Java web pod:
```console
$ kubectl create -f examples/javaweb-tomcat-sidecar/javaweb-2.yaml
```
Check status of the pod:
```console
$ kubectl get -w po
NAME READY STATUS RESTARTS AGE
javaweb-2 2/2 Running 0 7s
```
Wait for the status to reach `2/2` and `Running`. Then you can visit the "Hello, World" page at `http://localhost:8001/sample/index.html`.
You can also test `javaweb.yaml` in the same way.
### Delete Resources
All resources created in this application can be deleted:
```console
$ kubectl delete -f examples/javaweb-tomcat-sidecar/javaweb-2.yaml
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/javaweb-tomcat-sidecar/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/javaweb-tomcat-sidecar/README.md](https://github.com/kubernetes/examples/blob/master/staging/javaweb-tomcat-sidecar/README.md)
View File
@ -1,7 +1 @@
This file has moved to: http://kubernetes.io/docs/user-guide/jobs/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/job/expansions/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/job/expansions/README.md](https://github.com/kubernetes/examples/blob/master/staging/job/expansions/README.md)
View File
@ -1,7 +1 @@
This file has moved to: http://kubernetes.io/docs/user-guide/jobs/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/job/work-queue-1/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/job/work-queue-1/README.md](https://github.com/kubernetes/examples/blob/master/staging/job/work-queue-1/README.md)
View File
@ -1,7 +1 @@
This file has moved to: http://kubernetes.io/docs/user-guide/jobs/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/job/work-queue-2/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/job/work-queue-2/README.md](https://github.com/kubernetes/examples/blob/master/staging/job/work-queue-2/README.md)
View File
@ -1,31 +1 @@
To access the Kubernetes API [from a Pod](https://kubernetes.io/docs/user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod), one solution is to run `kubectl proxy` in a so-called sidecar container within the Pod. To do this, you need to package `kubectl` in a container. This is useful when service accounts are being used for accessing the API and the old no-auth KUBERNETES_RO service is not available. Since all containers in a Pod share the same network namespace, containers will be able to reach the API on localhost.
This example contains a [Dockerfile](Dockerfile) and [Makefile](Makefile) for packaging up `kubectl` into
a container and pushing the resulting container image to the Google Container Registry. You can modify the Makefile to push to a different registry if needed.
Assuming you have checked out the Kubernetes source code and set up your environment to build it, the typical build steps for this kubectl container are:
```console
$ cd examples/kubectl-container
$ make kubectl
$ make tag
$ make container
$ make push
```
It is not currently automated as part of a release process, so for the moment
this is an example of what to do if you want to package `kubectl` into a
container and use it within a pod.
In the future, we may release consistently versioned groups of containers when
we cut a release, in which case the source of gcr.io/google_containers/kubectl
would become that automated process.
[```pod.json```](pod.json) is provided as an example of running `kubectl` as a sidecar
container in a Pod, and to help you verify that `kubectl` works correctly in
this configuration. To launch this Pod, you will need a configured Kubernetes endpoint and `kubectl` installed locally, then simply create the Pod:
```console
$ kubectl create -f pod.json
```
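For orientation, here is a YAML sketch of a Pod along the lines of `pod.json`; the application container and the kubectl image tag are assumptions for illustration:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-sidecar-example
spec:
  containers:
  - name: app
    image: nginx                                  # stand-in for your application container
    ports:
    - containerPort: 80
  - name: kubectl-proxy
    image: gcr.io/google_containers/kubectl:v1.0  # image tag assumed
    args:
    - proxy
    - "-p"
    - "8001"
```
The application container can then reach the API through the proxy at `http://localhost:8001`.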
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/kubectl-container/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/kubectl-container/README.md](https://github.com/kubernetes/examples/blob/master/staging/kubectl-container/README.md)
View File
@ -1,215 +1 @@
Meteor on Kubernetes
====================
This example shows you how to package and run a
[Meteor](https://www.meteor.com/) app on Kubernetes.
Get started on Google Compute Engine
------------------------------------
Meteor uses MongoDB, and we will use the `GCEPersistentDisk` type of
volume for persistent storage. Therefore, this example is only
applicable to [Google Compute
Engine](https://cloud.google.com/compute/). Take a look at the
[volumes documentation](https://kubernetes.io/docs/user-guide/volumes.md) for other options.
First, if you have not already done so:
1. [Create](https://cloud.google.com/compute/docs/quickstart) a
[Google Cloud Platform](https://cloud.google.com/) project.
2. [Enable
billing](https://developers.google.com/console/help/new/#billing).
3. Install the [gcloud SDK](https://cloud.google.com/sdk/).
Authenticate with gcloud and set the gcloud default project name to
point to the project you want to use for your Kubernetes cluster:
```sh
gcloud auth login
gcloud config set project <project-name>
```
Next, start up a Kubernetes cluster:
```sh
wget -q -O - https://get.k8s.io | bash
```
Please see the [Google Compute Engine getting started
guide](https://kubernetes.io/docs/getting-started-guides/gce.md) for full
details and other options for starting a cluster.
Build a container for your Meteor app
-------------------------------------
To be able to run your Meteor app on Kubernetes you need to build a Docker container for it first. To do that you need to install [Docker](https://www.docker.com). Once you have that, you need to add two files to your existing Meteor project: `Dockerfile` and `.dockerignore`.
`Dockerfile` should contain the below lines. You should replace the
`ROOT_URL` with the actual hostname of your app.
```
FROM chees/meteor-kubernetes
ENV ROOT_URL http://myawesomeapp.com
```
The `.dockerignore` file should contain the below lines. This tells
Docker to ignore the files in those directories when it is building
your container.
```
.meteor/local
packages/*/.build*
```
You can see an example meteor project already set up at:
[meteor-gke-example](https://github.com/Q42/meteor-gke-example). Feel
free to use this app for this example.
> Note: The next step will not work if you have added mobile platforms
> to your meteor project. Check with `meteor list-platforms`
Now you can build your container by running this in
your Meteor project directory:
```
docker build -t my-meteor .
```
Pushing to a registry
---------------------
For the [Docker Hub](https://hub.docker.com/), tag your app image with
your username and push to the Hub with the below commands. Replace
`<username>` with your Hub username.
```
docker tag my-meteor <username>/my-meteor
docker push <username>/my-meteor
```
For [Google Container
Registry](https://cloud.google.com/tools/container-registry/), tag
your app image with your project ID, and push to GCR. Replace
`<project>` with your project ID.
```
docker tag my-meteor gcr.io/<project>/my-meteor
gcloud docker -- push gcr.io/<project>/my-meteor
```
Running
-------
Now that you have containerized your Meteor app it's time to set up
your cluster. Edit [`meteor-controller.json`](meteor-controller.json)
and make sure the `image:` points to the container you just pushed to
the Docker Hub or GCR.
We will need to provide MongoDB a persistent Kubernetes volume to
store its data. See the [volumes documentation](https://kubernetes.io/docs/user-guide/volumes.md) for
options. We're going to use Google Compute Engine persistent
disks. Create the MongoDB disk by running:
```
gcloud compute disks create --size=200GB mongo-disk
```
Now you can start Mongo using that disk:
```
kubectl create -f examples/meteor/mongo-pod.json
kubectl create -f examples/meteor/mongo-service.json
```
Wait until Mongo is started completely and then start up your Meteor app:
```
kubectl create -f examples/meteor/meteor-service.json
kubectl create -f examples/meteor/meteor-controller.json
```
Note that [`meteor-service.json`](meteor-service.json) creates a load balancer, so
your app should be available through the IP of that load balancer once
the Meteor pods are started. We also created the service before creating the rc to
aid the scheduler in placing pods, as the scheduler ranks pod placement according to
service anti-affinity (among other things). You can find the IP of your load balancer
by running:
```
kubectl get service meteor --template="{{range .status.loadBalancer.ingress}} {{.ip}} {{end}}"
```
You will have to open up port 80 if it's not open yet in your
environment. On Google Compute Engine, you may run the below command.
```
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-node
```
What is going on?
-----------------
Firstly, the `FROM chees/meteor-kubernetes` line in your `Dockerfile`
specifies the base image for your Meteor app. The code for that image
is located in the `dockerbase/` subdirectory. Open up the `Dockerfile`
to get an insight of what happens during the `docker build` step. The
image is based on the Node.js official image. It then installs Meteor
and copies in your app's code. The last line specifies what happens
when your app container is run.
```sh
ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js
```
Here we can see the MongoDB host and port information being passed
into the Meteor app. The `MONGO_SERVICE...` environment variables are
set by Kubernetes, and point to the service named `mongo` specified in
[`mongo-service.json`](mongo-service.json). See the [environment
documentation](https://kubernetes.io/docs/user-guide/container-environment.md) for more details.
As you may know, Meteor uses long-lived connections, and requires
_sticky sessions_. With Kubernetes you can scale out your app easily
with session affinity. The
[`meteor-service.json`](meteor-service.json) file contains
`"sessionAffinity": "ClientIP"`, which provides this for us. See the
[service
documentation](https://kubernetes.io/docs/user-guide/services.md#virtual-ips-and-service-proxies) for
more information.
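Expressed in YAML, the relevant part of that Service spec is a single extra field (the port and selector label here are assumptions for illustration):
```yaml
# YAML equivalent of the sessionAffinity setting in meteor-service.json
apiVersion: v1
kind: Service
metadata:
  name: meteor
spec:
  type: LoadBalancer
  ports:
  - port: 80
  # route all requests from a given client IP to the same pod
  sessionAffinity: ClientIP
  selector:
    name: meteor   # label assumed for illustration
```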
As mentioned above, the mongo container uses a volume which is mapped
to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
section specifies the volume:
```json
{
"volumeMounts": [
{
"name": "mongo-disk",
"mountPath": "/data/db"
}
```
The name `mongo-disk` refers to the volume specified outside the
container section:
```json
{
"volumes": [
{
"name": "mongo-disk",
"gcePersistentDisk": {
"pdName": "mongo-disk",
"fsType": "ext4"
}
}
],
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/meteor/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/meteor/README.md](https://github.com/kubernetes/examples/blob/master/staging/meteor/README.md)
View File
@ -1,14 +1 @@
Building the meteor-kubernetes base image
-----------------------------------------
As a normal user you don't need to do this since the image is already built and pushed to Docker Hub. You can just use it as a base image. See [this example](https://github.com/Q42/meteor-gke-example/blob/master/Dockerfile).
To build and push the base meteor-kubernetes image:
```sh
docker build -t chees/meteor-kubernetes .
docker push chees/meteor-kubernetes
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/meteor/dockerbase/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/meteor/dockerbase/README.md](https://github.com/kubernetes/examples/blob/master/staging/meteor/dockerbase/README.md)
View File
@ -1,51 +1 @@
# MySQL installation with cinder volume plugin
Cinder is a Block Storage service for OpenStack. This example shows how it can be used as an attachment mounted to a pod in Kubernetes.
### Prerequisites
Start the kubelet with the cloud provider set to OpenStack and a valid cloud config.
A sample cloud config:
```
[Global]
auth-url=https://os-identity.vip.foo.bar.com:5443/v2.0
username=user
password=pass
region=region1
tenant-id=0c331a1df18571594d49fe68asa4e
```
Currently the Cinder volume plugin is designed to work only on Linux hosts, and it supports ext4 and ext3 as filesystem types.
Make sure that the kubelet host machine has the following executables:
```
/bin/lsblk -- To Find out the fstype of the volume
/sbin/mkfs.ext3 and /sbin/mkfs.ext4 -- To format the volume if required
/usr/bin/udevadm -- To probe the volume attached so that a symlink is created under /dev/disk/by-id/ with a virtio- prefix
```
Ensure Cinder is installed and configured properly in the region in which the kubelet is running.
### Example
Create a Cinder volume, e.g.:
`cinder create --display-name=test-repo 2`
Use the ID of the Cinder volume you created to create a pod [definition](mysql.yaml).
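For orientation, the Cinder volume plugs into the pod spec as sketched below; the volume ID is whatever `cinder create` returned, and the image is a placeholder:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql                  # image placeholder
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-data
    cinder:
      volumeID: <id-from-cinder-create>
      fsType: ext4
```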
Create a new pod with this definition:
`cluster/kubectl.sh create -f examples/mysql-cinder-pd/mysql.yaml`
This should now
1. Attach the specified volume to the kubelet's host machine
2. Format the volume if required (only if the volume specified is not already formatted to the fstype specified)
3. Mount it on the kubelet's host machine
4. Spin up a container with this volume mounted to the path specified in the pod definition
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/mysql-cinder-pd/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/mysql-cinder-pd/README.md](https://github.com/kubernetes/examples/blob/master/staging/mysql-cinder-pd/README.md)
View File
@ -1,364 +1 @@
# Persistent Installation of MySQL and WordPress on Kubernetes
This example describes how to run a persistent installation of
[WordPress](https://wordpress.org/) and
[MySQL](https://www.mysql.com/) on Kubernetes. We'll use the
[mysql](https://registry.hub.docker.com/_/mysql/) and
[wordpress](https://registry.hub.docker.com/_/wordpress/) official
[Docker](https://www.docker.com/) images for this installation. (The
WordPress image includes an Apache server).
Demonstrated Kubernetes Concepts:
* [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to
define persistent disks (disk lifecycle not tied to the Pods).
* [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to enable Pods to
locate one another.
* [External Load Balancers](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer)
to expose Services externally.
* [Deployments](http://kubernetes.io/docs/user-guide/deployments/) to ensure Pods
stay up and running.
* [Secrets](http://kubernetes.io/docs/user-guide/secrets/) to store sensitive
passwords.
## Quickstart
Put your desired MySQL password in a file called `password.txt` with
no trailing newline. The first `tr` command will remove the newline if
your editor added one.
**Note:** if your cluster enforces **_selinux_** and you will be using [Host Path](#host-path) for storage, then please follow this [extra step](#selinux).
```shell
tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/local-volumes.yaml
kubectl create secret generic mysql-pass --from-file=password.txt
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/mysql-deployment.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/mysql-wordpress-pd/wordpress-deployment.yaml
```
## Table of Contents
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Persistent Installation of MySQL and WordPress on Kubernetes](#persistent-installation-of-mysql-and-wordpress-on-kubernetes)
- [Quickstart](#quickstart)
- [Table of Contents](#table-of-contents)
- [Cluster Requirements](#cluster-requirements)
- [Decide where you will store your data](#decide-where-you-will-store-your-data)
- [Host Path](#host-path)
- [SELinux](#selinux)
- [GCE Persistent Disk](#gce-persistent-disk)
- [Create the MySQL Password Secret](#create-the-mysql-password-secret)
- [Deploy MySQL](#deploy-mysql)
- [Deploy WordPress](#deploy-wordpress)
- [Visit your new WordPress blog](#visit-your-new-wordpress-blog)
- [Take down and restart your blog](#take-down-and-restart-your-blog)
- [Next Steps](#next-steps)
<!-- END MUNGE: GENERATED_TOC -->
## Cluster Requirements
Kubernetes runs in a variety of environments and is inherently
modular. Not all clusters are the same. These are the requirements for
this example.
* Kubernetes version 1.2 is required due to using newer features, such
as PV Claims and Deployments. Run `kubectl version` to see your
cluster version.
* [Cluster DNS](https://github.com/kubernetes/dns) will be used for service discovery.
* An [external load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer)
will be used to access WordPress.
* [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims)
are used. You must create Persistent Volumes in your cluster to be
claimed. This example demonstrates how to create two types of
volumes, but any volume is sufficient.
Consult a
[Getting Started Guide](http://kubernetes.io/docs/getting-started-guides/)
to set up a cluster and the
[kubectl](http://kubernetes.io/docs/user-guide/prereqs/) command-line client.
## Decide where you will store your data
MySQL and WordPress will each use a
[Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
to store their data. We will use a Persistent Volume Claim to claim an
available persistent volume. This example covers HostPath and
GCEPersistentDisk volumes. Choose one of the two, or see
[Types of Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes)
for more options.
### Host Path
Host paths are volumes mapped to directories on the host. **These
should be used for testing or single-node clusters only**. The data
will not be moved between nodes if the pod is recreated on a new
node. If the pod is deleted and recreated on a new node, data will be
lost.
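For orientation, [local-volumes.yaml](local-volumes.yaml) defines PersistentVolumes backed by host paths under `/tmp/data`; one such volume looks roughly like the sketch below (the name, capacity, and exact path are illustrative):
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-1
```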
##### SELinux
On systems supporting selinux it is preferred to leave it enabled/enforcing.
However, docker containers mount the host path with the "_svirt_sandbox_file_t_"
label type, which is incompatible with the default label type for /tmp ("_tmp_t_"),
resulting in a permissions error when the mysql container attempts to `chown`
_/var/lib/mysql_.
Therefore, on SELinux systems using host path, you should pre-create the host path
directory (/tmp/data/) and change its SELinux label type to "_svirt_sandbox_file_t_",
as follows:
```shell
## on every node:
mkdir -p /tmp/data
chmod a+rwt /tmp/data # match /tmp permissions
chcon -Rt svirt_sandbox_file_t /tmp/data
```
Continuing with host path, create the persistent volume objects in Kubernetes using
[local-volumes.yaml](local-volumes.yaml):
```shell
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/kubernetes/master
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/local-volumes.yaml
```
### GCE Persistent Disk
This storage option is applicable if you are running on
[Google Compute Engine](http://kubernetes.io/docs/getting-started-guides/gce/).
Create two persistent disks. You will need to create the disks in the
same [GCE zone](https://cloud.google.com/compute/docs/zones) as the
Kubernetes cluster. The default setup script will create the cluster
in the `us-central1-b` zone, as seen in the
[config-default.sh](../../cluster/gce/config-default.sh) file. Replace
`<zone>` below with the appropriate zone. The names `wordpress-1` and
`wordpress-2` must match the `pdName` fields we have specified in
[gce-volumes.yaml](gce-volumes.yaml).
```shell
gcloud compute disks create --size=20GB --zone=<zone> wordpress-1
gcloud compute disks create --size=20GB --zone=<zone> wordpress-2
```
Create the persistent volume objects in Kubernetes for those disks:
```shell
export KUBE_REPO=https://raw.githubusercontent.com/kubernetes/kubernetes/master
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/gce-volumes.yaml
```
## Create the MySQL Password Secret
Use a [Secret](http://kubernetes.io/docs/user-guide/secrets/) object
to store the MySQL password. First create a file (in the same directory
as the wordpress sample files) called
`password.txt` and save your password in it. Make sure not to have a
trailing newline at the end of the password; the `tr` command below
will remove the newline if your editor added one. Then, create the
Secret object.
```shell
tr --delete '\n' <password.txt >.strippedpassword.txt && mv .strippedpassword.txt password.txt
kubectl create secret generic mysql-pass --from-file=password.txt
```
This secret is referenced by the MySQL and WordPress pod configuration
so that those pods will have access to it. The MySQL pod will set the
database password, and the WordPress pod will use the password to
access the database.
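If you want to verify the Secret without printing the password, inspecting its metadata is enough; `kubectl describe` shows only key names and sizes, not values:

```shell
kubectl get secret mysql-pass
kubectl describe secret mysql-pass
```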
## Deploy MySQL
Now that the persistent disks and secrets are defined, the Kubernetes
pods can be launched. Start MySQL using
[mysql-deployment.yaml](mysql-deployment.yaml).
```shell
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/mysql-deployment.yaml
```
Take a look at [mysql-deployment.yaml](mysql-deployment.yaml), and
note that we've defined a volume mount for `/var/lib/mysql`, and then
created a Persistent Volume Claim that requests a 20Gi volume. This
claim is satisfied by any available volume that meets the requirements,
in our case one of the volumes we created above.
Also look at the `env` section and see that we specified the password
by referencing the secret `mysql-pass` that we created above. Secrets
can have multiple key:value pairs. Ours has only one key
`password.txt` which was the name of the file we used to create the
secret. The [MySQL image](https://hub.docker.com/_/mysql/) sets the
database password using the `MYSQL_ROOT_PASSWORD` environment
variable.
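To see exactly how the Deployment wires the Secret into the container, you can dump the live object and look at its `env` section (a read-only check; the number of context lines in the `grep` is arbitrary):

```shell
# The output should show MYSQL_ROOT_PASSWORD referencing secret "mysql-pass", key "password.txt".
kubectl get deployment wordpress-mysql -o yaml | grep -A 6 "env:"
```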
It may take a short period before the new pod reaches the `Running`
state. List all pods to see the status of this new pod.
```shell
kubectl get pods
```
```
NAME READY STATUS RESTARTS AGE
wordpress-mysql-cqcf4-9q8lo 1/1 Running 0 1m
```
Kubernetes logs the stderr and stdout for each pod. Take a look at the
logs for a pod by using `kubectl logs`. Copy the pod name from the
`get pods` command, and then:
```shell
kubectl logs <pod-name>
```
```
...
2016-02-19 16:58:05 1 [Note] InnoDB: 128 rollback segment(s) are active.
2016-02-19 16:58:05 1 [Note] InnoDB: Waiting for purge to start
2016-02-19 16:58:05 1 [Note] InnoDB: 5.6.29 started; log sequence number 1626007
2016-02-19 16:58:05 1 [Note] Server hostname (bind-address): '*'; port: 3306
2016-02-19 16:58:05 1 [Note] IPv6 is available.
2016-02-19 16:58:05 1 [Note] - '::' resolves to '::';
2016-02-19 16:58:05 1 [Note] Server socket created on IP: '::'.
2016-02-19 16:58:05 1 [Warning] 'proxies_priv' entry '@ root@wordpress-mysql-cqcf4-9q8lo' ignored in --skip-name-resolve mode.
2016-02-19 16:58:05 1 [Note] Event Scheduler: Loaded 0 events
2016-02-19 16:58:05 1 [Note] mysqld: ready for connections.
Version: '5.6.29' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
```
Also in [mysql-deployment.yaml](mysql-deployment.yaml) we created a
service to allow other pods to reach this mysql instance. The name is
`wordpress-mysql` which resolves to the pod IP.
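If you would like to see that name resolution in action, a throwaway busybox pod can look up the service; this is a minimal sketch that assumes cluster DNS is enabled (the `--rm` flag removes the pod again when the command exits):

```shell
kubectl run dns-test --image=busybox --restart=Never -it --rm -- nslookup wordpress-mysql
```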
Up to this point one Deployment, one Pod, one PVC, one Service, one Endpoint,
two PVs, and one Secret have been created, shown below:
```shell
kubectl get deployment,pod,svc,endpoints,pvc -l app=wordpress -o wide && \
kubectl get secret mysql-pass && \
kubectl get pv
```
```shell
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/wordpress-mysql 1 1 1 1 3m
NAME READY STATUS RESTARTS AGE IP NODE
po/wordpress-mysql-3040864217-40soc 1/1 Running 0 3m 172.17.0.2 127.0.0.1
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
svc/wordpress-mysql None <none> 3306/TCP 3m app=wordpress,tier=mysql
NAME ENDPOINTS AGE
ep/wordpress-mysql 172.17.0.2:3306 3m
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc/mysql-pv-claim Bound local-pv-2 20Gi RWO 3m
NAME TYPE DATA AGE
mysql-pass Opaque 1 3m
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
local-pv-1 20Gi RWO Available 3m
local-pv-2 20Gi RWO Bound default/mysql-pv-claim 3m
```
## Deploy WordPress
Next deploy WordPress using
[wordpress-deployment.yaml](wordpress-deployment.yaml):
```shell
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/wordpress-deployment.yaml
```
Here we are using many of the same features, such as a volume claim
for persistent storage and a secret for the password.
The [WordPress image](https://hub.docker.com/_/wordpress/) accepts the
database hostname through the environment variable
`WORDPRESS_DB_HOST`. We set the env value to the name of the MySQL
service we created: `wordpress-mysql`.
The WordPress service has the setting `type: LoadBalancer`. This
exposes the WordPress service behind an external IP.
Find the external IP for your WordPress service. **It may take a minute
to have an external IP assigned to the service, depending on your
cluster environment.**
```shell
kubectl get services wordpress
```
```
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress 10.0.0.5 1.2.3.4 80/TCP 19h
```
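If the `EXTERNAL-IP` column still shows `<pending>`, you can watch the service until an address is assigned instead of re-running the command:

```shell
# --watch keeps the command running and prints a new line whenever the service changes.
kubectl get services wordpress --watch
```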
## Visit your new WordPress blog
Now, we can visit the running WordPress app. Use the external IP of
the service that you obtained above.
```
http://<external-ip>
```
You should see the familiar WordPress init page.
![WordPress init page](WordPress.png "WordPress init page")
> Warning: Do not leave your WordPress installation on this page. If
> it is found by another user, they can set up a website on your
> instance and use it to serve potentially malicious content. You
> should either continue with the installation past the point at which
> you create your username and password, delete your instance, or set
> up a firewall to restrict access.
## Take down and restart your blog
Set up your WordPress blog and play around with it a bit. Then, take
down its pods and bring them back up again. Because you used
persistent disks, your blog state will be preserved.
All of the resources are labeled with `app=wordpress`, so you can
easily bring them down using a label selector:
```shell
kubectl delete deployment,service -l app=wordpress
kubectl delete secret mysql-pass
```
Later, re-creating the resources with the original commands will pick
up the original disks with all your data intact. Because we did not
delete the PV Claims, no other pods in the cluster could claim them
after we deleted our pods. Keeping the PV Claims also ensured that the
recreated Pods were attached to the same persistent disks as before.
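For example, re-creating the two Deployments with the same files used earlier will reattach the existing claims and disks (`KUBE_REPO` is the variable exported earlier in this example):

```shell
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/mysql-deployment.yaml
kubectl create -f $KUBE_REPO/examples/mysql-wordpress-pd/wordpress-deployment.yaml
```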
If you are ready to release your persistent volumes and the data on them, run:
```shell
kubectl delete pvc -l app=wordpress
```
And then delete the volume objects themselves:
```shell
kubectl delete pv local-pv-1 local-pv-2
```
or
```shell
kubectl delete pv wordpress-pv-1 wordpress-pv-2
```
## Next Steps
* [Introspection and Debugging](http://kubernetes.io/docs/user-guide/introspection-and-debugging/)
* [Jobs](http://kubernetes.io/docs/user-guide/jobs/) may be useful to run SQL queries.
* [Exec](http://kubernetes.io/docs/user-guide/getting-into-containers/)
* [Port Forwarding](http://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/mysql-wordpress-pd/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/README.md](https://github.com/kubernetes/examples/blob/master/mysql-wordpress-pd/README.md)

View File

@ -1,157 +1 @@
## New Relic Server Monitoring Agent Example
This example shows how to run a New Relic server monitoring agent as a pod in a DaemonSet on an existing Kubernetes cluster.
This example will create a DaemonSet which places the New Relic monitoring agent on every node in the cluster. It's also fairly trivial to exclude specific Kubernetes nodes from the DaemonSet to just monitor specific servers.
### Step 0: Prerequisites
This process will create privileged containers which have full access to the host system for logging. Beware of the security implications of this.
If you are using a Salt based KUBERNETES\_PROVIDER (**gce**, **vagrant**, **aws**), you should make sure the creation of privileged containers via the API is enabled. Check `cluster/saltbase/pillar/privilege.sls`.
DaemonSets must be enabled on your cluster. Instructions for enabling DaemonSet can be found [here](https://kubernetes.io/docs/api.md#enabling-the-extensions-group).
### Step 1: Configure New Relic Agent
The New Relic agent is configured via environment variables. We will configure these environment variables in a sourced bash script, encode the environment file data, and store it in a secret which will be loaded at container runtime.
The [New Relic Linux Server configuration page](https://docs.newrelic.com/docs/servers/new-relic-servers-linux/installation-configuration/configuring-servers-linux) lists all the other settings for nrsysmond.
To create an environment variable for a setting, prepend `NRSYSMOND_` to its name. For example,
```console
loglevel=debug
```
translates to
```console
NRSYSMOND_loglevel=debug
```
Edit examples/newrelic/nrconfig.env and set up the environment variables for your New Relic agent. Be sure to edit the license key field and fill in your own New Relic license key.
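As an illustration only (the exact contents of nrconfig.env are up to you), the file is a plain list of `NRSYSMOND_`-prefixed assignments, for example:

```console
NRSYSMOND_license_key=REPLACE_WITH_YOUR_LICENSE_KEY
NRSYSMOND_loglevel=info
```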
Now, let's vendor the config into a secret.
```console
$ cd examples/newrelic/
$ ./config-to-secret.sh
```
<!-- BEGIN MUNGE: EXAMPLE newrelic-config-template.yaml -->
```yaml
apiVersion: v1
kind: Secret
metadata:
name: newrelic-config
type: Opaque
data:
config: {{config_data}}
```
[Download example](newrelic-config-template.yaml?raw=true)
<!-- END MUNGE: EXAMPLE newrelic-config-template.yaml -->
The script will encode the config file and write it to `newrelic-config.yaml`.
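Kubernetes Secret `data` values are base64-encoded, so if you are curious what ends up in `{{config_data}}` you can encode the file by hand (this assumes the script uses plain base64 encoding; it is not a required step):

```console
$ base64 < nrconfig.env
```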
Finally, submit the config to the cluster:
```console
$ kubectl create -f examples/newrelic/newrelic-config.yaml
```
### Step 2: Create the DaemonSet definition.
The DaemonSet definition instructs Kubernetes to place a newrelic sysmond agent on each Kubernetes node.
<!-- BEGIN MUNGE: EXAMPLE newrelic-daemonset.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: newrelic-agent
labels:
tier: monitoring
app: newrelic-agent
version: v1
spec:
template:
metadata:
labels:
name: newrelic
spec:
# Filter to specific nodes:
# nodeSelector:
# app: newrelic
hostPID: true
hostIPC: true
hostNetwork: true
containers:
- resources:
requests:
cpu: 0.15
securityContext:
privileged: true
env:
- name: NRSYSMOND_logfile
value: "/var/log/nrsysmond.log"
image: newrelic/nrsysmond
name: newrelic
command: [ "bash", "-c", "source /etc/kube-newrelic/config && /usr/sbin/nrsysmond -E -F" ]
volumeMounts:
- name: newrelic-config
mountPath: /etc/kube-newrelic
readOnly: true
- name: dev
mountPath: /dev
- name: run
mountPath: /var/run/docker.sock
- name: sys
mountPath: /sys
- name: log
mountPath: /var/log
volumes:
- name: newrelic-config
secret:
secretName: newrelic-config
- name: dev
hostPath:
path: /dev
- name: run
hostPath:
path: /var/run/docker.sock
- name: sys
hostPath:
path: /sys
- name: log
hostPath:
path: /var/log
```
[Download example](newrelic-daemonset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE newrelic-daemonset.yaml -->
The daemonset instructs Kubernetes to spawn pods on each node, mapping /dev/, /run/, /sys/, and /var/log to the container. It also maps the secrets we set up earlier to /etc/kube-newrelic/config, and sources them in the startup script, configuring the agent properly.
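To submit the DaemonSet and confirm that an agent pod is scheduled on every node, something like the following should work (the resource name and the `name=newrelic` label come from the definition above):

```console
$ kubectl create -f examples/newrelic/newrelic-daemonset.yaml
$ kubectl get daemonset newrelic-agent
$ kubectl get pods -l name=newrelic -o wide
```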
#### DaemonSet customization
- To include a custom hostname prefix (or other per-container environment variables that can be generated at run-time), you can modify the DaemonSet `command` value:
```
command: [ "bash", "-c", "source /etc/kube-newrelic/config && export NRSYSMOND_hostname=mycluster-$(hostname) && /usr/sbin/nrsysmond -E -F" ]
```
When the New Relic agent starts, `NRSYSMOND_hostname` is set using the output of `hostname` with `mycluster` prepended.
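One way to confirm an agent is healthy is to tail its log file inside one of the DaemonSet pods; the path comes from the `NRSYSMOND_logfile` setting above, and the pod name is whatever `kubectl get pods` reports:

```console
$ kubectl exec <newrelic-pod-name> -- tail -n 20 /var/log/nrsysmond.log
```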
### Known issues
It's a bit kludgy to define the environment variables like we do here in these config files. There is [another issue](https://github.com/kubernetes/kubernetes/issues/4710) to discuss adding support for mapping secrets to environment variables in Kubernetes.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/newrelic/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/newrelic/README.md](https://github.com/kubernetes/examples/blob/master/staging/newrelic/README.md)

View File

@ -1,282 +1 @@
## Node.js and MongoDB on Kubernetes
The following document describes the deployment of a basic Node.js and MongoDB web stack on Kubernetes. Currently this example does not use replica sets for MongoDB.
For a more in-depth explanation of this example, please [read this post](https://medium.com/google-cloud-platform-developer-advocates/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d).
### Prerequisites
This example assumes that you have a basic understanding of Kubernetes concepts (Pods, Services, Replication Controllers), a Kubernetes cluster up and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
Note: This example was tested on [Google Container Engine](https://cloud.google.com/container-engine/docs/). Some optional commands require the [Google Cloud SDK](https://cloud.google.com/sdk/).
### Creating the MongoDB Service
The first thing to do is create the MongoDB Service. This service is used by the other Pods in the cluster to find and connect to the MongoDB instance.
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
name: mongo
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
selector:
name: mongo
```
[Download file](mongo-service.yaml)
This service looks for all pods with the "mongo" tag, and creates a Service on port 27017 that targets port 27017 on the MongoDB pods. Port 27017 is the standard MongoDB port.
To start the service, run:
```sh
kubectl create -f examples/nodesjs-mongodb/mongo-service.yaml
```
### Creating the MongoDB Controller
Next, create the MongoDB instance that runs the Database. Databases also need persistent storage, which will be different for each platform.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: mongo
name: mongo-controller
spec:
replicas: 1
template:
metadata:
labels:
name: mongo
spec:
containers:
- image: mongo
name: mongo
ports:
- name: mongo
containerPort: 27017
hostPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage
gcePersistentDisk:
pdName: mongo-disk
fsType: ext4
```
[Download file](mongo-controller.yaml)
Looking at this file from the bottom up:
First, it creates a volume called "mongo-persistent-storage."
In the above example, it is using a "gcePersistentDisk" to back the storage. This is only applicable if you are running your Kubernetes cluster in Google Cloud Platform.
If you don't already have a [Google Persistent Disk](https://cloud.google.com/compute/docs/disks) created in the same zone as your cluster, create a new disk in the same Google Compute Engine / Container Engine zone as your cluster with this command:
```sh
gcloud compute disks create --size=200GB --zone=$ZONE mongo-disk
```
If you are using AWS, replace the "volumes" section with this (untested):
```yaml
volumes:
- name: mongo-persistent-storage
awsElasticBlockStore:
volumeID: aws://{region}/{volume ID}
fsType: ext4
```
If you don't have an EBS volume in the same region as your cluster, create a new EBS volume in the same region with this command (untested):
```sh
ec2-create-volume --size 200 --region $REGION --availability-zone $ZONE
```
This command will return a volume ID to use.
For other storage options (iSCSI, NFS, OpenStack), please follow the documentation.
Now that the volume is created and usable by Kubernetes, the next step is to create the Pod.
Looking at the container section: It uses the official MongoDB container, names itself "mongo", opens up port 27017, and mounts the disk to "/data/db" (where the mongo container expects the data to be).
Now looking at the rest of the file, it is creating a Replication Controller with one replica, called mongo-controller. It is important to use a Replication Controller and not just a Pod, as a Replication Controller will restart the instance in case it crashes.
Create this controller with this command:
```sh
kubectl create -f examples/nodesjs-mongodb/mongo-controller.yaml
```
At this point, MongoDB is up and running.
Note: There is no password protection or auth running on the database by default. Please keep this in mind!
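To confirm the database is reachable before moving on, you can list the pod by its `name=mongo` label and ping it with the `mongo` shell bundled in the image (a quick optional check):

```sh
kubectl get pods -l name=mongo
# Replace the pod name below with the one reported by the previous command.
kubectl exec <mongo-pod-name> -- mongo --eval 'db.runCommand({ ping: 1 })'
```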
### Creating the Node.js Service
The next step is to create the Node.js service. This service is what will be the endpoint for the web site, and will load balance requests to the Node.js instances.
```yaml
apiVersion: v1
kind: Service
metadata:
name: web
labels:
name: web
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
protocol: TCP
selector:
name: web
```
[Download file](web-service.yaml)
This service is called "web," and it uses a [LoadBalancer](https://kubernetes.io/docs/user-guide/services.md#type-loadbalancer) to distribute traffic on port 80 to port 3000 running on Pods with the "web" tag. Port 80 is the standard HTTP port, and port 3000 is the standard Node.js port.
On Google Container Engine, a [network load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) and [firewall rule](https://cloud.google.com/compute/docs/networking#addingafirewall) to allow traffic are automatically created.
To start the service, run:
```sh
kubectl create -f examples/nodesjs-mongodb/web-service.yaml
```
If you are running on a platform that does not support LoadBalancer (e.g. bare metal), you will need to use a [NodePort](https://kubernetes.io/docs/user-guide/services.md#type-nodeport) service with your own load balancer.
You may also need to open appropriate firewall ports to allow traffic.
### Creating the Node.js Controller
The final step is deploying the Node.js container that will run the application code. This container can easily be replaced by any other web serving frontend, such as Rails, LAMP, Java, Go, etc.
The most important thing to keep in mind is how to access the MongoDB service.
If you were running MongoDB and Node.js on the same server, you would access MongoDB like so:
```javascript
MongoClient.connect('mongodb://localhost:27017/database-name', function(err, db) { console.log(db); });
```
With this Kubernetes setup, that line of code would become:
```javascript
MongoClient.connect('mongodb://mongo:27017/database-name', function(err, db) { console.log(db); });
```
The MongoDB Service previously created tells Kubernetes to configure the cluster so 'mongo' points to the MongoDB instance created earlier.
#### Custom Container
You should have your own container that runs your Node.js code hosted in a container registry.
See [this example](https://medium.com/google-cloud-platform-developer-advocates/running-a-mean-stack-on-google-cloud-platform-with-kubernetes-149ca81c2b5d#8edc) to see how to make your own Node.js container.
Once you have created your container, create the web controller.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: web
name: web-controller
spec:
replicas: 2
selector:
name: web
template:
metadata:
labels:
name: web
spec:
containers:
- image: <YOUR-CONTAINER>
name: web
ports:
- containerPort: 3000
name: http-server
```
[Download file](web-controller.yaml)
Replace `<YOUR-CONTAINER>` with the URL of your container image.
This Controller will create two replicas of the Node.js container, and each Node.js container will have the tag "web" and expose port 3000. The Service LoadBalancer will forward port 80 traffic to port 3000 automatically, along with load balancing traffic between the two instances.
To start the Controller, run:
```sh
kubectl create -f examples/nodesjs-mongodb/web-controller.yaml
```
#### Demo Container
If you DON'T want to create a custom container, you can use the following YAML file:
Note: You cannot run both Controllers at the same time, as they both try to control the same Pods.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: web
name: web-controller
spec:
replicas: 2
selector:
name: web
template:
metadata:
labels:
name: web
spec:
containers:
- image: node:0.10.40
command: ['/bin/sh', '-c']
args: ['cd /home && git clone https://github.com/ijason/NodeJS-Sample-App.git demo && cd demo/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js']
name: web
ports:
- containerPort: 3000
name: http-server
```
[Download file](web-controller-demo.yaml)
This will use the default Node.js container, and will pull and execute code at run time. This is not recommended; typically, your code should be part of the container.
To start the Controller, run:
```sh
kubectl create -f examples/nodesjs-mongodb/web-controller-demo.yaml
```
### Testing it out
Now that all the components are running, visit the IP address of the load balancer to access the website.
With Google Cloud Platform, get the IP address of all load balancers with the following command:
```sh
gcloud compute forwarding-rules list
```
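Alternatively, the external IP is also reported on the Service itself, which works on any provider that supports LoadBalancer services:

```sh
kubectl get service web
```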
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/nodesjs-mongodb/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/nodesjs-mongodb/README.md](https://github.com/kubernetes/examples/blob/master/staging/nodesjs-mongodb/README.md)

View File

@ -1,61 +1 @@
# Microsoft Operations Management Suite (OMS) Container Monitoring Example
The [Microsoft Operations Management Suite (OMS)](https://www.microsoft.com/en-us/cloud-platform/operations-management-suite) is a software-as-a-service offering from Microsoft that allows Enterprise IT to manage any hybrid cloud.
This example will create a DaemonSet to deploy the OMS Linux agents running as containers to every node in the Kubernetes cluster.
### Supported Linux Operating Systems & Docker
- Docker 1.10 through 1.12.1
- An x64 version of the following:
- Ubuntu 14.04 LTS, 16.04 LTS
- CoreOS (stable)
- Amazon Linux 2016.09.0
- openSUSE 13.2
- CentOS 7
- SLES 12
- RHEL 7.2
## Step 1
If you already have a Microsoft Azure account, you can quickly create a free OMS account by following the steps [here](https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-get-started#sign-up-quickly-using-microsoft-azure).
If you don't have a Microsoft Azure account, you can create a free OMS account by following the guide [here](https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-get-started#sign-up-in-3-steps-using-oms).
## Step 2
You will need to edit the [omsagent-daemonset.yaml](./omsagent-daemonset.yaml) file to add your Workspace ID and Primary Key of your OMS account.
```
- env:
- name: WSID
value: <your workspace ID>
- name: KEY
value: <your key>
```
The Workspace ID and Primary Key can be found inside the OMS Portal under Settings in the connected sources tab (see below screenshot).
![connected-resources](./images/connected-resources.png)
## Step 3
Run the following command to deploy the OMS agent to your Kubernetes nodes:
```
kubectl create -f omsagent-daemonset.yaml
```
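To check that the agent pods were scheduled onto your nodes, list the DaemonSets and pods; the exact resource and pod names depend on omsagent-daemonset.yaml, so adjust the grep pattern if needed:

```
kubectl get daemonsets
kubectl get pods -o wide | grep oms
```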
## Step 4
Add the Container solution to your OMS workspace:
1. Log in to the OMS portal.
2. Click the Solutions Gallery tile.
3. On the OMS Solutions Gallery page, click on Containers.
4. On the page for the Containers solution, detailed information about the solution is displayed. Click Add.
A new tile for the Container solution that you added appears on the Overview page in OMS. It may take about 5 minutes for your data to appear in OMS.
![oms-portal](./images/oms-portal.png)
![coms-container-solution](./images/oms-container-solution.png)
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/oms/README.md](https://github.com/kubernetes/examples/blob/master/staging/oms/README.md)

View File

@ -1,211 +1 @@
## OpenShift Origin example
This example shows how to run OpenShift Origin as a pod on an existing Kubernetes cluster.
OpenShift Origin runs with a rich set of role based policy rules out of the box that requires authentication from users via certificates. When run as a pod on an existing Kubernetes cluster, it proxies access to the underlying Kubernetes services to provide security.
As a result, this example is a complex end-to-end configuration that shows how to configure certificates for a service that runs on Kubernetes, and requires a number of configuration files to be injected dynamically via a secret volume to the pod.
This example will create a pod running the OpenShift Origin master. In addition, it will run a three-pod etcd setup to hold OpenShift content. OpenShift embeds Kubernetes in the stand-alone setup, so the configuration for OpenShift when it is running against an external Kubernetes cluster is different: content specific to Kubernetes will be stored in the Kubernetes etcd repository (i.e. pods, services, replication controllers, etc.), but OpenShift specific content (builds, images, users, policies, etc.) are stored in its etcd setup.
### Step 0: Prerequisites
This example assumes that you have an understanding of Kubernetes and that you have forked the repository.
OpenShift Origin creates privileged containers when running Docker builds during the source-to-image process.
If you are using a Salt based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you should enable the
ability to create privileged containers via the API.
```sh
$ cd kubernetes
$ vi cluster/saltbase/pillar/privilege.sls
# If true, allow privileged containers to be created by API
allow_privileged: true
```
Now spin up a cluster using your preferred KUBERNETES_PROVIDER. Remember that `kube-up.sh` may start other pods on your nodes, so ensure that you have enough resources to run the five pods for this example.
```sh
$ export KUBERNETES_PROVIDER=${YOUR_PROVIDER}
$ cluster/kube-up.sh
```
Next, let's set up some variables, and create a local folder that will hold generated configuration files.
```sh
$ export OPENSHIFT_EXAMPLE=$(pwd)/examples/openshift-origin
$ export OPENSHIFT_CONFIG=${OPENSHIFT_EXAMPLE}/config
$ mkdir ${OPENSHIFT_CONFIG}
$ export ETCD_INITIAL_CLUSTER_TOKEN=$(python -c "import string; import random; print(''.join(random.SystemRandom().choice(string.ascii_lowercase + string.digits) for _ in range(40)))")
$ export ETCD_DISCOVERY_TOKEN=$(python -c "import string; import random; print(\"etcd-cluster-\" + ''.join(random.SystemRandom().choice(string.ascii_lowercase + string.digits) for _ in range(5)))")
$ sed -i.bak -e "s/INSERT_ETCD_INITIAL_CLUSTER_TOKEN/\"${ETCD_INITIAL_CLUSTER_TOKEN}\"/g" -e "s/INSERT_ETCD_DISCOVERY_TOKEN/\"${ETCD_DISCOVERY_TOKEN}\"/g" ${OPENSHIFT_EXAMPLE}/etcd-controller.yaml
```
This will have created an `etcd-controller.yaml.bak` file in your directory, which you should remember to restore when doing cleanup (or use the given `cleanup.sh`). Finally, let's start up the external etcd pods and the discovery service necessary for their initialization:
```sh
$ kubectl create -f examples/openshift-origin/openshift-origin-namespace.yaml
$ kubectl create -f examples/openshift-origin/etcd-discovery-controller.yaml --namespace="openshift-origin"
$ kubectl create -f examples/openshift-origin/etcd-discovery-service.yaml --namespace="openshift-origin"
$ kubectl create -f examples/openshift-origin/etcd-controller.yaml --namespace="openshift-origin"
$ kubectl create -f examples/openshift-origin/etcd-service.yaml --namespace="openshift-origin"
```
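Before moving on, it is worth checking that the discovery service and the three etcd pods come up in the `openshift-origin` namespace (pod names will vary; all of them should eventually reach `Running`):

```sh
$ kubectl get pods,services --namespace="openshift-origin"
```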
### Step 1: Export your Kubernetes configuration file for use by OpenShift pod
OpenShift Origin uses a configuration file to know how to access your Kubernetes cluster with administrative authority.
```
$ cluster/kubectl.sh config view --output=yaml --flatten=true --minify=true > ${OPENSHIFT_CONFIG}/kubeconfig
```
The output from this command will contain a single file that has all the required information needed to connect to your Kubernetes cluster that you previously provisioned. This file should be considered sensitive, so do not share this file with untrusted parties.
We will later use this file to tell OpenShift how to bootstrap its own configuration.
### Step 2: Create an External Load Balancer to Route Traffic to OpenShift
An external load balancer is needed to route traffic to our OpenShift master service that will run as a pod on your Kubernetes cluster.
```sh
$ cluster/kubectl.sh create -f $OPENSHIFT_EXAMPLE/openshift-service.yaml --namespace="openshift-origin"
```
### Step 3: Generate configuration file for your OpenShift master pod
The OpenShift master requires a configuration file as input to know how to bootstrap the system.
In order to build this configuration file, we need to know the public IP address of our external load balancer in order to build default certificates.
Grab the public IP address of the service we previously created: the two-line script below will attempt to do so, but make sure to check that the IP was actually set; if it was not, try again after a couple of seconds.
```sh
$ export PUBLIC_OPENSHIFT_IP=$(kubectl get services openshift --namespace="openshift-origin" --template="{{ index .status.loadBalancer.ingress 0 \"ip\" }}")
$ echo ${PUBLIC_OPENSHIFT_IP}
```
You can automate the process with the following script, as it might take more than a minute for the IP to be set and discoverable.
```shell
$ while [ ${#PUBLIC_OPENSHIFT_IP} -lt 1 ]; do
echo -n .
sleep 1
{
export PUBLIC_OPENSHIFT_IP=$(kubectl get services openshift --namespace="openshift-origin" --template="{{ index .status.loadBalancer.ingress 0 \"ip\" }}")
} 2> ${OPENSHIFT_EXAMPLE}/openshift-startup.log
if [[ ! ${PUBLIC_OPENSHIFT_IP} =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]]; then
export PUBLIC_OPENSHIFT_IP=""
fi
done
$ echo
$ echo "Public OpenShift IP set to: ${PUBLIC_OPENSHIFT_IP}"
```
Ensure you have a valid `PUBLIC_OPENSHIFT_IP` address before continuing with the example.
We now need to run a command on your host to generate a proper OpenShift configuration. To do this, we will volume mount the configuration directory that holds your Kubernetes kubeconfig file from the prior step.
```sh
$ docker run --privileged -v ${OPENSHIFT_CONFIG}:/config openshift/origin start master --write-config=/config --kubeconfig=/config/kubeconfig --master=https://localhost:8443 --public-master=https://${PUBLIC_OPENSHIFT_IP}:8443 --etcd=http://etcd:2379
```
You should now see a number of certificates minted in your configuration directory, as well as a master-config.yaml file that tells the OpenShift master how to execute. We need to make some adjustments to this configuration directory in order to allow the OpenShift cluster to use Kubernetes serviceaccounts. First, write the Kubernetes service account key to the `${OPENSHIFT_CONFIG}` directory. The following script assumes you are using GCE. If you are not, use `scp` or `ssh` to get the key from the master node running Kubernetes. It is usually located at `/srv/kubernetes/server.key`.
```shell
$ export ZONE=$(gcloud compute instances list | grep "${KUBE_GCE_INSTANCE_PREFIX}\-master" | awk '{print $2}' | head -1)
$ echo "sudo cat /srv/kubernetes/server.key; exit;" | gcloud compute ssh ${KUBE_GCE_INSTANCE_PREFIX}-master --zone ${ZONE} | grep -Ex "(^\-.*\-$|^\S+$)" > ${OPENSHIFT_CONFIG}/serviceaccounts.private.key
```
Although we are retrieving the private key from the Kubernetes master, OpenShift will take care of the conversion for us so that serviceaccounts are created with the public key. Edit your `master-config.yaml` file in the `${OPENSHIFT_CONFIG}` directory to add `serviceaccounts.private.key` to the list of `publicKeyFiles`:
```shell
$ sed -i -e 's/publicKeyFiles:.*$/publicKeyFiles:/g' -e '/publicKeyFiles:/a \ \ - serviceaccounts.private.key' ${OPENSHIFT_CONFIG}/master-config.yaml
```
Now, the configuration files are complete. In the next step, we will bundle the resulting configuration into a Kubernetes Secret that our OpenShift master pod will consume.
### Step 4: Bundle the configuration into a Secret
We now need to bundle the contents of our configuration into a secret for use by our OpenShift master pod.
OpenShift includes an experimental command to make this easier.
First, update the ownership for the files previously generated:
```
$ sudo -E chown -R ${USER} ${OPENSHIFT_CONFIG}
```
Then run the following command to collapse them into a Kubernetes secret.
```sh
$ docker run -it --privileged -e="KUBECONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin cli secrets new openshift-config /config -o json &> examples/openshift-origin/secret.json
```
Now, let's create the secret in your Kubernetes cluster.
```sh
$ cluster/kubectl.sh create -f examples/openshift-origin/secret.json --namespace="openshift-origin"
```
**NOTE: This secret contains sensitive credentials and should not be shared with untrusted parties.**
### Step 5: Deploy OpenShift Master
We are now ready to deploy OpenShift.
We will deploy a pod that runs the OpenShift master. The OpenShift master will delegate to the underlying Kubernetes
system to manage Kubernetes-specific resources, while OpenShift-specific content is held in the external etcd pods started earlier (the `--etcd` flag passed when generating the master configuration points at them).
```sh
$ cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/openshift-controller.yaml --namespace="openshift-origin"
```
You should now get a pod provisioned whose name begins with openshift.
```sh
$ cluster/kubectl.sh get pods | grep openshift
$ cluster/kubectl.sh log openshift-t7147 origin
Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/amd64/kubectl logs openshift-t7t47 origin
2015-04-30T15:26:00.454146869Z I0430 15:26:00.454005 1 start_master.go:296] Starting an OpenShift master, reachable at 0.0.0.0:8443 (etcd: [https://10.0.27.2:4001])
2015-04-30T15:26:00.454231211Z I0430 15:26:00.454223 1 start_master.go:297] OpenShift master public address is https://104.197.73.241:8443
```
Depending upon your cloud provider, you may need to open up an external firewall rule for tcp:8443. For GCE, you can run the following:
```sh
$ gcloud compute --project "your-project" firewall-rules create "origin" --allow tcp:8443 --network "your-network" --source-ranges "0.0.0.0/0"
```
Consult your cloud provider's documentation for more information.
Open a browser and visit the OpenShift master public address reported in your log.
You can use the CLI commands by running the following:
```sh
$ docker run --privileged --entrypoint="/usr/bin/bash" -it -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin
$ osc config use-context public-default
$ osc --help
```
## Cleanup
Clean up your cluster from resources created with this example:
```sh
$ ${OPENSHIFT_EXAMPLE}/cleanup.sh
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/openshift-origin/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/openshift-origin/README.md](https://github.com/kubernetes/examples/blob/master/staging/openshift-origin/README.md)

View File

@ -1,522 +1 @@
## Persistent Volume Provisioning
This example shows how to use dynamic persistent volume provisioning.
### Prerequisites
This example assumes that you have an understanding of Kubernetes administration and can modify the
scripts that launch kube-controller-manager.
### Admin Configuration
The admin must define `StorageClass` objects that describe named "classes" of storage offered in a cluster. Different classes might map to arbitrary levels or policies determined by the admin. When configuring a `StorageClass` object for persistent volume provisioning, the admin will need to describe the type of provisioner to use and the parameters that will be used by the provisioner when it provisions a `PersistentVolume` belonging to the class.
The name of a StorageClass object is significant: it is how users request a particular class, by specifying the name in their `PersistentVolumeClaim`. The `provisioner` field must be specified as it determines what volume plugin is used for provisioning PVs. The `parameters` field contains the parameters that describe volumes belonging to the storage class. Different parameters may be accepted depending on the `provisioner`. For example, the value `io1` for the parameter `type`, and the parameter `iopsPerGB`, are specific to EBS. When a parameter is omitted, some default is used.
See [Kubernetes StorageClass documentation](https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses) for complete reference of all supported parameters.
#### AWS
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1
zones: us-east-1d, us-east-1c
iopsPerGB: "10"
```
* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details. Default: `gp2`.
* `zone`: AWS zone. If neither zone nor zones is specified, volumes are generally round-robin-ed across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
* `zones`: a comma separated list of AWS zone(s). If neither zone nor zones is specified, volumes are generally round-robin-ed across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS volume plugin multiplies this with size of requested volume to compute IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see AWS docs).
* `encrypted`: denotes whether the EBS volume should be encrypted or not. Valid values are `true` or `false`.
* `kmsKeyId`: optional. The full Amazon Resource Name of the key to use when encrypting the volume. If none is supplied but `encrypted` is true, a key is generated by AWS. See AWS docs for valid ARN value.
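Whichever provider you use, the class is registered and inspected the same way; for example, saving the AWS definition above to a file and creating it (the file name here is only an illustration):

```
$ kubectl create -f aws-slow-storage-class.yaml
$ kubectl get storageclass
```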
#### GCE
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard
zones: us-central1-a, us-central1-b
```
* `type`: `pd-standard` or `pd-ssd`. Default: `pd-ssd`
* `zone`: GCE zone. If neither zone nor zones is specified, volumes are generally round-robin-ed across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
* `zones`: a comma separated list of GCE zone(s). If neither zone nor zones is specified, volumes are generally round-robin-ed across all active zones where the Kubernetes cluster has a node. Note: the zone and zones parameters must not be used at the same time.
#### vSphere
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: eagerzeroedthick
fstype: ext3
```
* `diskformat`: `thin`, `zeroedthick` and `eagerzeroedthick`. See vSphere docs for details. Default: `"thin"`.
* `fstype`: a filesystem type supported by Kubernetes. Default: `"ext4"`.
#### Portworx Volume
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "1"
snap_interval: "70"
io_priority: "high"
```
* `fs`: filesystem to be laid out: [none/xfs/ext4] (default: `ext4`)
* `block_size`: block size in Kbytes (default: `32`)
* `repl`: replication factor [1..3] (default: `1`)
* `io_priority`: IO Priority: [high/medium/low] (default: `low`)
* `snap_interval`: snapshot interval in minutes, 0 disables snaps (default: `0`)
* `aggregation_level`: specifies the number of chunks the volume would be distributed into, 0 indicates a non-aggregated volume (default: `0`)
* `ephemeral`: ephemeral storage [true/false] (default `false`)
For a complete example, refer to the [Portworx Volume docs](../volumes/portworx/README.md).
#### StorageOS
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: sc-fast
provisioner: kubernetes.io/storageos
parameters:
pool: default
description: Kubernetes volume
fsType: ext4
adminSecretNamespace: default
adminSecretName: storageos-secret
```
* `pool`: The name of the StorageOS distributed capacity pool to provision the volume from. Uses the `default` pool which is normally present if not specified.
* `description`: The description to assign to volumes that were created dynamically. All volume descriptions will be the same for the storage class, but different storage classes can be used to allow descriptions for different use cases. Defaults to `Kubernetes volume`.
* `fsType`: The default filesystem type to request. Note that user-defined rules within StorageOS may override this value. Defaults to `ext4`.
* `adminSecretNamespace`: The namespace where the API configuration secret is located. Required if adminSecretName set.
* `adminSecretName`: The name of the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.
For a complete example, refer to the [StorageOS example](../../volumes/storageos/README.md).
#### GLUSTERFS
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://127.0.0.1:8081"
clusterid: "630372ccdc720a92c681fb928f27b53f"
restuser: "admin"
secretNamespace: "default"
secretName: "heketi-secret"
gidMin: "40000"
gidMax: "50000"
volumetype: "replicate:3"
```
Example storageclass can be found in [glusterfs-storageclass.yaml](glusterfs/glusterfs-storageclass.yaml).
* `resturl` : Gluster REST service/Heketi service URL which provisions gluster volumes on demand. The general format should be `IPaddress:Port` and this is a mandatory parameter for the GlusterFS dynamic provisioner. If the Heketi service is exposed as a routable service in the OpenShift/Kubernetes setup, this can have a format similar to
`http://heketi-storage-project.cloudapps.mystorage.com`, where the FQDN is a resolvable Heketi service URL.
* `restauthenabled` : Gluster REST service authentication boolean that enables authentication to the REST server. If this value is 'true', `restuser` and `restuserkey` or `secretNamespace` + `secretName` have to be filled. This option is deprecated, authentication is enabled when any of `restuser`, `restuserkey`, `secretName` or `secretNamespace` is specified.
* `restuser` : Gluster REST service/Heketi user who has access to create volumes in the Gluster Trusted Pool.
* `restuserkey` : Gluster REST service/Heketi user's password which will be used for authentication to the REST server. This parameter is deprecated in favor of `secretNamespace` + `secretName`.
* `secretNamespace` + `secretName` : Identification of Secret instance that contains user password to use when talking to Gluster REST service. These parameters are optional, empty password will be used when both `secretNamespace` and `secretName` are omitted. The provided secret must have type "kubernetes.io/glusterfs".
When both `restuserkey` and `secretNamespace` + `secretName` are specified, the secret will be used.
* `clusterid`: `630372ccdc720a92c681fb928f27b53f` is the ID of the cluster which will be used by Heketi when provisioning the volume. It can also be a list of cluster IDs, for example:
"8452344e2becec931ece4e33c4674e4e,42982310de6c63381718ccfa6d8cf397". This is an optional parameter.
Example of a secret can be found in [glusterfs-secret.yaml](glusterfs/glusterfs-secret.yaml).
* `gidMin` + `gidMax` : The minimum and maximum value of the GID range for the storage class. A unique value (GID) in this range (gidMin-gidMax) will be used for dynamically provisioned volumes. These are optional values. If not specified, the volume will be provisioned with a GID between 2000 and 2147483647, which are the defaults for gidMin and gidMax respectively.
* `volumetype` : The volume type and its parameters can be configured with this optional value. If the volume type is not mentioned, it's up to the provisioner to decide the volume type.
For example:
'Replica volume':
`volumetype: replicate:3` where '3' is replica count.
'Disperse/EC volume':
`volumetype: disperse:4:2` where '4' is data and '2' is the redundancy count.
'Distribute volume':
`volumetype: none`
For available volume types and its administration options refer: ([Administration Guide](https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/part-Overview.html))
Reference : ([How to configure Gluster on Kubernetes](https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md))
Reference : ([How to configure Heketi](https://github.com/heketi/heketi/wiki/Setting-up-the-topology))
When persistent volumes are dynamically provisioned, the Gluster plugin automatically creates an endpoint and a headless service named `gluster-dynamic-<claimname>`. This dynamic endpoint and service will be deleted automatically when the persistent volume claim is deleted.
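After a claim has been provisioned you can see these generated objects; for example (a read-only check, assuming at least one GlusterFS-backed claim exists):

```
$ kubectl get endpoints,svc | grep gluster-dynamic
```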
#### OpenStack Cinder
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast
availability: nova
```
* `type`: [VolumeType](http://docs.openstack.org/admin-guide/dashboard-manage-volumes.html) created in Cinder. Default is empty.
* `availability`: Availability Zone. Default is empty.
#### Ceph RBD
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.16.153.105:6789
adminId: kube
adminSecretName: ceph-secret
adminSecretNamespace: kube-system
pool: kube
userId: kube
userSecretName: ceph-secret-user
imageFormat: "1"
```
* `monitors`: Ceph monitors, comma delimited. It is required.
* `adminId`: Ceph client ID that is capable of creating images in the pool. Default is "admin".
* `adminSecretName`: Secret Name for `adminId`. It is required. The provided secret must have type "kubernetes.io/rbd".
* `adminSecretNamespace`: The namespace for `adminSecretName`. Default is "default".
* `pool`: Ceph RBD pool. Default is "rbd".
* `userId`: Ceph client ID that is used to map the RBD image. Default is the same as `adminId`.
* `userSecretName`: The name of Ceph Secret for `userId` to map RBD image. It must exist in the same namespace as PVCs. It is required.
* `imageFormat`: Ceph RBD image format, "1" or "2". Default is "1".
* `imageFeatures`: Ceph RBD image format 2 features, comma delimited. This is optional, and is only used if you set `imageFormat` to "2". Currently only the `layering` feature is supported. Default is "", and no features are turned on.
NOTE: We cannot turn on `exclusive-lock` feature for now (and `object-map`, `fast-diff`, `journaling` which require `exclusive-lock`), because exclusive lock and advisory lock cannot work together. (See [#45805](https://issue.k8s.io/45805))
#### Quobyte
<!-- BEGIN MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/quobyte
parameters:
quobyteAPIServer: "http://138.68.74.142:7860"
registry: "138.68.74.142:7861"
adminSecretName: "quobyte-admin-secret"
adminSecretNamespace: "kube-system"
user: "root"
group: "root"
quobyteConfig: "BASE"
quobyteTenant: "DEFAULT"
```
[Download example](quobyte/quobyte-storage-class.yaml?raw=true)
<!-- END MUNGE: EXAMPLE quobyte/quobyte-storage-class.yaml -->
* **quobyteAPIServer** API Server of Quobyte in the format http(s)://api-server:7860
* **registry** Quobyte registry to use to mount the volume. You can specify the registry as a <host>:<port> pair or, if you want to specify multiple registries, put a comma between them, e.g. <host1>:<port>,<host2>:<port>,<host3>:<port>. The host can be an IP address or, if you have a working DNS, you can also provide the DNS names.
* **adminSecretName** secret that holds information about the Quobyte user and the password to authenticate against the API server. The provided secret must have type "kubernetes.io/quobyte".
* **adminSecretNamespace** The namespace for **adminSecretName**. Default is `default`.
* **user** maps all access to this user. Default is `root`.
* **group** maps all access to this group. Default is `nfsnobody`.
* **quobyteConfig** use the specified configuration to create the volume. You can create a new configuration or modify an existing one with the Web console or the quobyte CLI. Default is `BASE`
* **quobyteTenant** use the specified tenant ID to create/delete the volume. This Quobyte tenant has to be already present in Quobyte. For Quobyte < 1.4 use an empty string `""` as `DEFAULT` tenant. Default is `DEFAULT`
* **createQuota** if set, all volumes created by this storage class will get a Quota for the specified size. The quota is set for the logical disk size (which can differ from the physical size, e.g. if replication is used). Default is `False`.
First create Quobyte admin's Secret in the system namespace. Here the Secret is created in `kube-system`:
```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-admin-secret.yaml --namespace=kube-system
```
Then create the Quobyte storage class:
```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/quobyte-storage-class.yaml
```
Now create a PVC
```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
```
Check the created PVC:
```
$ kubectl describe pvc
Name: claim1
Namespace: default
Status: Bound
Volume: pvc-bdb82652-694a-11e6-b811-080027242396
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.
$ kubectl describe pv
Name: pvc-bdb82652-694a-11e6-b811-080027242396
Labels: <none>
Status: Bound
Claim: default/claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 3Gi
Message:
Source:
Type: Quobyte (a Quobyte mount on the host that shares a pod's lifetime)
Registry: 138.68.79.14:7861
Volume: kubernetes-dynamic-pvc-bdb97c58-694a-11e6-91b6-080027242396
ReadOnly: false
No events.
```
Create a Pod to use the PVC:
```
$ kubectl create -f examples/persistent-volume-provisioning/quobyte/example-pod.yaml
```
#### <a name="azure-disk">Azure Disk</a>
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/azure-disk
parameters:
skuName: Standard_LRS
location: eastus
storageAccount: azure_storage_account_name
```
* `skuName`: Azure storage account Sku tier. Default is empty.
* `location`: Azure storage account location. Default is empty.
* `storageAccount`: Azure storage account name. If storage account is not provided, all storage accounts associated with the resource group are searched to find one that matches `skuName` and `location`. If storage account is provided, it must reside in the same resource group as the cluster, and `skuName` and `location` are ignored.
#### Azure File
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/azure-file
parameters:
skuName: Standard_LRS
location: eastus
storageAccount: azure_storage_account_name
```
The parameters are the same as those used by [Azure Disk](#azure-disk)
### User provisioning requests
Users request dynamically provisioned storage by including a storage class in their `PersistentVolumeClaim` using the `spec.storageClassName` attribute.
It is required that this value matches the name of a `StorageClass` configured by the administrator.
```
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "claim1"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "3Gi"
}
},
"storageClassName": "slow"
}
}
```
### Sample output
#### GCE
This example uses GCE but any provisioner would follow the same flow.
First we note there are no Persistent Volumes in the cluster. After creating a storage class and a claim including that storage class, we see a new PV is created
and automatically bound to the claim requesting storage.
```
$ kubectl get pv
$ kubectl create -f examples/persistent-volume-provisioning/gce-pd.yaml
storageclass "slow" created
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json
persistentvolumeclaim "claim1" created
$ kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
pvc-bb6d2f0c-534c-11e6-9348-42010af00002 3Gi RWO Bound default/claim1 4s
$ kubectl get pvc
NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
claim1 <none> Bound pvc-bb6d2f0c-534c-11e6-9348-42010af00002 3Gi RWO 7s
# delete the claim to release the volume
$ kubectl delete pvc claim1
persistentvolumeclaim "claim1" deleted
# the volume is deleted in response to the release of its claim
$ kubectl get pv
```
#### Ceph RBD
This section will guide you on how to configure and use the Ceph RBD provisioner.
##### Pre-requisites
For this to work you must have a functional Ceph cluster, and the `rbd` command line utility must be installed on any host/container that `kube-controller-manager` or `kubelet` is running on.
##### Configuration
First we must identify the Ceph client admin key. This is usually found in `/etc/ceph/ceph.client.admin.keyring` on your Ceph cluster nodes. The file will look something like this:
```
[client.admin]
key = AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
auid = 0
caps mds = "allow"
caps mon = "allow *"
caps osd = "allow *"
```
From the key value, we will create a secret. We must create the Ceph admin Secret in the namespace defined in our `StorageClass`. In this example we've set the namespace to `kube-system`.
```
$ kubectl create secret generic ceph-secret-admin --from-literal=key='AQBfxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' --namespace=kube-system --type=kubernetes.io/rbd
```
Now modify `examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml` to reflect your environment, particularly the `monitors` field. We are now ready to create our RBD Storage Class:
```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/rbd-storage-class.yaml
```
The kube-controller-manager is now able to provision storage, however we still need to be able to map the RBD volume to a node. Mapping should be done with a non-privileged key; if you have existing users you can get all keys by running `ceph auth list` on your Ceph cluster with the admin key. For this example we will create a new user and pool.
```
$ ceph osd pool create kube 512
$ ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube'
[client.kube]
key = AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==
```
This key will be made into a secret, just like the admin secret. However this user secret will need to be created in every namespace where you intend to consume RBD volumes provisioned in our example storage class. Let's create a namespace called `myns`, and create the user secret in that namespace.
```
kubectl create namespace myns
kubectl create secret generic ceph-secret-user --from-literal=key='AQBQyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy==' --namespace=myns --type=kubernetes.io/rbd
```
You are now ready to provision and use RBD storage.
##### Usage
With the storageclass configured, let's create a PVC in our example namespace, `myns`:
```
$ kubectl create -f examples/persistent-volume-provisioning/claim1.json --namespace=myns
```
Eventually the PVC creation will result in a matching PV and RBD volume:
```
$ kubectl describe pvc --namespace=myns
Name: claim1
Namespace: myns
Status: Bound
Volume: pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels: <none>
Capacity: 3Gi
Access Modes: RWO
No events.
$ kubectl describe pv
Name: pvc-1cfa23b3-664b-11e6-9eb9-90b11c09520d
Labels: <none>
Status: Bound
Claim: myns/claim1
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 3Gi
Message:
Source:
Type: RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
CephMonitors: [127.0.0.1:6789]
RBDImage: kubernetes-dynamic-pvc-1cfb1862-664b-11e6-9a5d-90b11c09520d
FSType:
RBDPool: kube
RadosUser: kube
Keyring: /etc/ceph/keyring
SecretRef: &{ceph-secret-user}
ReadOnly: false
No events.
```
With our storage provisioned, we can now create a Pod to use the PVC:
```
$ kubectl create -f examples/persistent-volume-provisioning/rbd/pod.yaml --namespace=myns
```
Now our pod has an RBD mount!
```
$ export PODNAME=`kubectl get pod --selector='role=server' --namespace=myns --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
$ kubectl exec -it $PODNAME --namespace=myns -- df -h | grep rbd
/dev/rbd1 2.9G 4.5M 2.8G 1% /var/lib/www/html
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/persistent-volume-provisioning/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/persistent-volume-provisioning/README.md](https://github.com/kubernetes/examples/blob/master/staging/persistent-volume-provisioning/README.md)

@ -1,216 +1 @@
## Phabricator example
This example shows how to build a simple multi-tier web application using Kubernetes and Docker.
The example combines a web frontend and an external service that provides MySQL database. We use CloudSQL on Google Cloud Platform in this example, but in principle any approach to running MySQL should work.
### Step Zero: Prerequisites
This example assumes that you have a basic understanding of kubernetes [services](https://kubernetes.io/docs/user-guide/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](https://kubernetes.io/docs/getting-started-guides/):
```sh
$ cd kubernetes
$ cluster/kube-up.sh
```
### Step One: Set up Cloud SQL instance
Follow the [official instructions](https://cloud.google.com/sql/docs/getting-started) to set up Cloud SQL instance.
For the remainder of this example we will assume that your instance is named "phabricator-db", has IP 1.2.3.4, listens on port 3306, and has the password "1234".
### Step Two: Authenticate phabricator in Cloud SQL
To allow Phabricator to connect to your Cloud SQL instance, you need to run the following command to authorize all the nodes in your cluster:
```bash
NODE_NAMES=`kubectl get nodes | cut -d" " -f1 | tail -n+2`
NODE_IPS=`gcloud compute instances list $NODE_NAMES | tr -s " " | cut -d" " -f 5 | tail -n+2`
gcloud sql instances patch phabricator-db --authorized-networks $NODE_IPS
```
Otherwise you will see the following logs:
```bash
$ kubectl logs phabricator-controller-02qp4
[...]
Raw MySQL Error: Attempt to connect to root@1.2.3.4 failed with error
#2013: Lost connection to MySQL server at 'reading initial communication packet', system error: 0.
```
### Step Three: Turn up the phabricator
To start Phabricator server use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json) which describes a [replication controller](https://kubernetes.io/docs/user-guide/replication-controller.md) with a single [pod](https://kubernetes.io/docs/user-guide/pods.md) running an Apache server with Phabricator PHP source:
<!-- BEGIN MUNGE: EXAMPLE phabricator-controller.json -->
```json
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "phabricator-controller",
"labels": {
"name": "phabricator"
}
},
"spec": {
"replicas": 1,
"selector": {
"name": "phabricator"
},
"template": {
"metadata": {
"labels": {
"name": "phabricator"
}
},
"spec": {
"containers": [
{
"name": "phabricator",
"image": "fgrzadkowski/example-php-phabricator",
"ports": [
{
"name": "http-server",
"containerPort": 80
}
],
"env": [
{
"name": "MYSQL_SERVICE_IP",
"value": "1.2.3.4"
},
{
"name": "MYSQL_SERVICE_PORT",
"value": "3306"
},
{
"name": "MYSQL_PASSWORD",
"value": "1234"
}
]
}
]
}
}
}
}
```
[Download example](phabricator-controller.json?raw=true)
<!-- END MUNGE: EXAMPLE phabricator-controller.json -->
Create the phabricator pod in your Kubernetes cluster by running:
```sh
$ kubectl create -f examples/phabricator/phabricator-controller.json
```
**Note:** Remember to substitute the environment variable values in the JSON file before creating the replication controller.
Once that's up you can list the pods in the cluster, to verify that it is running:
```sh
kubectl get pods
```
You'll see a single phabricator pod. It will also display the machine that the pod is running on once it gets placed (may take up to thirty seconds):
```
NAME READY STATUS RESTARTS AGE
phabricator-controller-9vy68 1/1 Running 0 1m
```
If you ssh to that machine, you can run `docker ps` to see the actual pod:
```sh
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-2
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
54983bc33494 fgrzadkowski/phabricator:latest "/run.sh" 2 hours ago Up 2 hours k8s_phabricator.d6b45054_phabricator-controller-02qp4.default.api_eafb1e53-b6a9-11e4-b1ae-42010af05ea6_01c2c4ca
```
(Note that the initial `docker pull` may take a few minutes, depending on network conditions. During this time, the `get pods` command will return `Pending` because the container has not yet started.)
### Step Four: Turn up the phabricator service
A Kubernetes 'service' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via *environment variables*. Services find the containers to load balance based on pod labels. These environment variables are typically referenced in application code, shell scripts, or other places where one node needs to talk to another in a distributed system. You should catch up on [kubernetes services](https://kubernetes.io/docs/user-guide/services.md) before proceeding.
The pod that you created in Step Three has the label `name=phabricator`. The selector field of the service determines which pods will receive the traffic sent to the service.
Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json):
<!-- BEGIN MUNGE: EXAMPLE phabricator-service.json -->
```json
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "phabricator"
},
"spec": {
"ports": [
{
"port": 80,
"targetPort": "http-server"
}
],
"selector": {
"name": "phabricator"
},
"type": "LoadBalancer"
}
}
```
[Download example](phabricator-service.json?raw=true)
<!-- END MUNGE: EXAMPLE phabricator-service.json -->
To create the service run:
```sh
$ kubectl create -f examples/phabricator/phabricator-service.json
phabricator
```
To play with the service itself, find the external IP of the load balancer:
```console
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.0.0.1 443/TCP
phabricator <none> name=phabricator 10.0.31.173 80/TCP
$ kubectl get services phabricator -o json | grep ingress -A 4
"ingress": [
{
"ip": "104.197.13.125"
}
]
```
and then visit port 80 of that IP address.
**Note**: Provisioning of the external IP address may take a few minutes.
**Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node`:
```sh
$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node
```
### Step Five: Cleanup
To turn down a Kubernetes cluster:
```sh
$ cluster/kube-down.sh
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/phabricator/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/phabricator/README.md](https://github.com/kubernetes/examples/blob/master/staging/phabricator/README.md)

@ -1,196 +1 @@
## PSP RBAC Example
This example demonstrates the usage of *PodSecurityPolicy* to control access to privileged containers
based on role and groups.
### Prerequisites
The server must be started with the appropriate APIs and flags enabled:
1. allow privileged containers
1. allow security contexts
1. enable RBAC and accept any token
1. enable PodSecurityPolicies
1. use the PodSecurityPolicy admission controller
If you are using the `local-up-cluster.sh` script you may enable these settings with the following syntax
```
PSP_ADMISSION=true ALLOW_PRIVILEGED=true ALLOW_SECURITY_CONTEXT=true ALLOW_ANY_TOKEN=true ENABLE_RBAC=true RUNTIME_CONFIG="extensions/v1beta1=true,extensions/v1beta1/podsecuritypolicy=true" hack/local-up-cluster.sh
```
### Using the protected port
It is important to note that this example uses the following syntax to test with RBAC
1. `--server=https://127.0.0.1:6443`: when performing requests this ensures that the protected port is used so
that RBAC will be enforced
1. `--token={user}/{group(s)}`: this syntax allows a request to specify the username and groups to use for
testing. It relies on the `ALLOW_ANY_TOKEN` setting.
## Creating the policies, roles, and bindings
### Policies
The first step to enforcing cluster constraints via PSP is to create your policies. In this
example we will use two policies, `restricted` and `privileged`. For simplicity, the only difference
between these policies is the ability to run a privileged container.
```yaml
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: privileged
spec:
fsGroup:
rule: RunAsAny
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
spec:
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
```
To create these policies run
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/system:masters create -f examples/podsecuritypolicy/rbac/policies.yaml
podsecuritypolicy "privileged" created
podsecuritypolicy "restricted" created
```
### Roles and bindings
In order to create a pod, either the creating user or the service account
specified by the pod must be authorized to use a `PodSecurityPolicy` object
that allows the pod. That authorization is determined by the ability to perform
the `use` verb on a particular `podsecuritypolicies` resource. The `use` verb
is a special verb that grants access to use a policy while not permitting any
other access. For this example, we'll first create RBAC `ClusterRoles` that
enable access to `use` specific policies.
1. `restricted-psp-user`: this role allows the `use` verb on the `restricted` policy only
2. `privileged-psp-user`: this role allows the `use` verb on the `privileged` policy only
We can then create `ClusterRoleBindings` to grant groups of users the
"restricted" and/or "privileged" `ClusterRoles`. In this example, the bindings
grant the following roles to groups.
1. `privileged`: this group is bound to both the `privileged-psp-user` and `restricted-psp-user` roles, which gives users
in this group access to both policies.
1. `restricted`: this group is bound to the `restricted-psp-user` role.
1. `system:authenticated`: this is a system group for any authenticated user. It is bound to the `edit`
role which is already provided by the cluster.
To create these roles and bindings run
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/system:masters create -f examples/podsecuritypolicy/rbac/roles.yaml
clusterrole "restricted-psp-user" created
clusterrole "privileged-psp-user" created
$ kubectl --server=https://127.0.0.1:6443 --token=foo/system:masters create -f examples/podsecuritypolicy/rbac/bindings.yaml
clusterrolebinding "privileged-psp-users" created
clusterrolebinding "restricted-psp-users" created
clusterrolebinding "edit" created
```
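For reference, a `ClusterRole` that grants only the `use` verb on a single policy can be expressed as in the sketch below; the exact contents of `roles.yaml` in the repository may differ.
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: restricted-psp-user
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - restricted        # only the `restricted` policy may be used
  verbs:
  - use
```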
## Testing access
### Restricted user can create non-privileged pods
Create the pod
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/restricted-psp-users create -f examples/podsecuritypolicy/rbac/pod.yaml
pod "nginx" created
```
Check the PSP that allowed the pod
```
$ kubectl get pod nginx -o yaml | grep psp
kubernetes.io/psp: restricted
```
### Restricted user cannot create privileged pods
Delete the existing pod
```
$ kubectl delete pod nginx
pod "nginx" deleted
```
Create the privileged pod
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/restricted-psp-users create -f examples/podsecuritypolicy/rbac/pod_priv.yaml
Error from server (Forbidden): error when creating "examples/podsecuritypolicy/rbac/pod_priv.yaml": pods "nginx" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
```
### Privileged user can create non-privileged pods
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/privileged-psp-users create -f examples/podsecuritypolicy/rbac/pod.yaml
pod "nginx" created
```
Check the PSP that allowed the pod. Note, this could be the `restricted` or `privileged` PSP since both allow
for the creation of non-privileged pods.
```
$ kubectl get pod nginx -o yaml | egrep "psp|privileged"
kubernetes.io/psp: privileged
privileged: false
```
### Privileged user can create privileged pods
Delete the existing pod
```
$ kubectl delete pod nginx
pod "nginx" deleted
```
Create the privileged pod
```
$ kubectl --server=https://127.0.0.1:6443 --token=foo/privileged-psp-users create -f examples/podsecuritypolicy/rbac/pod_priv.yaml
pod "nginx" created
```
Check the PSP that allowed the pod.
```
$ kubectl get pod nginx -o yaml | egrep "psp|privileged"
kubernetes.io/psp: privileged
privileged: true
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/podsecuritypolicy/rbac/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/podsecuritypolicy/rbac/README.md](https://github.com/kubernetes/examples/blob/master/staging/podsecuritypolicy/rbac/README.md)

@ -1,285 +1 @@
## Runtime Constraints example
This example demonstrates how Kubernetes enforces runtime constraints for compute resources.
### Prerequisites
For the purpose of this example, we will spin up a one-node cluster using the Vagrant provider, without any additional add-ons that consume node resources. Starting with an empty cluster keeps our demonstration of compute resources easier to follow.
```
$ export KUBERNETES_PROVIDER=vagrant
$ export NUM_NODES=1
$ export KUBE_ENABLE_CLUSTER_MONITORING=none
$ export KUBE_ENABLE_CLUSTER_DNS=false
$ export KUBE_ENABLE_CLUSTER_UI=false
$ cluster/kube-up.sh
```
We should now have a single node cluster running 0 pods.
```
$ cluster/kubectl.sh get nodes
NAME LABELS STATUS AGE
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready 17m
$ cluster/kubectl.sh get pods --all-namespaces
```
When demonstrating runtime constraints, it's useful to show what happens when a node is under heavy load. For
this scenario, we have a single node with 2 cpus and 1GB of memory to demonstrate behavior under load, but the
results extend to multi-node scenarios.
### CPU requests
Each container in a pod may specify the amount of CPU it requests on a node. CPU requests are used at schedule time, and represent a minimum amount of CPU that should be reserved for your container to run.
When executing your container, the kubelet maps your container's CPU requests to CFS shares in the Linux kernel. CFS CPU shares do not impose a ceiling on the actual amount of CPU the container can use. Instead, they define a relative weight across all containers on the system for how much CPU time each container should get if there is CPU contention.
Let's demonstrate this concept using a simple container that will consume as much CPU as possible.
```
$ cluster/kubectl.sh run cpuhog \
--image=busybox \
--requests=cpu=100m \
-- md5sum /dev/urandom
```
This will create a single pod on your node that requests 1/10 of a CPU, but it has no limit on how much CPU it may actually consume
on the node.
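The same request can also be expressed declaratively in a pod spec. The sketch below is the rough equivalent of the `kubectl run` command above; the names are illustrative.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpuhog
spec:
  containers:
  - name: cpuhog
    image: busybox
    args: ["md5sum", "/dev/urandom"]
    resources:
      requests:
        cpu: 100m        # reserved at schedule time; no ceiling without a limit
```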
To demonstrate this, SSH into the node and you will see that the container is consuming as much CPU as possible.
```
$ vagrant ssh node-1
$ sudo docker stats $(sudo docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
6b593b1a9658 0.00% 1.425 MB/1.042 GB 0.14% 1.038 kB/738 B
ae8ae4ffcfe4 150.06% 831.5 kB/1.042 GB 0.08% 0 B/0 B
```
As you can see, it's consuming 150% of the total CPU.
If we scale our replication controller to 20 pods, we should see that each container is given an equal proportion of CPU time.
```
$ cluster/kubectl.sh scale rc/cpuhog --replicas=20
```
Once all the pods are running, you will see on your node that each container is getting approximately an equal proportion of CPU time.
```
$ sudo docker stats $(sudo docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
089e2d061dee 9.24% 786.4 kB/1.042 GB 0.08% 0 B/0 B
0be33d6e8ddb 10.48% 823.3 kB/1.042 GB 0.08% 0 B/0 B
0f4e3c4a93e0 10.43% 786.4 kB/1.042 GB 0.08% 0 B/0 B
```
Each container is getting roughly 10% of the CPU time, in line with its scheduling request, and we are unable to schedule more pods.
As you can see, CPU requests are used to schedule pods to the node and provide a weighted distribution of CPU time
when under contention. If the node is not being actively consumed by other containers, a container is able to burst up to as much
available CPU time as possible. If there is contention for CPU, CPU time is shared based on the requested values.
Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
```
### CPU limits
So what do you do if you want to control the maximum amount of CPU that your container can burst to, in order to provide a consistent
level of service independent of CPU contention on the node? You can specify an upper limit on the total amount of CPU that a pod's
container may consume.
To enforce this feature, your node must run a Docker version >= 1.7, and your operating system kernel must
have support for CFS quota enabled. Finally, the kubelet must be started with the following flag:
```
kubelet --cpu-cfs-quota=true
```
To demonstrate, let's create the same pod again, but this time set an upper limit to use 50% of a single CPU.
```
$ cluster/kubectl.sh run cpuhog \
--image=busybox \
--requests=cpu=100m \
--limits=cpu=500m \
-- md5sum /dev/urandom
```
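Declaratively, the same ceiling is just a `limits` entry alongside the request; a fragment sketch of the container's `resources` stanza:
```yaml
resources:
  requests:
    cpu: 100m
  limits:
    cpu: 500m        # hard ceiling, enforced via CFS quota
```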
Let's SSH into the node, and look at usage stats.
```
$ vagrant ssh node-1
$ sudo su
$ docker stats $(docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
2a196edf7de2 47.38% 835.6 kB/1.042 GB 0.08% 0 B/0 B
...
```
As you can see, the container is no longer allowed to consume all available CPU on the node. Instead, it is being limited to use
50% of a CPU over every 100ms period. As a result, the reported value will be in the range of 50% but may oscillate above and below.
Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
```
### Memory requests
By default, a container is able to consume as much memory on the node as possible. In order to improve placement of your
pods in the cluster, it is recommended to specify the amount of memory your container will require to run. The scheduler
will then take available node memory capacity into account prior to binding your pod to a node.
Let's demonstrate this by creating a pod that runs a single container which requests 100Mi of memory. The container will
allocate and write to 200MB of memory every 2 seconds.
```
$ cluster/kubectl.sh run memhog \
--image=derekwaynecarr/memhog \
--requests=memory=100Mi \
--command \
-- /bin/sh -c "while true; do memhog -r100 200m; sleep 1; done"
```
If you look at the output of `docker stats` on the node:
```
$ docker stats $(docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
2badf74ae782 0.00% 1.425 MB/1.042 GB 0.14% 816 B/348 B
a320182967fa 105.81% 214.2 MB/1.042 GB 20.56% 0 B/0 B
```
As you can see, the container is using approximately 200MB of memory and is bounded only by the 1GB of memory on the node.
We scheduled against 100Mi, but have burst our memory usage to a greater value.
We refer to this as the container having __Burstable__ quality of service for memory.
Let's delete all existing resources in preparation for the next scenario. Verify all the pods are deleted and terminated.
```
$ cluster/kubectl.sh delete rc --all
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
```
### Memory limits
If you specify a memory limit, you can constrain the amount of memory your container can use.
For example, let's limit our container to 200Mi of memory, and just consume 100MB.
```
$ cluster/kubectl.sh run memhog \
--image=derekwaynecarr/memhog \
--limits=memory=200Mi \
--command -- /bin/sh -c "while true; do memhog -r100 100m; sleep 1; done"
```
If you look at the output of `docker stats` on the node:
```
$ docker stats $(docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
5a7c22ae1837 125.23% 109.4 MB/209.7 MB 52.14% 0 B/0 B
c1d7579c9291 0.00% 1.421 MB/1.042 GB 0.14% 1.038 kB/816 B
```
As you can see, we are limited to 200Mi memory, and are only consuming 109.4MB on the node.
Let's demonstrate what happens if you exceed your allowed memory usage by creating a replication controller
whose pod will keep being OOM killed because it attempts to allocate 300MB of memory, but is limited to 200Mi.
```
$ cluster/kubectl.sh run memhog-oom --image=derekwaynecarr/memhog --limits=memory=200Mi --command -- memhog -r100 300m
```
If we describe the created pod, you will see that it keeps restarting until it ultimately goes into a CrashLoopBackOff.
It is killed and restarted because it is OOMKilled when it attempts to exceed its memory limit.
```
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
memhog-oom-gj9hw 0/1 CrashLoopBackOff 2 26s
$ cluster/kubectl.sh describe pods/memhog-oom-gj9hw | grep -C 3 "Terminated"
memory: 200Mi
State: Waiting
Reason: CrashLoopBackOff
Last Termination State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 23 Sep 2015 15:23:58 -0400
```
Let's clean up before proceeding further.
```
$ cluster/kubectl.sh delete rc --all
```
### What if my node runs out of memory?
If you only schedule __Guaranteed__ memory containers, where the request is equal to the limit, then you are not in major danger of
causing an OOM event on your node. If any individual container consumes more than its specified limit, it will be killed.
If you schedule __BestEffort__ memory containers, where neither the request nor the limit is specified, or __Burstable__ memory containers, where
the request is less than any specified limit, then it is possible that a container will request more memory than is actually available on the node.
If this occurs, the system will attempt to prioritize the containers that are killed based on their quality of service. This is done
by using the OOMScoreAdjust feature in the Linux kernel which provides a heuristic to rank a process between -1000 and 1000. Processes
with lower values are preserved in favor of processes with higher values. The system daemons (kubelet, kube-proxy, docker) all run with
low OOMScoreAdjust values.
In simplest terms, __Guaranteed__ memory containers are given a lower value than __Burstable__ containers, which in turn have
a lower value than __BestEffort__ containers. As a consequence, __BestEffort__ containers should be killed before the other tiers.
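To make the tiers concrete, the sketch below shows `resources` stanzas that would place a container into each memory quality-of-service class; the values are illustrative fragments, not complete pod specs.
```yaml
# Guaranteed: the memory request equals the memory limit
resources:
  requests:
    memory: 600Mi
  limits:
    memory: 600Mi
---
# Burstable: a request is set but is lower than the limit
resources:
  requests:
    memory: 100Mi
  limits:
    memory: 600Mi
---
# BestEffort: no requests or limits at all
resources: {}
```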
To demonstrate this, let's spin up a set of different replication controllers that will overcommit the node.
```
$ cluster/kubectl.sh run mem-guaranteed --image=derekwaynecarr/memhog --replicas=2 --requests=cpu=10m --limits=memory=600Mi --command -- memhog -r100000 500m
$ cluster/kubectl.sh run mem-burstable --image=derekwaynecarr/memhog --replicas=2 --requests=cpu=10m,memory=600Mi --command -- memhog -r100000 100m
$ cluster/kubectl.sh run mem-besteffort --replicas=10 --image=derekwaynecarr/memhog --requests=cpu=10m --command -- memhog -r10000 500m
```
This will induce a SystemOOM
```
$ cluster/kubectl.sh get events | grep OOM
43m 8m 178 10.245.1.3 Node SystemOOM {kubelet 10.245.1.3} System OOM encountered
```
If you look at the pods:
```
$ cluster/kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
...
mem-besteffort-zpnpm 0/1 CrashLoopBackOff 4 3m
mem-burstable-n0yz1 1/1 Running 0 4m
mem-burstable-q3dts 1/1 Running 0 4m
mem-guaranteed-fqsw8 1/1 Running 0 4m
mem-guaranteed-rkqso 1/1 Running 0 4m
```
You can see that our BestEffort pod goes into a restart cycle, but the pods with greater levels of quality of service continue to function.
As you can see, we rely on the kernel to react to system OOM events. Depending on how your host operating
system was configured, and which process the kernel ultimately decides to kill on your node, you may experience unstable results. In addition, during an OOM event, while the kernel is cleaning up processes, the system may experience significant periods of slowdown or appear unresponsive. As a result, while the system allows you to overcommit on memory, we recommend not inducing a kernel OOM.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/runtime-constraints/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/runtime-constraints/README.md](https://github.com/kubernetes/examples/blob/master/staging/runtime-constraints/README.md)

@ -1,199 +1 @@
## Selenium on Kubernetes
Selenium is a browser automation tool used primarily for testing web applications. However when Selenium is used in a CI pipeline to test applications, there is often contention around the use of Selenium resources. This example shows you how to deploy Selenium to Kubernetes in a scalable fashion.
### Prerequisites
This example assumes you have a working Kubernetes cluster and a properly configured kubectl client. See the [Getting Started Guides](https://kubernetes.io/docs/getting-started-guides/) for details.
Google Container Engine is also a quick way to get Kubernetes up and running: https://cloud.google.com/container-engine/
Your cluster must have 4 CPUs and 6 GB of RAM to complete the example up to the scaling portion.
### Deploy Selenium Grid Hub:
We will be using Selenium Grid Hub to make our Selenium install scalable via a master/worker model. The Selenium Hub is the master, and the Selenium Nodes are the workers (not to be confused with Kubernetes nodes). We only need one hub, but we're using a replication controller to ensure that the hub is always running:
```console
kubectl create --filename=examples/selenium/selenium-hub-rc.yaml
```
The Selenium Nodes will need to know how to get to the Hub, so let's create a service for the nodes to connect to.
```console
kubectl create --filename=examples/selenium/selenium-hub-svc.yaml
```
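For orientation, the hub service simply exposes the hub pod's port 4444. A minimal sketch is shown below; the `NodePort` type and the `app=selenium-hub` selector are assumptions based on the lookups later in this example, and the repository's `selenium-hub-svc.yaml` is the authoritative version.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    app: selenium-hub
spec:
  type: NodePort           # assumed; enables the nodeport lookup shown below
  selector:
    app: selenium-hub
  ports:
  - port: 4444
    targetPort: 4444
```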
### Verify Selenium Hub Deployment
Let's verify our deployment of Selenium hub by connecting to the web console.
#### Kubernetes Nodes Reachable
If your Kubernetes nodes are reachable from your network, you can verify the hub by hitting it on the nodeport. You can retrieve the nodeport by typing `kubectl describe svc selenium-hub`, however the snippet below automates that by using kubectl's template functionality:
```console
export NODEPORT=`kubectl get svc --selector='app=selenium-hub' --output=template --template="{{ with index .items 0}}{{with index .spec.ports 0 }}{{.nodePort}}{{end}}{{end}}"`
export NODE=`kubectl get nodes --output=template --template="{{with index .items 0 }}{{.metadata.name}}{{end}}"`
curl http://$NODE:$NODEPORT
```
#### Kubernetes Nodes Unreachable
If you cannot reach your Kubernetes nodes from your network, you can proxy via kubectl.
```console
export PODNAME=`kubectl get pods --selector="app=selenium-hub" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
kubectl port-forward $PODNAME 4444:4444
```
In a separate terminal, you can now check the status.
```console
curl http://localhost:4444
```
#### Using Google Container Engine
If you are using Google Container Engine, you can expose your hub via the internet. This is a bad idea for many reasons, but you can do it as follows:
```console
kubectl expose rc selenium-hub --name=selenium-hub-external --labels="app=selenium-hub,external=true" --type=LoadBalancer
```
Then wait a few minutes; eventually your new `selenium-hub-external` service will be assigned a load-balanced IP from gcloud. Once `kubectl get svc selenium-hub-external` shows two IPs, run this snippet.
```console
export INTERNET_IP=`kubectl get svc --selector="app=selenium-hub,external=true" --output=template --template="{{with index .items 0}}{{with index .status.loadBalancer.ingress 0}}{{.ip}}{{end}}{{end}}"`
curl http://$INTERNET_IP:4444/
```
You should now be able to hit `$INTERNET_IP` via your web browser, and so can everyone else on the Internet!
### Deploy Firefox and Chrome Nodes:
Now that the Hub is up, we can deploy workers.
This will deploy 2 Chrome nodes.
```console
kubectl create --filename=examples/selenium/selenium-node-chrome-rc.yaml
```
And 2 Firefox nodes to match.
```console
kubectl create --filename=examples/selenium/selenium-node-firefox-rc.yaml
```
Once the pods start, you will see them show up in the Selenium Hub interface.
### Run a Selenium Job
Let's run a quick Selenium job to validate our setup.
#### Setup Python Environment
First, we need to start a python container that we can attach to.
```console
kubectl run selenium-python --image=google/python-hello
```
Next, we need to get inside this container.
```console
export PODNAME=`kubectl get pods --selector="run=selenium-python" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}"`
kubectl exec --stdin=true --tty=true $PODNAME bash
```
Once inside, we need to install the Selenium library
```console
pip install selenium
```
#### Run Selenium Job with Python
We're all set up; start the Python interpreter.
```console
python
```
And paste in the contents of selenium-test.py.
```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
def check_browser(browser):
driver = webdriver.Remote(
command_executor='http://selenium-hub:4444/wd/hub',
desired_capabilities=getattr(DesiredCapabilities, browser)
)
driver.get("http://google.com")
assert "google" in driver.page_source
driver.close()
print("Browser %s checks out!" % browser)
check_browser("FIREFOX")
check_browser("CHROME")
```
You should get
```
>>> check_browser("FIREFOX")
Browser FIREFOX checks out!
>>> check_browser("CHROME")
Browser CHROME checks out!
```
Congratulations, your Selenium Hub is up, with Firefox and Chrome nodes!
### Scale your Firefox and Chrome nodes.
If you need more Firefox or Chrome nodes, your hardware is the limit:
```console
kubectl scale rc selenium-node-firefox --replicas=10
kubectl scale rc selenium-node-chrome --replicas=10
```
You now have 10 Firefox and 10 Chrome nodes, happy Seleniuming!
### Debugging
Sometimes it is necessary to check on a hung test. Each pod is running VNC. To check on one of the browser nodes via VNC, it's recommended that you proxy, since we don't want to expose a service for every pod, and the containers have a weak VNC password. Replace POD_NAME with the name of the pod you want to connect to.
```console
kubectl port-forward $POD_NAME 5900:5900
```
Then connect to localhost:5900 with your VNC client using the password "secret".
Enjoy your scalable Selenium Grid!
Adapted from: https://github.com/SeleniumHQ/docker-selenium
### Teardown
To remove all created resources, run the following:
```console
kubectl delete rc selenium-hub
kubectl delete rc selenium-node-chrome
kubectl delete rc selenium-node-firefox
kubectl delete deployment selenium-python
kubectl delete svc selenium-hub
kubectl delete svc selenium-hub-external
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/selenium/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/selenium/README.md](https://github.com/kubernetes/examples/blob/master/staging/selenium/README.md)

@ -1,187 +1 @@
# Sharing Clusters
This example demonstrates how to access one Kubernetes cluster from another. It only works if both clusters are running on the same network, on a cloud provider that provides a private IP range per network (e.g. GCE, GKE, AWS).
## Setup
Create a cluster in the US (you don't need to do this if you already have a running Kubernetes cluster):
```shell
$ cluster/kube-up.sh
```
Before creating our second cluster, let's have a look at the kubectl config:
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://104.197.84.16
name: <clustername_us>
...
current-context: <clustername_us>
...
```
Now spin up the second cluster in Europe
```shell
$ ./cluster/kube-up.sh
$ KUBE_GCE_ZONE=europe-west1-b KUBE_GCE_INSTANCE_PREFIX=eu ./cluster/kube-up.sh
```
Your kubectl config should contain both clusters:
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://146.148.25.221
name: <clustername_eu>
- cluster:
certificate-authority-data: REDACTED
server: https://104.197.84.16
name: <clustername_us>
...
current-context: kubernetesdev_eu
...
```
And kubectl get nodes should agree:
```
$ kubectl get nodes
NAME LABELS STATUS
eu-node-0n61 kubernetes.io/hostname=eu-node-0n61 Ready
eu-node-79ua kubernetes.io/hostname=eu-node-79ua Ready
eu-node-7wz7 kubernetes.io/hostname=eu-node-7wz7 Ready
eu-node-loh2 kubernetes.io/hostname=eu-node-loh2 Ready
$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME LABELS STATUS
kubernetes-node-5jtd kubernetes.io/hostname=kubernetes-node-5jtd Ready
kubernetes-node-lqfc kubernetes.io/hostname=kubernetes-node-lqfc Ready
kubernetes-node-sjra kubernetes.io/hostname=kubernetes-node-sjra Ready
kubernetes-node-wul8 kubernetes.io/hostname=kubernetes-node-wul8 Ready
```
## Testing reachability
For this test to work we'll need to create a service in Europe:
```
$ kubectl config use-context <clustername_eu>
$ kubectl create -f /tmp/secret.json
$ kubectl create -f examples/https-nginx/nginx-app.yaml
$ kubectl exec -it my-nginx-luiln -- sh -c 'echo "Europe nginx" >> /usr/share/nginx/html/index.html'
$ kubectl get ep
NAME ENDPOINTS
kubernetes 10.240.249.92:443
nginxsvc 10.244.0.4:80,10.244.0.4:443
```
Just to test reachability, we'll try hitting the Europe nginx from our initial US central cluster. Create a basic curl pod in the US cluster:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: curlpod
spec:
containers:
- image: radial/busyboxplus:curl
command:
- sleep
- "360000000"
imagePullPolicy: IfNotPresent
name: curlcontainer
restartPolicy: Always
```
And test that you can actually reach the test nginx service across continents
```
$ kubectl config use-context <clustername_us>
$ kubectl -it exec curlpod -- /bin/sh
[ root@curlpod:/ ]$ curl http://10.244.0.4:80
Europe nginx
```
## Granting access to the remote cluster
We will grant the US cluster access to the Europe cluster. Basically, we're going to set up a secret that allows kubectl to function in a pod running in the US cluster, just like it did on our local machine in the previous step. First, create a secret with the contents of the current .kube/config:
```shell
$ kubectl config use-context <clustername_eu>
$ go run ./make_secret.go --kubeconfig=$HOME/.kube/config > /tmp/secret.json
$ kubectl config use-context <clustername_us>
$ kubectl create -f /tmp/secret.json
```
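The generated `secret.json` is essentially a Secret carrying your kubeconfig; in YAML form it is roughly the sketch below. The key name `config` matches the `/.kube/config` path used by the pod that follows, but the exact output of `make_secret.go` may differ.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: kubeconfig
type: Opaque
data:
  config: <base64-encoded contents of ~/.kube/config>
```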
Create a kubectl pod that uses the secret, in the US cluster.
```json
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kubectl-tester"
},
"spec": {
"volumes": [
{
"name": "secret-volume",
"secret": {
"secretName": "kubeconfig"
}
}
],
"containers": [
{
"name": "kubectl",
"image": "bprashanth/kubectl:0.0",
"imagePullPolicy": "Always",
"env": [
{
"name": "KUBECONFIG",
"value": "/.kube/config"
}
],
"args": [
"proxy", "-p", "8001"
],
"volumeMounts": [
{
"name": "secret-volume",
"mountPath": "/.kube"
}
]
}
]
}
}
```
And check that you can access the remote cluster
```shell
$ kubectl config use-context <clustername_us>
$ kubectl exec -it kubectl-tester bash
kubectl-tester $ kubectl get nodes
NAME LABELS STATUS
eu-node-0n61 kubernetes.io/hostname=eu-node-0n61 Ready
eu-node-79ua kubernetes.io/hostname=eu-node-79ua Ready
eu-node-7wz7 kubernetes.io/hostname=eu-node-7wz7 Ready
eu-node-loh2 kubernetes.io/hostname=eu-node-loh2 Ready
```
For a more advanced example of sharing clusters, see the [service-loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer/README.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/sharing-clusters/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/sharing-clusters/README.md](https://github.com/kubernetes/examples/blob/master/staging/sharing-clusters/README.md)

@ -1,373 +1 @@
# Spark example
Following this example, you will create a functional [Apache
Spark](http://spark.apache.org/) cluster using Kubernetes and
[Docker](http://docker.io).
You will set up a Spark master service and a set of Spark workers using Spark's [standalone mode](http://spark.apache.org/docs/latest/spark-standalone.html).
For the impatient expert, jump straight to the [tl;dr](#tldr)
section.
### Sources
The Docker images are heavily based on https://github.com/mattf/docker-spark and are curated in https://github.com/kubernetes/application-images/tree/master/spark.
The Spark UI Proxy is taken from https://github.com/aseigneurin/spark-ui-proxy.
The PySpark examples are taken from http://stackoverflow.com/questions/4114167/checking-if-a-number-is-a-prime-number-in-python/27946768#27946768
## Step Zero: Prerequisites
This example assumes
- You have a Kubernetes cluster installed and running.
- That you have the ```kubectl``` command line tool installed in your path and configured to talk to your Kubernetes cluster
- That your Kubernetes cluster is running [kube-dns](https://github.com/kubernetes/dns) or an equivalent integration.
Optionally, your Kubernetes cluster should be configured with a Loadbalancer integration (automatically configured via kube-up or GKE)
## Step One: Create namespace
```sh
$ kubectl create -f examples/spark/namespace-spark-cluster.yaml
```
Now list all namespaces:
```sh
$ kubectl get namespaces
NAME LABELS STATUS
default <none> Active
spark-cluster name=spark-cluster Active
```
To configure kubectl to work with our namespace, we will create a new context using our current context as a base:
```sh
$ CURRENT_CONTEXT=$(kubectl config view -o jsonpath='{.current-context}')
$ USER_NAME=$(kubectl config view -o jsonpath='{.contexts[?(@.name == "'"${CURRENT_CONTEXT}"'")].context.user}')
$ CLUSTER_NAME=$(kubectl config view -o jsonpath='{.contexts[?(@.name == "'"${CURRENT_CONTEXT}"'")].context.cluster}')
$ kubectl config set-context spark --namespace=spark-cluster --cluster=${CLUSTER_NAME} --user=${USER_NAME}
$ kubectl config use-context spark
```
## Step Two: Start your Master service
The Master [service](https://kubernetes.io/docs/user-guide/services.md) is the master service
for a Spark cluster.
Use the
[`examples/spark/spark-master-controller.yaml`](spark-master-controller.yaml)
file to create a
[replication controller](https://kubernetes.io/docs/user-guide/replication-controller.md)
running the Spark Master service.
```console
$ kubectl create -f examples/spark/spark-master-controller.yaml
replicationcontroller "spark-master-controller" created
```
Then, use the
[`examples/spark/spark-master-service.yaml`](spark-master-service.yaml) file to
create a logical service endpoint that Spark workers can use to access the
Master pod:
```console
$ kubectl create -f examples/spark/spark-master-service.yaml
service "spark-master" created
```
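For reference, the master service needs to expose Spark's cluster port (7077 by default) under the DNS name `spark-master` that the workers will connect to. A minimal sketch follows; the selector label is an assumption, and the repository's `spark-master-service.yaml` is authoritative (it may also expose the web UI port).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: spark-master
spec:
  selector:
    component: spark-master      # assumed label on the master pod
  ports:
  - port: 7077
    targetPort: 7077
```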
### Check to see if Master is running and accessible
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-master-controller-5u0q5 1/1 Running 0 8m
```
Check logs to see the status of the master. (Use the pod retrieved from the previous output.)
```sh
$ kubectl logs spark-master-controller-5u0q5
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.5.1-bin-hadoop2.6/sbin/../logs/spark--org.apache.spark.deploy.master.Master-1-spark-master-controller-g0oao.out
Spark Command: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -cp /opt/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/opt/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar -Xms1g -Xmx1g org.apache.spark.deploy.master.Master --ip spark-master --port 7077 --webui-port 8080
========================================
15/10/27 21:25:05 INFO Master: Registered signal handlers for [TERM, HUP, INT]
15/10/27 21:25:05 INFO SecurityManager: Changing view acls to: root
15/10/27 21:25:05 INFO SecurityManager: Changing modify acls to: root
15/10/27 21:25:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/10/27 21:25:06 INFO Slf4jLogger: Slf4jLogger started
15/10/27 21:25:06 INFO Remoting: Starting remoting
15/10/27 21:25:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark-master:7077]
15/10/27 21:25:06 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
15/10/27 21:25:07 INFO Master: Starting Spark master at spark://spark-master:7077
15/10/27 21:25:07 INFO Master: Running Spark version 1.5.1
15/10/27 21:25:07 INFO Utils: Successfully started service 'MasterUI' on port 8080.
15/10/27 21:25:07 INFO MasterWebUI: Started MasterWebUI at http://spark-master:8080
15/10/27 21:25:07 INFO Utils: Successfully started service on port 6066.
15/10/27 21:25:07 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
15/10/27 21:25:07 INFO Master: I have been elected leader! New state: ALIVE
```
Once the master is started, we'll want to check the Spark WebUI. In order to access the Spark WebUI, we will deploy a [specialized proxy](https://github.com/aseigneurin/spark-ui-proxy). This proxy is necessary to access worker logs from the Spark UI.
Deploy the proxy controller with [`examples/spark/spark-ui-proxy-controller.yaml`](spark-ui-proxy-controller.yaml):
```console
$ kubectl create -f examples/spark/spark-ui-proxy-controller.yaml
replicationcontroller "spark-ui-proxy-controller" created
```
We'll also need a corresponding Loadbalanced service for our Spark Proxy [`examples/spark/spark-ui-proxy-service.yaml`](spark-ui-proxy-service.yaml):
```console
$ kubectl create -f examples/spark/spark-ui-proxy-service.yaml
service "spark-ui-proxy" created
```
After creating the service, you should eventually get a loadbalanced endpoint:
```console
$ kubectl get svc spark-ui-proxy -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
spark-ui-proxy 10.0.51.107 aad59283284d611e6839606c214502b5-833417581.us-east-1.elb.amazonaws.com 80/TCP 9m component=spark-ui-proxy
```
The Spark UI in the above example output will be available at http://aad59283284d611e6839606c214502b5-833417581.us-east-1.elb.amazonaws.com
If your Kubernetes cluster is not equipped with a Loadbalancer integration, you will need to use the [kubectl proxy](https://kubernetes.io/docs/user-guide/accessing-the-cluster.md#using-kubectl-proxy) to
connect to the Spark WebUI:
```console
kubectl proxy --port=8001
```
At which point the UI will be available at
[http://localhost:8001/api/v1/proxy/namespaces/spark-cluster/services/spark-master:8080/](http://localhost:8001/api/v1/proxy/namespaces/spark-cluster/services/spark-master:8080/).
## Step Three: Start your Spark workers
The Spark workers do the heavy lifting in a Spark cluster. They
provide execution resources and data cache capabilities for your
program.
The Spark workers need the Master service to be running.
Use the [`examples/spark/spark-worker-controller.yaml`](spark-worker-controller.yaml) file to create a
[replication controller](https://kubernetes.io/docs/user-guide/replication-controller.md) that manages the worker pods.
```console
$ kubectl create -f examples/spark/spark-worker-controller.yaml
replicationcontroller "spark-worker-controller" created
```
### Check to see if the workers are running
If you launched the Spark WebUI, your workers should just appear in the UI when
they're ready. (It may take a little bit to pull the images and launch the
pods.) You can also interrogate the status in the following way:
```console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
spark-master-controller-5u0q5 1/1 Running 0 25m
spark-worker-controller-e8otp 1/1 Running 0 6m
spark-worker-controller-fiivl 1/1 Running 0 6m
spark-worker-controller-ytc7o 1/1 Running 0 6m
$ kubectl logs spark-master-controller-5u0q5
[...]
15/10/26 18:20:14 INFO Master: Registering worker 10.244.1.13:53567 with 2 cores, 6.3 GB RAM
15/10/26 18:20:14 INFO Master: Registering worker 10.244.2.7:46195 with 2 cores, 6.3 GB RAM
15/10/26 18:20:14 INFO Master: Registering worker 10.244.3.8:39926 with 2 cores, 6.3 GB RAM
```
## Step Four: Start the Zeppelin UI to launch jobs on your Spark cluster
The Zeppelin UI pod can be used to launch jobs into the Spark cluster either via
a web notebook frontend or the traditional Spark command line. See
[Zeppelin](https://zeppelin.incubator.apache.org/) and
[Spark architecture](https://spark.apache.org/docs/latest/cluster-overview.html)
for more details.
Deploy Zeppelin:
```console
$ kubectl create -f examples/spark/zeppelin-controller.yaml
replicationcontroller "zeppelin-controller" created
```
And the corresponding service:
```console
$ kubectl create -f examples/spark/zeppelin-service.yaml
service "zeppelin" created
```
Zeppelin needs the spark-master service to be running.
### Check to see if Zeppelin is running
```console
$ kubectl get pods -l component=zeppelin
NAME READY STATUS RESTARTS AGE
zeppelin-controller-ja09s 1/1 Running 0 53s
```
## Step Five: Do something with the cluster
Now you have two choices, depending on your predilections. You can do something
graphical with the Spark cluster, or you can stay in the CLI.
For both choices, we will be working with this Python snippet:
```python
from math import sqrt; from itertools import count, islice
def isprime(n):
return n > 1 and all(n%i for i in islice(count(2), int(sqrt(n)-1)))
nums = sc.parallelize(xrange(10000000))
print nums.filter(isprime).count()
```
### Do something fast with pyspark!
Simply copy and paste the python snippet into pyspark from within the zeppelin pod:
```console
$ kubectl exec zeppelin-controller-ja09s -it pyspark
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.5.1
/_/
Using Python version 2.7.9 (default, Mar 1 2015 12:57:24)
SparkContext available as sc, HiveContext available as sqlContext.
>>> from math import sqrt; from itertools import count, islice
>>>
>>> def isprime(n):
... return n > 1 and all(n%i for i in islice(count(2), int(sqrt(n)-1)))
...
>>> nums = sc.parallelize(xrange(10000000))
>>> print nums.filter(isprime).count()
664579
```
Congratulations, you now know how many prime numbers there are within the first 10 million numbers!
### Do something graphical and shiny!
Creating the Zeppelin service should have yielded you a Loadbalancer endpoint:
```console
$ kubectl get svc zeppelin -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
zeppelin 10.0.154.1 a596f143884da11e6839506c114532b5-121893930.us-east-1.elb.amazonaws.com 80/TCP 3m component=zeppelin
```
If your Kubernetes cluster does not have a Loadbalancer integration, then we will have to use port forwarding.
Take the Zeppelin pod from before and port-forward the WebUI port:
```console
$ kubectl port-forward zeppelin-controller-ja09s 8080:8080
```
This forwards `localhost` 8080 to container port 8080. You can then find
Zeppelin at [http://localhost:8080/](http://localhost:8080/).
Once you've loaded up the Zeppelin UI, create a "New Notebook". In there we will paste our python snippet, but we need to add a `%pyspark` hint for Zeppelin to understand it:
```
%pyspark
from math import sqrt; from itertools import count, islice
def isprime(n):
return n > 1 and all(n%i for i in islice(count(2), int(sqrt(n)-1)))
nums = sc.parallelize(xrange(10000000))
print nums.filter(isprime).count()
```
After pasting in our code, press shift+enter or click the play icon to the right of our snippet. The Spark job will run and once again we'll have our result!
## Result
You now have services and replication controllers for the Spark master, Spark
workers and Spark driver. You can take this example to the next step and start
using the Apache Spark cluster you just created, see
[Spark documentation](https://spark.apache.org/documentation.html) for more
information.
## tl;dr
```console
kubectl create -f examples/spark
```
After it's set up:
```console
kubectl get pods # Make sure everything is running
kubectl get svc -o wide # Get the Loadbalancer endpoints for spark-ui-proxy and zeppelin
```
At which point the Master UI and Zeppelin will be available at the URLs under the `EXTERNAL-IP` field.
You can also interact with the Spark cluster using the traditional `spark-shell` /
`spark-submit` / `pyspark` commands by using `kubectl exec` against the
`zeppelin-controller` pod.
If your Kubernetes cluster does not have a Loadbalancer integration, use `kubectl proxy` and `kubectl port-forward` to access the Spark UI and Zeppelin.
For Spark UI:
```console
kubectl proxy --port=8001
```
Then visit [http://localhost:8001/api/v1/proxy/namespaces/spark-cluster/services/spark-ui-proxy/](http://localhost:8001/api/v1/proxy/namespaces/spark-cluster/services/spark-ui-proxy/).
For Zeppelin:
```console
kubectl port-forward zeppelin-controller-abc123 8080:8080 &
```
Then visit [http://localhost:8080/](http://localhost:8080/).
## Known Issues With Spark
* This provides a Spark configuration that is restricted to the cluster network,
meaning the Spark master is only available as a cluster service. If you need
to submit jobs using an external client other than Zeppelin or `spark-submit` on
the `zeppelin` pod, you will need to provide a way for your clients to get to
the
[`examples/spark/spark-master-service.yaml`](spark-master-service.yaml). See
[Services](https://kubernetes.io/docs/user-guide/services.md) for more information.
## Known Issues With Zeppelin
* The Zeppelin pod is large, so it may take a while to pull depending on your
network. The size of the Zeppelin pod is something we're working on, see issue #17231.
* Zeppelin may take some time (about a minute) on this pipeline the first time
you run it. It seems to take considerable time to load.
* On GKE, `kubectl port-forward` may not be stable over long periods of time. If
you see Zeppelin go into `Disconnected` state (there will be a red dot on the
top right as well), the `port-forward` probably failed and needs to be
restarted. See #12179.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/spark/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/spark/README.md](https://github.com/kubernetes/examples/blob/master/staging/spark/README.md)

@ -1,123 +1 @@
# Spark on GlusterFS example
This guide is an extension of the standard [Spark on Kubernetes Guide](../../../examples/spark/) and describes how to run Spark on GlusterFS using the [Kubernetes Volume Plugin for GlusterFS](../../../examples/volumes/glusterfs/).
The setup is the same in that you will set up a Spark Master Service just as in the standard Spark guide, but you will deploy a modified Spark Master and a modified Spark Worker ReplicationController, since both are changed to use the GlusterFS volume plugin to mount a GlusterFS volume into the Spark Master and Spark Worker containers. Note that this example can be used as a guide for implementing any of the Kubernetes Volume Plugins with the Spark example.
[There is also a video available that provides a walkthrough for how to set this solution up](https://youtu.be/xyIaoM0-gM0)
## Step Zero: Prerequisites
This example assumes that you have been able to successfully get the standard Spark Example working in Kubernetes and that you have a GlusterFS cluster that is accessible from your Kubernetes cluster. It is also recommended that you are familiar with the GlusterFS Volume Plugin and how to configure it.
## Step One: Define the endpoints for your GlusterFS Cluster
Modify the `examples/spark/spark-gluster/glusterfs-endpoints.yaml` file to list the IP addresses of some of the servers in your GlusterFS cluster. The GlusterFS Volume Plugin uses these IP addresses to perform a Fuse Mount of the GlusterFS Volume into the Spark Worker Containers that are launched by the ReplicationController in the next section.
Register your endpoints by running the following command:
```console
$ kubectl create -f examples/spark/spark-gluster/glusterfs-endpoints.yaml
```
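An endpoints object of this kind simply lists the GlusterFS server IPs together with an arbitrary port. A sketch of what the modified file might contain is shown below; the object name and IP addresses are placeholders that must match your own cluster.
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster        # assumed name; must match the volume definition
subsets:
- addresses:
  - ip: 192.168.30.104           # replace with a GlusterFS server in your cluster
  ports:
  - port: 1                      # a port is required, but the value is not used
- addresses:
  - ip: 192.168.30.105
  ports:
  - port: 1
```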
## Step Two: Modify and Submit your Spark Master ReplicationController
Modify the `examples/spark/spark-gluster/spark-master-controller.yaml` file to reflect the GlusterFS Volume that you wish to use in the PATH parameter of the volumes subsection.
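The stanza to adjust looks roughly like the sketch below; the endpoints name, the GlusterFS volume name in `path`, and the mount path are assumptions that must match your environment (the repository file is authoritative).
```yaml
# inside the pod template's spec:
volumes:
- name: glusterfsvol
  glusterfs:
    endpoints: glusterfs-cluster   # must match the Endpoints object registered above
    path: MyVolume                 # the GlusterFS volume name (the PATH parameter)
    readOnly: false
# and inside the container definition:
volumeMounts:
- name: glusterfsvol
  mountPath: /mnt/glusterfs        # where Spark jobs later read and write data
```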
Submit the Spark Master Pod
```console
$ kubectl create -f examples/spark/spark-gluster/spark-master-controller.yaml
```
Verify that the Spark Master Pod deployed successfully.
```console
$ kubectl get pods
```
Submit the Spark Master Service
```console
$ kubectl create -f examples/spark/spark-gluster/spark-master-service.yaml
```
Verify that the Spark Master Service deployed successfully.
```console
$ kubectl get services
```
## Step Three: Start your Spark workers
Modify the `examples/spark/spark-gluster/spark-worker-controller.yaml` file to reflect the GlusterFS Volume that you wish to use in the PATH parameter of the Volumes subsection.
Make sure that the replication factor for the pods is not greater than the number of Kubernetes nodes available in your Kubernetes cluster.
Submit your Spark Worker ReplicationController by running the following command:
```console
$ kubectl create -f examples/spark/spark-gluster/spark-worker-controller.yaml
```
Verify that the Spark Worker ReplicationController deployed its pods successfully.
```console
$ kubectl get pods
```
Follow the steps from the standard example to verify the Spark Worker pods have registered successfully with the Spark Master.
## Step Four: Submit a Spark Job
All the Spark Workers and the Spark Master in your cluster have a mount to GlusterFS. This means that any of them can be used as the Spark client to submit a job. For simplicity, let's use the Spark Master as an example.
The Spark Worker and Spark Master containers include a setup_client utility script that takes two parameters: the Service IP of the Spark Master and the port that it is running on. This must be done to set up the container as a Spark client prior to submitting any Spark jobs.
Obtain the Service IP (listed as IP:) and Full Pod Name by running
```console
$ kubectl describe pod spark-master-controller
```
Now we will shell into the Spark Master Container and run a Spark Job. In the example below, we are running the Spark Wordcount example and specifying the input and output directory at the location where GlusterFS is mounted in the Spark Master Container. This will submit the job to the Spark Master, which will distribute the work to all the Spark Worker Containers.
All the Spark Worker containers will be able to access the data as they all have the same GlusterFS volume mounted at /mnt/glusterfs. The reason we are submitting the job from the Spark Master rather than an additional Spark Base container (as in the standard Spark Example) is that the Spark instance submitting the job must be able to access the data. Only the Spark Master and Spark Worker containers have GlusterFS mounted.
Shell into the Master Spark Node (spark-master-controller) by running
```console
kubectl exec spark-master-controller-<ID> -i -t -- bash -i
root@spark-master-controller-c1sqd:/# . /setup_client.sh <Service IP> 7077
root@spark-master-controller-c1sqd:/# pyspark
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
15/06/26 14:25:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 1.4.0
/_/
Using Python version 2.7.9 (default, Mar 1 2015 12:57:24)
SparkContext available as sc, HiveContext available as sqlContext.
>>> file = sc.textFile("/mnt/glusterfs/somefile.txt")
>>> counts = file.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)
>>> counts.saveAsTextFile("/mnt/glusterfs/output")
```
While still in the container, you can see the output of your Spark Job in the Distributed File System by running the following:
```console
root@spark-master-controller-c1sqd:/# ls -l /mnt/glusterfs/output
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/spark/spark-gluster/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/spark/spark-gluster/README.md](https://github.com/kubernetes/examples/blob/master/staging/spark/spark-gluster/README.md)


@ -1,854 +1 @@
# Cloud Native Deployments of Cassandra using Kubernetes
## Table of Contents
- [Prerequisites](#prerequisites)
- [Cassandra Docker](#cassandra-docker)
- [Quickstart](#quickstart)
- [Step 1: Create a Cassandra Headless Service](#step-1-create-a-cassandra-headless-service)
- [Step 2: Use a StatefulSet to create Cassandra Ring](#step-2-use-a-statefulset-to-create-cassandra-ring)
- [Step 3: Validate and Modify The Cassandra StatefulSet](#step-3-validate-and-modify-the-cassandra-statefulset)
- [Step 4: Delete Cassandra StatefulSet](#step-4-delete-cassandra-statefulset)
- [Step 5: Use a Replication Controller to create Cassandra node pods](#step-5-use-a-replication-controller-to-create-cassandra-node-pods)
- [Step 6: Scale up the Cassandra cluster](#step-6-scale-up-the-cassandra-cluster)
- [Step 7: Delete the Replication Controller](#step-7-delete-the-replication-controller)
- [Step 8: Use a DaemonSet instead of a Replication Controller](#step-8-use-a-daemonset-instead-of-a-replication-controller)
- [Step 9: Resource Cleanup](#step-9-resource-cleanup)
- [Seed Provider Source](#seed-provider-source)
The following document describes the development of a _cloud native_
[Cassandra](http://cassandra.apache.org/) deployment on Kubernetes. When we say
_cloud native_, we mean an application which understands that it is running
within a cluster manager, and uses this cluster management infrastructure to
help implement the application. In particular, in this instance, a custom
Cassandra `SeedProvider` is used to enable Cassandra to dynamically discover
new Cassandra nodes as they join the cluster.
This example also uses some of the core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/user-guide/pods.md)
- [ _Services_](https://kubernetes.io/docs/user-guide/services.md)
- [_Replication Controllers_](https://kubernetes.io/docs/user-guide/replication-controller.md)
- [_Stateful Sets_](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
- [_Daemon Sets_](https://kubernetes.io/docs/admin/daemons.md)
## Prerequisites
This example assumes that you have a Kubernetes version >=1.2 cluster installed and running,
and that you have installed the [`kubectl`](https://kubernetes.io/docs/user-guide/kubectl/kubectl.md)
command line tool somewhere in your path. Please see the
[getting started guides](https://kubernetes.io/docs/getting-started-guides/)
for installation instructions for your platform.
This example also has a few code and configuration files needed. To avoid
typing these out, you can `git clone` the Kubernetes repository to your local
computer.
## Cassandra Docker
The pods use the [```gcr.io/google-samples/cassandra:v12```](image/Dockerfile)
image from Google's [container registry](https://cloud.google.com/container-registry/docs/).
The Docker image is based on `debian:jessie` and includes OpenJDK 8. This image
includes a standard Cassandra installation from the Apache Debian repo. Through the use of environment variables you are able to change values that are inserted into `cassandra.yaml`.
| ENV VAR | DEFAULT VALUE |
| ------------- |:-------------: |
| CASSANDRA_CLUSTER_NAME | 'Test Cluster' |
| CASSANDRA_NUM_TOKENS | 32 |
| CASSANDRA_RPC_ADDRESS | 0.0.0.0 |
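Once a pod from this example is running, you can see which of these variables were explicitly set in its pod spec (a quick sketch; substitute one of your Cassandra pod names, and note that defaults baked into the image will not show up here):

```console
$ kubectl exec cassandra-0 -- env | grep CASSANDRA
```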
## Quickstart
If you want to jump straight to the commands we will run,
here are the steps:
```sh
#
# StatefulSet
#
# create a service to track all cassandra statefulset nodes
kubectl create -f examples/storage/cassandra/cassandra-service.yaml
# create a statefulset
kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-0 -- nodetool status
# cleanup
grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
&& kubectl delete statefulset,po -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
#
# Resource Controller Example
#
# create a replication controller to replicate cassandra nodes
kubectl create -f examples/storage/cassandra/cassandra-controller.yaml
# validate the Cassandra cluster. Substitute the name of one of your pods.
kubectl exec -ti cassandra-xxxxx -- nodetool status
# scale up the Cassandra cluster
kubectl scale rc cassandra --replicas=4
# delete the replication controller
kubectl delete rc cassandra
#
# Create a DaemonSet to place a cassandra node on each kubernetes node
#
kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
# resource cleanup
kubectl delete service -l app=cassandra
kubectl delete daemonset cassandra
```
## Step 1: Create a Cassandra Headless Service
A Kubernetes _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of
[_Pods_](https://kubernetes.io/docs/user-guide/pods.md) that perform the same task. In
Kubernetes, the atomic unit of an application is a Pod: one or more containers
that _must_ be scheduled onto the same host.
The Service is used for DNS lookups between Cassandra Pods, and Cassandra clients
within the Kubernetes Cluster.
Here is the service description:
<!-- BEGIN MUNGE: EXAMPLE cassandra-service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
clusterIP: None
ports:
- port: 9042
selector:
app: cassandra
```
[Download example](cassandra-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-service.yaml -->
Create the service for the StatefulSet:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-service.yaml
```
The following command shows if the service has been created.
```console
$ kubectl get svc cassandra
```
The response should be like:
```console
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra None <none> 9042/TCP 45s
```
If an error is returned, the service creation failed.
## Step 2: Use a StatefulSet to create Cassandra Ring
StatefulSets (previously PetSets) are a feature that was upgraded to a *Beta* component in
Kubernetes 1.5. Deploying stateful distributed applications, like Cassandra, within a clustered
environment can be challenging. We implemented StatefulSet to greatly simplify this
process. Multiple StatefulSet features are used within this example, but their details are out of the
scope of this documentation. [Please refer to the StatefulSet documentation.](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/)
The StatefulSet manifest included below creates a Cassandra ring that consists
of three pods.
This example uses a GCE Storage Class; please update it appropriately for
the cloud you are working with.
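To see which storage classes are already defined in your cluster before editing the manifest (a quick check):

```console
$ kubectl get storageclass
```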
<!-- BEGIN MUNGE: EXAMPLE cassandra-statefulset.yaml -->
```yaml
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
name: cassandra
spec:
serviceName: cassandra
replicas: 3
template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: gcr.io/google-samples/cassandra:v12
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: CASSANDRA_AUTO_BOOTSTRAP
value: "false"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: cassandra-data
mountPath: /cassandra_data
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
# do not use these in production until ssd GCEPersistentDisk or other ssd pd
volumeClaimTemplates:
- metadata:
name: cassandra-data
annotations:
volume.beta.kubernetes.io/storage-class: fast
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
```
[Download example](cassandra-statefulset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-statefulset.yaml -->
Create the Cassandra StatefulSet as follows:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-statefulset.yaml
```
## Step 3: Validate and Modify The Cassandra StatefulSet
Deploying this StatefulSet shows off two of the new features that StatefulSets provide.
1. The pod names are known
2. The pods deploy in incremental order
First validate that the StatefulSet has deployed by running the `kubectl` command below.
```console
$ kubectl get statefulset cassandra
```
The command should respond like:
```console
NAME DESIRED CURRENT AGE
cassandra 3 3 13s
```
Next watch the Cassandra pods deploy, one after another. The StatefulSet resource
deploys pods in an ordered fashion: 0, 1, 2, and so on. If you execute the following
command before all the pods have deployed, you are able to see the ordered creation.
```console
$ kubectl get pods -l="app=cassandra"
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
```
The above example shows two of the three pods in the Cassandra StatefulSet deployed.
Once all of the pods are deployed the same command will respond with the full
StatefulSet.
```console
$ kubectl get pods -l="app=cassandra"
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 10m
cassandra-1 1/1 Running 0 9m
cassandra-2 1/1 Running 0 8m
```
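If you want to watch this ordered creation happen live next time, you can add the watch flag (a small sketch):

```console
$ kubectl get pods -l="app=cassandra" -w
```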
Running the Cassandra utility `nodetool` will display the status of the ring.
```console
$ kubectl exec cassandra-0 -- nodetool status
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.4.2.4 65.26 KiB 32 63.7% a9d27f81-6783-461d-8583-87de2589133e Rack1-K8Demo
UN 10.4.0.4 102.04 KiB 32 66.7% 5559a58c-8b03-47ad-bc32-c621708dc2e4 Rack1-K8Demo
UN 10.4.1.4 83.06 KiB 32 69.6% 9dce943c-581d-4c0e-9543-f519969cc805 Rack1-K8Demo
```
You can also run `cqlsh` to describe the keyspaces in the cluster.
```console
$ kubectl exec cassandra-0 -- cqlsh -e 'desc keyspaces'
system_traces system_schema system_auth system system_distributed
```
In order to increase or decrease the size of the Cassandra StatefulSet, you must use
`kubectl edit`. You can find more information about the edit command in the [documentation](https://kubernetes.io/docs/user-guide/kubectl/kubectl_edit.md).
Use the following command to edit the StatefulSet.
```console
$ kubectl edit statefulset cassandra
```
This will open an editor in your terminal. The line you are looking to change is
`replicas`. The example does not contain the entire contents of the terminal window, and
the last line of the example below is the replicas line that you want to change.
```console
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "323"
selfLink: /apis/apps/v1beta1/namespaces/default/statefulsets/cassandra
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
```
Modify the manifest to the following, and save the manifest.
```console
spec:
replicas: 4
```
The StatefulSet will now contain four pods.
```console
$ kubectl get statefulset cassandra
```
The command should respond like:
```console
NAME DESIRED CURRENT AGE
cassandra 4 4 36m
```
For the Kubernetes 1.5 release, the beta StatefulSet resource does not have `kubectl scale`
functionality, like a Deployment, ReplicaSet, Replication Controller, or Job.
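As a non-interactive alternative to `kubectl edit` (a sketch; the edit flow above is the route this example documents), you can patch the replica count directly:

```console
$ kubectl patch statefulset cassandra -p '{"spec":{"replicas":4}}'
```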
## Step 4: Delete Cassandra StatefulSet
Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure safety first: your data is more valuable than an automatic purge of all related StatefulSet resources. Deleting the Persistent Volume Claims may result in a deletion of the associated volumes, depending on the storage class and reclaim policy. You should never assume that you will be able to access a volume after its claim has been deleted.
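You can list the claims that the StatefulSet created, and that deleting the StatefulSet alone would leave behind (a quick check):

```console
$ kubectl get pvc -l app=cassandra
```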
Use the following commands to delete the StatefulSet.
```console
$ grace=$(kubectl get po cassandra-0 --template '{{.spec.terminationGracePeriodSeconds}}') \
&& kubectl delete statefulset -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
```
## Step 5: Use a Replication Controller to create Cassandra node pods
A Kubernetes
_[Replication Controller](https://kubernetes.io/docs/user-guide/replication-controller.md)_
is responsible for replicating sets of identical pods. Like a
Service, it has a selector query which identifies the members of its set.
Unlike a Service, it also has a desired number of replicas, and it will create
or delete Pods to ensure that the number of Pods matches up with its
desired state.
The Replication Controller, in conjunction with the Service we just defined,
will let us easily build a replicated, scalable Cassandra cluster.
Let's create a replication controller with two initial replicas.
<!-- BEGIN MUNGE: EXAMPLE cassandra-controller.yaml -->
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: cassandra
# The labels will be applied automatically
# from the labels in the pod template, if not set
# labels:
# app: cassandra
spec:
replicas: 2
# The selector will be applied automatically
# from the labels in the pod template, if not set.
# selector:
# app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
containers:
- command:
- /run.sh
resources:
limits:
cpu: 0.5
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEED_PROVIDER
value: "io.k8s.cassandra.KubernetesSeedProvider"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: gcr.io/google-samples/cassandra:v12
name: cassandra
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
volumeMounts:
- mountPath: /cassandra_data
name: data
volumes:
- name: data
emptyDir: {}
```
[Download example](cassandra-controller.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-controller.yaml -->
There are a few things to note in this description.
The `selector` attribute contains the controller's selector query. It can be
explicitly specified, or applied automatically from the labels in the pod
template if not set, as is done here.
The pod template's label, `app: cassandra`, matches the Service selector
from Step 1. This is how pods created by this replication controller are picked up
by the Service.
The `replicas` attribute specifies the desired number of replicas, in this
case 2 initially. We'll scale up to more shortly.
Create the Replication Controller:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-controller.yaml
```
You can list the new controller:
```console
$ kubectl get rc -o wide
NAME DESIRED CURRENT AGE CONTAINER(S) IMAGE(S) SELECTOR
cassandra 2 2 11s cassandra gcr.io/google-samples/cassandra:v12 app=cassandra
```
Now if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see two Cassandra pods. (The `wide` argument lets
you see which Kubernetes nodes the pods were scheduled onto.)
```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME READY STATUS RESTARTS AGE NODE
cassandra-21qyy 1/1 Running 0 1m kubernetes-minion-b286
cassandra-q6sz7 1/1 Running 0 1m kubernetes-minion-9ye5
```
Because these pods have the label `app=cassandra`, they map to the service we
defined in Step 1.
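You can confirm that the labels on the pods match the Service selector (a quick sketch):

```console
$ kubectl get pods -l app=cassandra --show-labels
```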
You can check that the Pods are visible to the Service using the following service endpoints query:
```console
$ kubectl get endpoints cassandra -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2015-06-21T22:34:12Z
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "944373"
selfLink: /api/v1/namespaces/default/endpoints/cassandra
uid: a3d6c25f-1865-11e5-a34e-42010af01bcc
subsets:
- addresses:
- ip: 10.244.3.15
targetRef:
kind: Pod
name: cassandra
namespace: default
resourceVersion: "944372"
uid: 9ef9895d-1865-11e5-a34e-42010af01bcc
ports:
- port: 9042
protocol: TCP
```
To show that the `SeedProvider` logic is working as intended, you can use the
`nodetool` command to examine the status of the Cassandra cluster. To do this,
use the `kubectl exec` command, which lets you run `nodetool` in one of your
Cassandra pods. Again, substitute `cassandra-xxxxx` with the actual name of one
of your pods.
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2eeb0ad08b77 rack1
UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1
```
## Step 6: Scale up the Cassandra cluster
Now let's scale our Cassandra cluster to 4 pods. We do this by telling the
Replication Controller that we now want 4 replicas.
```sh
$ kubectl scale rc cassandra --replicas=4
```
You can see the new pods listed:
```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME READY STATUS RESTARTS AGE NODE
cassandra-21qyy 1/1 Running 0 6m kubernetes-minion-b286
cassandra-81m2l 1/1 Running 0 47s kubernetes-minion-b286
cassandra-8qoyp 1/1 Running 0 47s kubernetes-minion-9ye5
cassandra-q6sz7 1/1 Running 0 6m kubernetes-minion-9ye5
```
In a few moments, you can examine the Cassandra cluster status again, and see
that the new pods have been detected by the custom `SeedProvider`:
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.244.0.6 51.67 KB 256 48.9% d07b23a5-56a1-4b0b-952d-68ab95869163 rack1
UN 10.244.1.5 84.71 KB 256 50.7% e060df1f-faa2-470c-923d-ca049b0f3f38 rack1
UN 10.244.1.6 84.71 KB 256 47.0% 83ca1580-4f3c-4ec5-9b38-75036b7a297f rack1
UN 10.244.0.5 68.2 KB 256 53.4% 72ca27e2-c72c-402a-9313-1e4b61c2f839 rack1
```
## Step 7: Delete the Replication Controller
Before you start Step 8, __delete the replication controller__ you created above:
```sh
$ kubectl delete rc cassandra
```
## Step 8: Use a DaemonSet instead of a Replication Controller
In Kubernetes, a [_Daemon Set_](https://kubernetes.io/docs/admin/daemons.md) can distribute pods
onto Kubernetes nodes, one-to-one. Like a _ReplicationController_, it has a
selector query which identifies the members of its set. Unlike a
_ReplicationController_, it has a node selector to limit which nodes are
scheduled with the templated pods, and replicates not based on a set target
number of pods, but rather assigns a single pod to each targeted node.
An example use case: when deploying to the cloud, the expectation is that
instances are ephemeral and might die at any time. Cassandra is built to
replicate data across the cluster to facilitate data redundancy, so that in the
case that an instance dies, the data stored on the instance does not, and the
cluster can react by re-replicating the data to other running nodes.
`DaemonSet` is designed to place a single pod on each node in the Kubernetes
cluster. That will give us data redundancy. Let's create a
DaemonSet to start our storage cluster:
<!-- BEGIN MUNGE: EXAMPLE cassandra-daemonset.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
name: cassandra
name: cassandra
spec:
template:
metadata:
labels:
app: cassandra
spec:
# Filter to specific nodes:
# nodeSelector:
# app: cassandra
containers:
- command:
- /run.sh
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEED_PROVIDER
value: "io.k8s.cassandra.KubernetesSeedProvider"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: gcr.io/google-samples/cassandra:v12
name: cassandra
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
# If you need it it is going away in C* 4.0
#- containerPort: 9160
# name: thrift
resources:
requests:
cpu: 0.5
volumeMounts:
- mountPath: /cassandra_data
name: data
volumes:
- name: data
emptyDir: {}
```
[Download example](cassandra-daemonset.yaml?raw=true)
<!-- END MUNGE: EXAMPLE cassandra-daemonset.yaml -->
Most of this DaemonSet definition is identical to the ReplicationController
definition above; it simply gives the daemon set a recipe to use when it creates
new Cassandra pods, and targets all Cassandra nodes in the cluster.
Differentiating aspects are the `nodeSelector` attribute, which allows the
DaemonSet to target a specific subset of nodes (you can label nodes just like
other resources), and the lack of a `replicas` attribute due to the 1-to-1 node-
pod relationship.
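For example, if you wanted to restrict the DaemonSet to a labeled subset of nodes via the commented `nodeSelector` stanza above, you could label those nodes first (a sketch; substitute a real node name from `kubectl get nodes`):

```console
$ kubectl label nodes <node-name> app=cassandra
```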
Create this DaemonSet:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml
```
You may need to disable config file validation, like so:
```console
$ kubectl create -f examples/storage/cassandra/cassandra-daemonset.yaml --validate=false
```
You can see the DaemonSet running:
```console
$ kubectl get daemonset
NAME DESIRED CURRENT NODE-SELECTOR
cassandra 3 3 <none>
```
Now, if you list the pods in your cluster, and filter to the label
`app=cassandra`, you should see one (and only one) new cassandra pod for each
node in your network.
```console
$ kubectl get pods -l="app=cassandra" -o wide
NAME READY STATUS RESTARTS AGE NODE
cassandra-ico4r 1/1 Running 0 4s kubernetes-minion-rpo1
cassandra-kitfh 1/1 Running 0 1s kubernetes-minion-9ye5
cassandra-tzw89 1/1 Running 0 2s kubernetes-minion-b286
```
To prove that this all worked as intended, you can again use the `nodetool`
command to examine the status of the cluster. To do this, use the `kubectl
exec` command to run `nodetool` in one of your newly-launched cassandra pods.
```console
$ kubectl exec -ti cassandra-xxxxx -- nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 10.244.0.5 74.09 KB 256 100.0% 86feda0f-f070-4a5b-bda1-2eeb0ad08b77 rack1
UN 10.244.4.2 32.45 KB 256 100.0% 0b1be71a-6ffb-4895-ac3e-b9791299c141 rack1
UN 10.244.3.3 51.28 KB 256 100.0% dafe3154-1d67-42e1-ac1d-78e7e80dce2b rack1
```
**Note**: This example had you delete the cassandra Replication Controller before
you created the DaemonSet. This is because, to keep this example simple, the
RC and the DaemonSet use the same `app=cassandra` label (so that their pods map to the
service we created, and so that the SeedProvider can identify them).
If we didn't delete the RC first, the two resources would conflict with
respect to how many pods they wanted to have running. If we wanted, we could support running
both together by using additional labels and selectors.
## Step 9: Resource Cleanup
When you are ready to take down your resources, do the following:
```console
$ kubectl delete service -l app=cassandra
$ kubectl delete daemonset cassandra
```
### Custom Seed Provider
A custom [`SeedProvider`](https://svn.apache.org/repos/asf/cassandra/trunk/src/java/org/apache/cassandra/locator/SeedProvider.java)
is included for running Cassandra on top of Kubernetes. You only need to use the custom seed provider
when you deploy Cassandra via a replication controller or a DaemonSet.
In Cassandra, a `SeedProvider` bootstraps the gossip protocol that Cassandra uses to find other
Cassandra nodes. Seed addresses are hosts deemed as contact points. Cassandra
instances use the seed list to find each other and learn the topology of the
ring. The [`KubernetesSeedProvider`](java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
discovers Cassandra seed IP addresses via the Kubernetes API; those Cassandra
instances are defined within the Cassandra Service.
Refer to the custom seed provider [README](java/README.md) for further
`KubernetesSeedProvider` configurations. For this example you should not need
to customize the Seed Provider configurations.
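To peek at the data the seed provider reads, here is a sketch that simply mirrors the endpoints query the provider makes against the Kubernetes API:

```console
$ kubectl get endpoints cassandra -o jsonpath='{.subsets[*].addresses[*].ip}'
```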
See the [image](image/) directory of this example for specifics on
how the container docker image was built and what it contains.
You may also note that we are setting some Cassandra parameters (`MAX_HEAP_SIZE`
and `HEAP_NEWSIZE`), and adding information about the
[namespace](https://kubernetes.io/docs/user-guide/namespaces.md).
We also tell Kubernetes that the container exposes
the `CQL` port. Finally, we tell the cluster
manager that we need 0.5 cpu (0.5 core).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/cassandra/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/cassandra/README.md](https://github.com/kubernetes/examples/blob/master/cassandra/README.md)


@ -1,34 +1 @@
# Cassandra on Kubernetes Custom Seed Provider: releases.k8s.io/HEAD
Within any deployment of Cassandra, a Seed Provider is used for node discovery and communication. When a Cassandra node first starts, it must contact a set of nodes, or seeds, to obtain information about the Cassandra nodes in the ring / rack / datacenter.
This Java project provides a custom Seed Provider which communicates with the Kubernetes API to discover the required information. This provider is bundled with the Docker image provided in this example.
# Configuring the Seed Provider
The following environment variables may be used to override the default configurations:
| ENV VAR | DEFAULT VALUE | NOTES |
| ------------- |:-------------: |:-------------:|
| KUBERNETES_PORT_443_TCP_ADDR | kubernetes.default.svc.cluster.local | The hostname of the API server |
| KUBERNETES_PORT_443_TCP_PORT | 443 | API port number |
| CASSANDRA_SERVICE | cassandra | Default service name for lookup |
| POD_NAMESPACE | default | Default pod service namespace |
| K8S_ACCOUNT_TOKEN | /var/run/secrets/kubernetes.io/serviceaccount/token | Default path to service token |
# Using
If no endpoints are discovered from the API, the seeds configured in the cassandra.yaml file are used.
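If discovery does not seem to be working, one quick sanity check (a sketch; substitute a running Cassandra pod name) is to confirm that the service account token the provider uses is mounted at the default path from the table above:

```console
$ kubectl exec cassandra-xxxxx -- ls /var/run/secrets/kubernetes.io/serviceaccount/
```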
# Provider limitations
This Cassandra Provider implements `SeedProvider` and utilizes `SimpleSnitch`. This limits a Cassandra ring to a single Cassandra datacenter and ignores rack setup. Datastax provides more documentation on the use of [_SNITCHES_](https://docs.datastax.com/en/cassandra/3.x/cassandra/architecture/archSnitchesAbout.html). Further development is planned to expand this capability.
This in effect makes every node a seed provider, which is not a recommended best practice. It increases maintenance and reduces gossip performance.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/cassandra/java/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/cassandra/java/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/cassandra/java/README.md)


@ -1,233 +1 @@
## Cloud Native Deployments of Hazelcast using Kubernetes
The following document describes the development of a _cloud native_ [Hazelcast](http://hazelcast.org/) deployment on Kubernetes. When we say _cloud native_ we mean an application which understands that it is running within a cluster manager, and uses this cluster management infrastructure to help implement the application. In particular, in this instance, a custom Hazelcast ```bootstrapper``` is used to enable Hazelcast to dynamically discover Hazelcast nodes that have already joined the cluster.
Any topology changes are communicated and handled by Hazelcast nodes themselves.
This document also attempts to describe the core components of Kubernetes: _Pods_, _Services_, and _Deployments_.
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the `kubectl` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
### A note for the impatient
This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.
### Sources
Source is freely available at:
* Hazelcast Discovery - https://github.com/pires/hazelcast-kubernetes-bootstrapper
* Dockerfile - https://github.com/pires/hazelcast-kubernetes
* Docker Trusted Build - https://quay.io/repository/pires/hazelcast-kubernetes
### Simple Single Pod Hazelcast Node
In Kubernetes, the atomic unit of an application is a [_Pod_](https://kubernetes.io/docs/user-guide/pods.md). A Pod is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
In this case, we shall not run a single Hazelcast pod, because the discovery mechanism now relies on a service definition.
### Adding a Hazelcast Service
In Kubernetes a _[Service](https://kubernetes.io/docs/user-guide/services.md)_ describes a set of Pods that perform the same task. For example, the set of nodes in a Hazelcast cluster. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods available via the Kubernetes API. This is actually how our discovery mechanism works, by relying on the service to discover other Hazelcast pods.
Here is the service description:
<!-- BEGIN MUNGE: EXAMPLE hazelcast-service.yaml -->
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
name: hazelcast
name: hazelcast
spec:
ports:
- port: 5701
selector:
name: hazelcast
```
[Download example](hazelcast-service.yaml?raw=true)
<!-- END MUNGE: EXAMPLE hazelcast-service.yaml -->
The important thing to note here is the `selector`. It is a query over labels, that identifies the set of _Pods_ contained by the _Service_. In this case the selector is `name: hazelcast`. If you look at the Replication Controller specification below, you'll see that the pod has the corresponding label, so it will be selected for membership in this Service.
Create this service as follows:
```sh
$ kubectl create -f examples/storage/hazelcast/hazelcast-service.yaml
```
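You can verify that the Service exists (its endpoints will stay empty until pods carrying the `name: hazelcast` label are running):

```sh
$ kubectl get svc hazelcast
```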
### Adding replicated nodes
The real power of Kubernetes and Hazelcast lies in easily building a replicated, resizable Hazelcast cluster.
In Kubernetes a [_Deployment_](https://kubernetes.io/docs/user-guide/deployments.md) is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
Deployments will "adopt" existing pods that match their selector query; since we did not create a standalone Hazelcast pod above, let's create a Deployment with a single replica to launch our first Hazelcast node.
<!-- BEGIN MUNGE: EXAMPLE hazelcast-controller.yaml -->
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hazelcast
labels:
name: hazelcast
spec:
template:
metadata:
labels:
name: hazelcast
spec:
containers:
- name: hazelcast
image: quay.io/pires/hazelcast-kubernetes:0.8.0
imagePullPolicy: Always
env:
- name: "DNS_DOMAIN"
value: "cluster.local"
ports:
- name: hazelcast
containerPort: 5701
```
[Download example](hazelcast-deployment.yaml?raw=true)
<!-- END MUNGE: EXAMPLE hazelcast-controller.yaml -->
You may note that we tell Kubernetes that the container exposes the `hazelcast` port.
The bulk of the Deployment config is actually identical to a standalone Hazelcast pod declaration; it simply gives the Deployment a recipe to use when creating new pods. The other parts are the `selector`, which contains the Deployment's selector query, and the `replicas` parameter, which specifies the desired number of replicas, in this case 1.
Last but not least, we set the `DNS_DOMAIN` environment variable according to your Kubernetes cluster's DNS configuration.
Create this Deployment:
```sh
$ kubectl create -f examples/storage/hazelcast/hazelcast-deployment.yaml
```
After the Deployment has successfully provisioned the pod, you can query the service endpoints:
```sh
$ kubectl get endpoints hazelcast -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2017-03-15T09:40:11Z
labels:
name: hazelcast
name: hazelcast
namespace: default
resourceVersion: "65060"
selfLink: /api/v1/namespaces/default/endpoints/hazelcast
uid: 62645b71-0963-11e7-b39c-080027985ce6
subsets:
- addresses:
- ip: 172.17.0.2
nodeName: minikube
targetRef:
kind: Pod
name: hazelcast-4195412960-mgqtk
namespace: default
resourceVersion: "65058"
uid: 7043708f-0963-11e7-b39c-080027985ce6
ports:
- port: 5701
protocol: TCP
```
You can see that the _Service_ has found the pod created by the Deployment.
Now it gets even more interesting. Let's scale our cluster to 2 pods:
```sh
$ kubectl scale deployment hazelcast --replicas 2
```
Now if you list the pods in your cluster, you should see two hazelcast pods:
```sh
$ kubectl get deployment,pods
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/hazelcast 2 2 2 2 2m
NAME READY STATUS RESTARTS AGE
po/hazelcast-4195412960-0tl3w 1/1 Running 0 7s
po/hazelcast-4195412960-mgqtk 1/1 Running 0 2m
```
To prove that this all works, you can use the `logs` command to examine the logs of one pod, for example:
```sh
kubectl logs -f hazelcast-4195412960-0tl3w
2017-03-15 09:42:45.046 INFO 7 --- [ main] com.github.pires.hazelcast.Application : Starting Application on hazelcast-4195412960-0tl3w with PID 7 (/bootstrapper.jar started by root in /)
2017-03-15 09:42:45.060 INFO 7 --- [ main] com.github.pires.hazelcast.Application : No active profile set, falling back to default profiles: default
2017-03-15 09:42:45.128 INFO 7 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@14514713: startup date [Wed Mar 15 09:42:45 GMT 2017]; root of context hierarchy
2017-03-15 09:42:45.989 INFO 7 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2017-03-15 09:42:46.001 INFO 7 --- [ main] c.g.p.h.HazelcastDiscoveryController : Asking k8s registry at https://kubernetes.default.svc.cluster.local..
2017-03-15 09:42:46.376 INFO 7 --- [ main] c.g.p.h.HazelcastDiscoveryController : Found 2 pods running Hazelcast.
2017-03-15 09:42:46.458 INFO 7 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.8] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [172.17.0.6, 172.17.0.2]
2017-03-15 09:42:46.458 INFO 7 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.8] Prefer IPv4 stack is true.
2017-03-15 09:42:46.464 INFO 7 --- [ main] c.h.instance.DefaultAddressPicker : [LOCAL] [someGroup] [3.8] Picked [172.17.0.6]:5701, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5701], bind any local is true
2017-03-15 09:42:46.484 INFO 7 --- [ main] com.hazelcast.system : [172.17.0.6]:5701 [someGroup] [3.8] Hazelcast 3.8 (20170217 - d7998b4) starting at [172.17.0.6]:5701
2017-03-15 09:42:46.484 INFO 7 --- [ main] com.hazelcast.system : [172.17.0.6]:5701 [someGroup] [3.8] Copyright (c) 2008-2017, Hazelcast, Inc. All Rights Reserved.
2017-03-15 09:42:46.485 INFO 7 --- [ main] com.hazelcast.system : [172.17.0.6]:5701 [someGroup] [3.8] Configured Hazelcast Serialization version : 1
2017-03-15 09:42:46.679 INFO 7 --- [ main] c.h.s.i.o.impl.BackpressureRegulator : [172.17.0.6]:5701 [someGroup] [3.8] Backpressure is disabled
2017-03-15 09:42:47.069 INFO 7 --- [ main] com.hazelcast.instance.Node : [172.17.0.6]:5701 [someGroup] [3.8] Creating TcpIpJoiner
2017-03-15 09:42:47.182 INFO 7 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.6]:5701 [someGroup] [3.8] Starting 2 partition threads
2017-03-15 09:42:47.189 INFO 7 --- [ main] c.h.s.i.o.impl.OperationExecutorImpl : [172.17.0.6]:5701 [someGroup] [3.8] Starting 3 generic threads (1 dedicated for priority tasks)
2017-03-15 09:42:47.197 INFO 7 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.6]:5701 [someGroup] [3.8] [172.17.0.6]:5701 is STARTING
2017-03-15 09:42:47.253 INFO 7 --- [cached.thread-3] c.hazelcast.nio.tcp.InitConnectionTask : [172.17.0.6]:5701 [someGroup] [3.8] Connecting to /172.17.0.2:5701, timeout: 0, bind-any: true
2017-03-15 09:42:47.262 INFO 7 --- [cached.thread-3] c.h.nio.tcp.TcpIpConnectionManager : [172.17.0.6]:5701 [someGroup] [3.8] Established socket connection between /172.17.0.6:58073 and /172.17.0.2:5701
2017-03-15 09:42:54.260 INFO 7 --- [ration.thread-0] com.hazelcast.system : [172.17.0.6]:5701 [someGroup] [3.8] Cluster version set to 3.8
2017-03-15 09:42:54.262 INFO 7 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [172.17.0.6]:5701 [someGroup] [3.8]
Members [2] {
Member [172.17.0.2]:5701 - 170f6924-7888-442a-9875-ad4d25659a8a
Member [172.17.0.6]:5701 - b1b82bfa-86c2-4931-af57-325c10c03b3b this
}
2017-03-15 09:42:56.285 INFO 7 --- [ main] com.hazelcast.core.LifecycleService : [172.17.0.6]:5701 [someGroup] [3.8] [172.17.0.6]:5701 is STARTED
2017-03-15 09:42:56.287 INFO 7 --- [ main] com.github.pires.hazelcast.Application : Started Application in 11.831 seconds (JVM running for 12.219)
```
Now let's scale our cluster to 4 nodes:
```sh
$ kubectl scale deployment hazelcast --replicas 4
```
Examine the status again by checking a node's logs and you should see the 4 members connected. Something like:
```
(...)
Members [4] {
Member [172.17.0.2]:5701 - 170f6924-7888-442a-9875-ad4d25659a8a
Member [172.17.0.6]:5701 - b1b82bfa-86c2-4931-af57-325c10c03b3b this
Member [172.17.0.9]:5701 - 0c7530d3-1b5a-4f40-bd59-7187e43c1110
Member [172.17.0.10]:5701 - ad5c3000-7fd0-4ce7-8194-e9b1c2ed6dda
}
```
### tl; dr;
For those of you who are impatient, here is the summary of the commands we ran in this tutorial.
```sh
kubectl create -f examples/storage/hazelcast/hazelcast-service.yaml
kubectl create -f examples/storage/hazelcast/hazelcast-deployment.yaml
kubectl scale deployment hazelcast --replicas 2
kubectl scale deployment hazelcast --replicas 4
```
### Hazelcast Discovery Source
See [here](https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/hazelcast/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/hazelcast/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/hazelcast/README.md)


@ -1,341 +1 @@
# Cloud Native Deployment of Minio using Kubernetes
## Table of Contents
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Minio Standalone Server Deployment](#minio-standalone-server-deployment)
- [Standalone Quickstart](#standalone-quickstart)
- [Step 1: Create Persistent Volume Claim](#step-1-create-persistent-volume-claim)
- [Step 2: Create Deployment](#step-2-create-minio-deployment)
- [Step 3: Create LoadBalancer Service](#step-3-create-minio-service)
- [Step 4: Resource cleanup](#step-4-resource-cleanup)
- [Minio Distributed Server Deployment](#minio-distributed-server-deployment)
- [Distributed Quickstart](#distributed-quickstart)
- [Step 1: Create Minio Headless Service](#step-1-create-minio-headless-service)
- [Step 2: Create Minio Statefulset](#step-2-create-minio-statefulset)
- [Step 3: Create LoadBalancer Service](#step-3-create-minio-service)
- [Step 4: Resource cleanup](#step-4-resource-cleanup)
## Introduction
Minio is an AWS S3 compatible, object storage server built for cloud applications and devops. Minio is _cloud native_, meaning Minio understands that it is running within a cluster manager, and uses the cluster management infrastructure for allocation of compute and storage resources.
## Prerequisites
This example assumes that you have a Kubernetes version >=1.4 cluster installed and running, and that you have installed the [`kubectl`](https://kubernetes.io/docs/tasks/kubectl/install/) command line tool in your path. Please see the
[getting started guides](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
## Minio Standalone Server Deployment
The following section describes the process to deploy standalone [Minio](https://minio.io/) server on Kubernetes. The deployment uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This section uses following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/user-guide/pods/)
- [_Services_](https://kubernetes.io/docs/user-guide/services/)
- [_Deployments_](https://kubernetes.io/docs/user-guide/deployments/)
- [_Persistent Volume Claims_](https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims)
### Standalone Quickstart
Run the below commands to get started quickly
```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-pvc.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-deployment.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-service.yaml?raw=true
```
### Step 1: Create Persistent Volume Claim
Minio needs persistent storage to store objects. Without persistent
storage, the data stored in the Minio instance lives in the container file system and will be wiped out as soon as the container restarts.
Create a persistent volume claim (PVC) to request storage for the Minio instance. Kubernetes looks for PVs matching the PVC request in the cluster and binds one to the PVC automatically.
This is the PVC description.
```sh
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
# This name uniquely identifies the PVC. Will be used in deployment below.
name: minio-pv-claim
annotations:
volume.alpha.kubernetes.io/storage-class: anything
labels:
app: minio-storage-claim
spec:
# Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
accessModes:
- ReadWriteOnce
resources:
# This is the request for storage. Should be available in the cluster.
requests:
storage: 10Gi
```
Create the PersistentVolumeClaim
```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-pvc.yaml?raw=true
persistentvolumeclaim "minio-pv-claim" created
```
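You can check that the claim was bound to a volume (a quick check; the STATUS column should read `Bound` once a matching PV has been provisioned):

```sh
kubectl get pvc minio-pv-claim
```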
### Step 2: Create Minio Deployment
A deployment encapsulates replica sets and pods, so if a pod goes down, the replica set makes sure another pod comes up automatically. This way you won't need to worry about pod failures and will have a stable Minio service available.
This is the deployment description.
```sh
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# This name uniquely identifies the Deployment
name: minio-deployment
spec:
strategy:
type: Recreate
template:
metadata:
labels:
# Label is used as selector in the service.
app: minio
spec:
# Refer to the PVC created earlier
volumes:
- name: storage
persistentVolumeClaim:
# Name of the PVC created earlier
claimName: minio-pv-claim
containers:
- name: minio
# Pulls the default Minio image from Docker Hub
image: minio/minio:latest
args:
- server
- /storage
env:
# Minio access key and secret key
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
ports:
- containerPort: 9000
hostPort: 9000
# Mount the volume into the pod
volumeMounts:
- name: storage # must match the volume name, above
mountPath: "/storage"
```
Create the Deployment
```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-deployment.yaml?raw=true
deployment "minio-deployment" created
```
### Step 3: Create Minio Service
Now that you have a Minio deployment running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are 3 major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
```
Create the Minio service
```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-standalone-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes a couple of minutes to launch. To check if the service was created successfully, run the command
```sh
kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
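As a rough sanity check (a sketch; substitute the EXTERNAL-IP reported for your service), the Minio server should answer HTTP requests on port 9000:

```sh
curl -I http://<EXTERNAL-IP>:9000
```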
### Step 4: Resource cleanup
Once you are done, cleanup the cluster using
```sh
kubectl delete deployment minio-deployment \
&& kubectl delete pvc minio-pv-claim \
&& kubectl delete svc minio-service
```
## Minio Distributed Server Deployment
The following document describes the process to deploy [distributed Minio](https://docs.minio.io/docs/distributed-minio-quickstart-guide) server on Kubernetes. This example uses the [official Minio Docker image](https://hub.docker.com/r/minio/minio/~/dockerfile/) from Docker Hub.
This example uses following core components of Kubernetes:
- [_Pods_](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
- [_Services_](https://kubernetes.io/docs/concepts/services-networking/service/)
- [_Statefulsets_](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/)
### Distributed Quickstart
Run the below commands to get started quickly
```sh
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-headless-service.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-statefulset.yaml?raw=true
kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-service.yaml?raw=true
```
### Step 1: Create Minio Headless Service
Headless Service controls the domain within which StatefulSets are created. The domain managed by this Service takes the form: `$(service name).$(namespace).svc.cluster.local` (where “cluster.local” is the cluster domain), and the pods in this domain take the form: `$(pod-name-{i}).$(service name).$(namespace).svc.cluster.local`. This is required to get a DNS resolvable URL for each of the pods created within the Statefulset.
This is the Headless service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio
labels:
app: minio
spec:
clusterIP: None
ports:
- port: 9000
name: minio
selector:
app: minio
```
Create the Headless Service
```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-headless-service.yaml?raw=true
service "minio" created
```
### Step 2: Create Minio Statefulset
A StatefulSet provides a deterministic name and a unique identity to each pod, making it easy to deploy stateful distributed applications. To launch distributed Minio you need to pass drive locations as parameters to the minio server command. Then, you'll need to run the same command on all the participating pods. StatefulSets offer a perfect way to handle this requirement.
This is the Statefulset description.
```sh
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: minio
spec:
serviceName: minio
replicas: 4
template:
metadata:
annotations:
pod.alpha.kubernetes.io/initialized: "true"
labels:
app: minio
spec:
containers:
- name: minio
env:
- name: MINIO_ACCESS_KEY
value: "minio"
- name: MINIO_SECRET_KEY
value: "minio123"
image: minio/minio:latest
args:
- server
- http://minio-0.minio.default.svc.cluster.local/data
- http://minio-1.minio.default.svc.cluster.local/data
- http://minio-2.minio.default.svc.cluster.local/data
- http://minio-3.minio.default.svc.cluster.local/data
ports:
- containerPort: 9000
hostPort: 9000
# These volume mounts are persistent. Each pod in the Statefulset
# gets a volume mounted based on this field.
volumeMounts:
- name: data
mountPath: /data
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
volumeClaimTemplates:
- metadata:
name: data
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
Create the Statefulset
```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-statefulset.yaml?raw=true
statefulset "minio" created
```
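Once the pods are running, each gets a stable DNS name of the form described in Step 1. As a quick sketch (this assumes the `default` namespace and that the `busybox` image can be pulled), you can resolve one of them from inside the cluster:

```sh
kubectl run -i --tty dns-test --image=busybox --restart=Never -- nslookup minio-0.minio.default.svc.cluster.local
```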
### Step 3: Create Minio Service
Now that you have a Minio statefulset running, you may either want to access it internally (within the cluster) or expose it as a Service onto an external (outside of your cluster, maybe public internet) IP address, depending on your use case. You can achieve this using Services. There are 3 major service types: the default type is ClusterIP, which exposes a service to connections from inside the cluster. NodePort and LoadBalancer are two types that expose services to external traffic.
In this example, we expose the Minio Deployment by creating a LoadBalancer service. This is the service description.
```sh
apiVersion: v1
kind: Service
metadata:
name: minio-service
spec:
type: LoadBalancer
ports:
- port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
```
Create the Minio service
```sh
$ kubectl create -f https://github.com/kubernetes/kubernetes/blob/master/examples/storage/minio/minio-distributed-service.yaml?raw=true
service "minio-service" created
```
The `LoadBalancer` service takes a couple of minutes to launch. To check if the service was created successfully, run the command
```sh
$ kubectl get svc minio-service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
minio-service 10.55.248.23 104.199.249.165 9000:31852/TCP 1m
```
### Step 4: Resource cleanup
You can cleanup the cluster using
```sh
kubectl delete statefulset minio \
&& kubectl delete svc minio \
&& kubectl delete svc minio-service
```
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/minio/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/minio/README.md)


@ -1,137 +1 @@
## Galera Replication for MySQL on Kubernetes
This document explains a simple demonstration example of running MySQL synchronous replication using Galera, specifically Percona XtraDB Cluster. The example is simplistic and uses a fixed number (3) of nodes, but the idea can be built upon and made more dynamic as Kubernetes matures.
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
Also, this example requires the image found in the ```image``` directory. For your convenience, it is built and available on Docker's public image repository as ```capttofu/percona_xtradb_cluster_5_6```. It can also be built locally, which would merely require updating the image referenced in the pod or replication controller files.
This example was tested on OS X with a Galera cluster running on VMWare using the fine repo developed by Paulo Pires [https://github.com/pires/kubernetes-vagrant-coreos-cluster] and client programs built for OS X.
### Basic concept
The basic idea is this: three replication controllers with a single pod, corresponding services, and a single overall service to connect to all three nodes. One of the important design goals of MySQL replication and/or clustering is that you don't want a single-point-of-failure, hence the need to distribute each node or slave across hosts or even geographical locations. Kubernetes is well-suited for facilitating this design pattern using the service and replication controller configuration files in this example.
By default, there are only three pods (hence replication controllers) for this cluster. This number can be increased using the variable NUM_NODES, specified in the replication controller configuration file. It's important to know that the number of nodes must always be odd.
When a replication controller is created, the corresponding container starts and runs an entrypoint script that installs the MySQL system tables, sets up users, and builds up a list of servers that is used with the Galera parameter ```wsrep_cluster_address```. This is the list of running nodes that Galera uses to elect a node to obtain SST (State Snapshot Transfer) from.
Note: Kubernetes best practice is to pre-create the services for each controller. The configuration files here contain both the service and the replication controller for each node, so creating one results in both a service and a replication controller running for that node. It is important that ```pxc-node1.yaml``` be processed first, and that no ```pxc-nodeN``` service exists without a corresponding replication controller. The reason is that if there is a node in ```wsrep_cluster_address``` without a backing Galera node, there will be nothing to obtain SST from, which will cause that node to shut itself down and the container in question to exit (and be relaunched soon after, repeatedly).
First, create the overall cluster service that will be used to connect to the cluster:
```kubectl create -f examples/storage/mysql-galera/pxc-cluster-service.yaml```
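That file defines, roughly, a single service in front of all the nodes. A hedged sketch follows (the selector and port are inferred from the ```kubectl get``` output shown further below; the repository's pxc-cluster-service.yaml is the authoritative definition):

```yaml
# Sketch only: the selector (unit=pxc-cluster) and port 3306 are inferred from
# the service listing later in this document, not copied from the repo file.
apiVersion: v1
kind: Service
metadata:
  name: pxc-cluster
  labels:
    unit: pxc-cluster
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    unit: pxc-cluster
```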
Create the service and replication controller for the first node:
```kubectl create -f examples/storage/mysql-galera/pxc-node1.yaml```
### Create services and controllers for the remaining nodes
Repeat the same previous steps for ```pxc-node2``` and ```pxc-node3```
When complete, you should be able to connect with a MySQL client to the IP address of the ```pxc-cluster``` service and find a working cluster.
### An example of creating a cluster
Shown below is an example of using ```kubectl``` from within the ```./examples/storage/mysql-galera``` directory to create the cluster; the status of the launched replication controllers and services can then be confirmed:
```
$ kubectl create -f examples/storage/mysql-galera/pxc-cluster-service.yaml
services/pxc-cluster
$ kubectl create -f examples/storage/mysql-galera/pxc-node1.yaml
services/pxc-node1
replicationcontrollers/pxc-node1
$ kubectl create -f examples/storage/mysql-galera/pxc-node2.yaml
services/pxc-node2
replicationcontrollers/pxc-node2
$ kubectl create -f examples/storage/mysql-galera/pxc-node3.yaml
services/pxc-node3
replicationcontrollers/pxc-node3
```
### Confirm a running cluster
Verify everything is running:
```
$ kubectl get rc,pods,services
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
pxc-node1 pxc-node1 capttofu/percona_xtradb_cluster_5_6:beta name=pxc-node1 1
pxc-node2 pxc-node2 capttofu/percona_xtradb_cluster_5_6:beta name=pxc-node2 1
pxc-node3 pxc-node3 capttofu/percona_xtradb_cluster_5_6:beta name=pxc-node3 1
NAME READY STATUS RESTARTS AGE
pxc-node1-h6fqr 1/1 Running 0 41m
pxc-node2-sfqm6 1/1 Running 0 41m
pxc-node3-017b3 1/1 Running 0 40m
NAME LABELS SELECTOR IP(S) PORT(S)
pxc-cluster <none> unit=pxc-cluster 10.100.179.58 3306/TCP
pxc-node1 <none> name=pxc-node1 10.100.217.202 3306/TCP
4444/TCP
4567/TCP
4568/TCP
pxc-node2 <none> name=pxc-node2 10.100.47.212 3306/TCP
4444/TCP
4567/TCP
4568/TCP
pxc-node3 <none> name=pxc-node3 10.100.200.14 3306/TCP
4444/TCP
4567/TCP
4568/TCP
```
The cluster should be ready for use!
### Connecting to the cluster
Using the ```pxc-cluster``` service name and ```kubectl exec```, it is possible to connect interactively to any of the pods with the mysql client on the pod's container and verify the cluster size, which should be ```3```. In the example below, the pxc-node3 replication controller is chosen, and ```kubectl get pods``` and ```awk``` are used to find the pod name:
```
$ kubectl get pods|grep pxc-node3|awk '{ print $1 }'
pxc-node3-0b5mc
$ kubectl exec pxc-node3-0b5mc -i -t -- mysql -u root -p -h pxc-cluster
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.6.24-72.2-56-log Percona XtraDB Cluster (GPL), Release rel72.2, Revision 43abf03, WSREP version 25.11, wsrep_25.11
Copyright (c) 2009-2015 Percona LLC and/or its affiliates
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
1 row in set (0.06 sec)
```
At this point, there is a working cluster that can begin being used via the pxc-cluster service IP address!
### TODO
This setup certainly can become more fluid and dynamic. One idea is to perhaps use an etcd container to store information about node state. Originally, there was a read-only kubernetes API available to each container but that has since been removed. Also, Kelsey Hightower is working on moving the functionality of confd to Kubernetes. This could replace the shell duct tape that builds the cluster configuration file for the image.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/mysql-galera/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/mysql-galera/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/mysql-galera/README.md)

View File

@@ -1,133 +1 @@
## Reliable, Scalable Redis on Kubernetes
The following document describes the deployment of a reliable, multi-node Redis installation on Kubernetes. It deploys a master with replicated slaves, as well as replicated redis sentinels, which are used for health checking and failover.
### Prerequisites
This example assumes that you have a Kubernetes cluster installed and running, and that you have installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting started](https://kubernetes.io/docs/getting-started-guides/) for installation instructions for your platform.
### A note for the impatient
This is a somewhat long tutorial. If you want to jump straight to the "do it now" commands, please see the [tl; dr](#tl-dr) at the end.
### Turning up an initial master/sentinel pod.
A [_Pod_](https://kubernetes.io/docs/user-guide/pods.md) is one or more containers that _must_ be scheduled onto the same host. All containers in a pod share a network namespace, and may optionally share mounted volumes.
We will use the shared network namespace to bootstrap our Redis cluster. In particular, the very first sentinel needs to know how to find the master (subsequent sentinels just ask the first sentinel). Because all containers in a Pod share a network namespace, the sentinel can simply look at ```$(hostname -i):6379```.
Here is the config for the initial master and sentinel pod: [redis-master.yaml](redis-master.yaml)
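For orientation, here is a minimal sketch of such a two-container pod. The image name and environment variables below are illustrative assumptions, not the exact repository manifest; [redis-master.yaml](redis-master.yaml) is authoritative.

```yaml
# Sketch only: one pod running both the bootstrap Redis master and the first
# sentinel, so the sentinel can reach the master at $(hostname -i):6379.
# Image and env names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: redis-master
  labels:
    name: redis
    redis-sentinel: "true"
    role: master
spec:
  containers:
  - name: master
    image: kubernetes/redis:v1      # assumed image name
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
  - name: sentinel
    image: kubernetes/redis:v1      # assumed image name
    env:
    - name: SENTINEL
      value: "true"
    ports:
    - containerPort: 26379
```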
Create this master as follows:
```sh
kubectl create -f examples/storage/redis/redis-master.yaml
```
### Turning up a sentinel service
In Kubernetes a [_Service_](https://kubernetes.io/docs/user-guide/services.md) describes a set of Pods that perform the same task. For example, the set of nodes in a Cassandra cluster, or even the single node we created above. An important use for a Service is to create a load balancer which distributes traffic across members of the set. But a _Service_ can also be used as a standing query which makes a dynamically changing set of Pods (or the single Pod we've already created) available via the Kubernetes API.
For Redis, we will use a Kubernetes Service to provide a discoverable endpoint for the Redis sentinels in the cluster. From the sentinels, Redis clients can find the master, the slaves, and other relevant info for the cluster. This enables new members to join the cluster when failures occur.
Here is the definition of the sentinel service: [redis-sentinel-service.yaml](redis-sentinel-service.yaml)
Create this service:
```sh
kubectl create -f examples/storage/redis/redis-sentinel-service.yaml
```
### Turning up replicated redis servers
So far, what we have done is pretty manual, and not very fault-tolerant. If the ```redis-master``` pod that we previously created is destroyed for some reason (e.g. a machine dying) our Redis service goes away with it.
In Kubernetes a [_Replication Controller_](https://kubernetes.io/docs/user-guide/replication-controller.md) is responsible for replicating sets of identical pods. Like a _Service_, it has a selector query which identifies the members of its set. Unlike a _Service_, it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Redis server. Here is the replication controller config: [redis-controller.yaml](redis-controller.yaml)
The bulk of this controller config is actually identical to the redis-master pod definition above. It forms the template or "cookie cutter" that defines what it means to be a member of this set.
Create this controller:
```sh
kubectl create -f examples/storage/redis/redis-controller.yaml
```
We'll do the same thing for the sentinel. Here is the controller config: [redis-sentinel-controller.yaml](redis-sentinel-controller.yaml)
We create it as follows:
```sh
kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml
```
### Scale our replicated pods
Creating those controllers didn't actually change anything at first: since we only asked for one sentinel and one redis server, and both already existed, nothing changed. Now we will add more replicas:
```sh
kubectl scale rc redis --replicas=3
```
```sh
kubectl scale rc redis-sentinel --replicas=3
```
This will create two additional replicas of the redis server and two additional replicas of the redis sentinel.
Unlike our original redis-master pod, these pods exist independently, and they use the ```redis-sentinel-service``` that we defined above to discover and join the cluster.
### Delete our manual pod
The final step in the cluster turn up is to delete the original redis-master pod that we created manually. While it was useful for bootstrapping discovery in the cluster, we really don't want the lifespan of our sentinel to be tied to the lifespan of one of our redis servers, and now that we have a successful, replicated redis sentinel service up and running, the binding is unnecessary.
Delete the master as follows:
```sh
kubectl delete pods redis-master
```
Now let's take a close look at what happens after this pod is deleted. There are three things that happen:
1. The redis replication controller notices that its desired state is 3 replicas, but there are currently only 2 replicas, and so it creates a new redis server to bring the replica count back up to 3
2. The redis-sentinel replication controller likewise notices the missing sentinel, and also creates a new sentinel.
3. The redis sentinels themselves realize that the master has disappeared from the cluster and begin the election procedure for selecting a new master. They perform this election and choose one of the existing redis server replicas to be the new master.
### Conclusion
At this point we now have a reliable, scalable Redis installation. By scaling the replication controller for redis servers, we can increase or decrease the number of read-slaves in our cluster. Likewise, if failures occur, the redis-sentinels will perform master election and select a new master.
**NOTE:** since redis 3.2 some security measures (bind to 127.0.0.1 and `--protected-mode`) are enabled by default. Please read about this in http://antirez.com/news/96
### tl; dr
For those of you who are impatient, here is the summary of commands we ran in this tutorial:
```
# Create a bootstrap master
kubectl create -f examples/storage/redis/redis-master.yaml
# Create a service to track the sentinels
kubectl create -f examples/storage/redis/redis-sentinel-service.yaml
# Create a replication controller for redis servers
kubectl create -f examples/storage/redis/redis-controller.yaml
# Create a replication controller for redis sentinels
kubectl create -f examples/storage/redis/redis-sentinel-controller.yaml
# Scale both replication controllers
kubectl scale rc redis --replicas=3
kubectl scale rc redis-sentinel --replicas=3
# Delete the original master pod
kubectl delete pods redis-master
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/redis/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/redis/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/redis/README.md)

View File

@@ -1,130 +1 @@
RethinkDB Cluster on Kubernetes
==============================
Setting up a [rethinkdb](http://rethinkdb.com/) cluster on [kubernetes](http://kubernetes.io)
**Features**
* Automatic cluster configuration by querying info from k8s
* Simple
Quick start
-----------
**Step 1**
RethinkDB will discover its peers using the endpoints provided by a Kubernetes service, so first create a service so that the following pod can query its endpoints:
```sh
$kubectl create -f examples/storage/rethinkdb/driver-service.yaml
```
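That service is essentially the following (a sketch; the selector and port are inferred from the `kubectl get services` output just below, and the repository's `driver-service.yaml` is authoritative):

```yaml
# Sketch of driver-service.yaml: a plain ClusterIP service whose endpoints the
# RethinkDB pods query to discover their peers.
apiVersion: v1
kind: Service
metadata:
  name: rethinkdb-driver
  labels:
    db: rethinkdb
spec:
  ports:
  - port: 28015
    targetPort: 28015
  selector:
    db: rethinkdb
```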
check out:
```sh
$kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
rethinkdb-driver 10.0.27.114 <none> 28015/TCP db=rethinkdb 10m
[...]
```
**Step 2**
start the first server in the cluster
```sh
$kubectl create -f examples/storage/rethinkdb/rc.yaml
```
Actually, you can start as many servers as you want at one time; just modify `replicas` in `rc.yaml`
check out again:
```sh
$kubectl get pods
NAME READY REASON RESTARTS AGE
[...]
rethinkdb-rc-r4tb0 1/1 Running 0 1m
```
**Done!**
---
Scale
-----
You can scale up your cluster using `kubectl scale`. The new pods will join the existing cluster automatically. For example:
```sh
$kubectl scale rc rethinkdb-rc --replicas=3
scaled
$kubectl get pods
NAME READY REASON RESTARTS AGE
[...]
rethinkdb-rc-f32c5 1/1 Running 0 1m
rethinkdb-rc-m4d50 1/1 Running 0 1m
rethinkdb-rc-r4tb0 1/1 Running 0 3m
```
Admin
-----
You need a separate pod (labeled as role:admin) to access the Web Admin UI:
```sh
kubectl create -f examples/storage/rethinkdb/admin-pod.yaml
kubectl create -f examples/storage/rethinkdb/admin-service.yaml
```
find the service
```console
$kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
[...]
rethinkdb-admin 10.0.131.19 104.197.19.120 8080/TCP db=rethinkdb,role=admin 10m
rethinkdb-driver 10.0.27.114 <none> 28015/TCP db=rethinkdb 20m
```
We request an external load balancer in the [admin-service.yaml](admin-service.yaml) file:
```
type: LoadBalancer
```
The external load balancer allows us to access the service from outside the firewall via an external IP, 104.197.19.120 in this case.
Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
```console
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
```
Now you can open a web browser and go to *http://104.197.19.120:8080* to manage your cluster.
**Why not just use replicated pods for the admin UI?**
Because kube-proxy acts as a load balancer and would send your traffic to a different server each time. Since the UI is not stateless, this would cause `Connection not open on server` errors when using the Web Admin UI.
- - -
**BTW**
* `gen_pod.sh` is used to generate pod templates for my local cluster. The generated pods use `nodeSelector` to force k8s to schedule containers onto my designated nodes, because I need to access persistent data in host directories. Note that the node must be labeled before `nodeSelector` can work; see this [tutorial](https://kubernetes.io/docs/user-guide/node-selection/)
* see [antmanler/rethinkdb-k8s](https://github.com/antmanler/rethinkdb-k8s) for details
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/rethinkdb/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/rethinkdb/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/rethinkdb/README.md)

View File

@@ -1,113 +1 @@
## Vitess Example
This example shows how to run a [Vitess](http://vitess.io) cluster in Kubernetes.
Vitess is a MySQL clustering system developed at YouTube that makes sharding
transparent to the application layer. It also makes scaling MySQL within
Kubernetes as simple as launching more pods.
The example brings up a database with 2 shards, and then runs a pool of
[sharded guestbook](https://github.com/youtube/vitess/tree/master/examples/kubernetes/guestbook)
pods. The guestbook app was ported from the original
[guestbook](../../../examples/guestbook-go/)
example found elsewhere in this tree, modified to use Vitess as the backend.
For a more detailed, step-by-step explanation of this example setup, see the
[Vitess on Kubernetes](http://vitess.io/getting-started/) guide.
### Prerequisites
You'll need to install [Go 1.4+](https://golang.org/doc/install) to build
`vtctlclient`, the command-line admin tool for Vitess.
We also assume you have a running Kubernetes cluster with `kubectl` pointing to
it by default. See the [Getting Started guides](https://kubernetes.io/docs/getting-started-guides/)
for how to get to that point. Note that your Kubernetes cluster needs to have
enough resources (CPU+RAM) to schedule all the pods. By default, this example
requires a cluster-wide total of at least 6 virtual CPUs and 10GiB RAM. You can
tune these requirements in the
[resource limits](https://kubernetes.io/docs/user-guide/compute-resources.md)
section of each YAML file.
Lastly, you need to open ports 30000-30001 (for the Vitess admin daemon) and 80 (for
the guestbook app) in your firewall. See the
[Services and Firewalls](https://kubernetes.io/docs/user-guide/services-firewalls.md)
guide for examples of how to do that.
### Configure site-local settings
Run the `configure.sh` script to generate a `config.sh` file, which will be used
to customize your cluster settings.
``` console
./configure.sh
```
Currently, we have out-of-the-box support for storing
[backups](http://vitess.io/user-guide/backup-and-restore.html) in
[Google Cloud Storage](https://cloud.google.com/storage/).
If you're using GCS, fill in the fields requested by the configure script.
Note that your Kubernetes cluster must be running on instances with the
`storage-rw` scope for this to work. With Container Engine, you can do this by
passing `--scopes storage-rw` to the `gcloud container clusters create` command.
For other platforms, you'll need to choose the `file` backup storage plugin,
and mount a read-write network volume into the `vttablet` and `vtctld` pods.
For example, you can mount any storage service accessible through NFS into a
Kubernetes volume. Then provide the mount path to the configure script here.
If you prefer to skip setting up a backup volume for the purpose of this example,
you can choose `file` mode and set the path to `/tmp`.
### Start Vitess
``` console
./vitess-up.sh
```
This will run through the steps to bring up Vitess. At the end, you should see
something like this:
``` console
****************************
* Complete!
* Use the following line to make an alias to kvtctl:
* alias kvtctl='$GOPATH/bin/vtctlclient -server 104.197.47.173:30001'
* See the vtctld UI at: http://104.197.47.173:30000
****************************
```
### Start the Guestbook app
``` console
./guestbook-up.sh
```
The guestbook service is configured with `type: LoadBalancer` to tell Kubernetes
to expose it on an external IP. It may take a minute to set up, but you should
soon see the external IP show up under the internal one like this:
``` console
$ kubectl get service guestbook
NAME LABELS SELECTOR IP(S) PORT(S)
guestbook <none> name=guestbook 10.67.253.173 80/TCP
104.197.151.132
```
Visit the external IP in your browser to view the guestbook. Note that in this
modified guestbook, there are multiple pages to demonstrate range-based sharding
in Vitess. Each page number is assigned to one of the shards using a
[consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) scheme.
### Tear down
``` console
./guestbook-down.sh
./vitess-down.sh
```
You may also want to remove any firewall rules you created.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storage/vitess/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storage/vitess/README.md](https://github.com/kubernetes/examples/blob/master/staging/storage/vitess/README.md)

View File

@@ -1,172 +1 @@
# Storm example
Following this example, you will create a functional [Apache
Storm](http://storm.apache.org/) cluster using Kubernetes and
[Docker](http://docker.io).
You will set up an [Apache ZooKeeper](http://zookeeper.apache.org/) service, a Storm master service (a.k.a. Nimbus server), and a set of Storm workers (a.k.a. supervisors).
For the impatient expert, jump straight to the [tl;dr](#tldr)
section.
### Sources
Source is freely available at:
* Docker image - https://github.com/mattf/docker-storm
* Docker Trusted Build - https://registry.hub.docker.com/search?q=mattf/storm
## Step Zero: Prerequisites
This example assumes you have a Kubernetes cluster installed and
running, and that you have installed the ```kubectl``` command line
tool somewhere in your path. Please see the [getting
started](https://kubernetes.io/docs/getting-started-guides/) for installation
instructions for your platform.
## Step One: Start your ZooKeeper service
ZooKeeper is a distributed coordination [service](https://kubernetes.io/docs/user-guide/services.md) that Storm uses as a
bootstrap and for state storage.
Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](https://kubernetes.io/docs/user-guide/pods.md) running
the ZooKeeper service.
```sh
$ kubectl create -f examples/storm/zookeeper.json
```
Then, use the [`examples/storm/zookeeper-service.json`](zookeeper-service.json) file to create a
logical service endpoint that Storm can use to access the ZooKeeper
pod.
```sh
$ kubectl create -f examples/storm/zookeeper-service.json
```
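Conceptually, that service looks like the following. This is a YAML sketch for brevity (the repository ships JSON), with the selector and port inferred from the `kubectl get services` output below:

```yaml
# Equivalent YAML sketch of zookeeper-service.json: a ClusterIP service that
# gives Storm a stable endpoint for the ZooKeeper pod.
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    name: zookeeper
```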
You should make sure the ZooKeeper pod is Running and accessible
before proceeding.
### Check to see if ZooKeeper is running
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
zookeeper 1/1 Running 0 43s
```
### Check to see if ZooKeeper is accessible
```console
$ kubectl get services
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
zookeeper 10.254.139.141 <none> 2181/TCP name=zookeeper 10m
kubernetes 10.0.0.2 <none> 443/TCP <none> 1d
$ echo ruok | nc 10.254.139.141 2181; echo
imok
```
## Step Two: Start your Nimbus service
The Nimbus service is the master (or head) service for a Storm
cluster. It depends on a functional ZooKeeper service.
Use the [`examples/storm/storm-nimbus.json`](storm-nimbus.json) file to create a pod running
the Nimbus service.
```sh
$ kubectl create -f examples/storm/storm-nimbus.json
```
Then, use the [`examples/storm/storm-nimbus-service.json`](storm-nimbus-service.json) file to
create a logical service endpoint that Storm workers can use to access
the Nimbus pod.
```sh
$ kubectl create -f examples/storm/storm-nimbus-service.json
```
Ensure that the Nimbus service is running and functional.
### Check to see if Nimbus is running and accessible
```sh
$ kubectl get services
NAME LABELS SELECTOR IP(S) PORT(S)
kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.2 443
zookeeper name=zookeeper name=zookeeper 10.254.139.141 2181
nimbus name=nimbus name=nimbus 10.254.115.208 6627
$ sudo docker run -it -w /opt/apache-storm mattf/storm-base sh -c '/configure.sh 10.254.139.141 10.254.115.208; ./bin/storm list'
...
No topologies running.
```
## Step Three: Start your Storm workers
The Storm workers (or supervisors) do the heavy lifting in a Storm
cluster. They run your stream processing topologies and are managed by
the Nimbus service.
The Storm workers need both the ZooKeeper and Nimbus services to be
running.
Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a
[replication controller](https://kubernetes.io/docs/user-guide/replication-controller.md) that manages the worker pods.
```sh
$ kubectl create -f examples/storm/storm-worker-controller.json
```
### Check to see if the workers are running
One way to check on the workers is to get information from the
ZooKeeper service about how many clients it has.
```sh
$ echo stat | nc 10.254.139.141 2181; echo
Zookeeper version: 3.4.6--1, built on 10/23/2014 14:18 GMT
Clients:
/192.168.48.0:44187[0](queued=0,recved=1,sent=0)
/192.168.45.0:39568[1](queued=0,recved=14072,sent=14072)
/192.168.86.1:57591[1](queued=0,recved=34,sent=34)
/192.168.8.0:50375[1](queued=0,recved=34,sent=34)
Latency min/avg/max: 0/2/2570
Received: 23199
Sent: 23198
Connections: 4
Outstanding: 0
Zxid: 0xa39
Mode: standalone
Node count: 13
```
There should be one client from the Nimbus service and one per
worker. Ideally, you should get ```stat``` output from ZooKeeper
before and after creating the replication controller.
(Pull requests welcome for alternative ways to validate the workers)
## tl;dr
```kubectl create -f zookeeper.json```
```kubectl create -f zookeeper-service.json```
Make sure the ZooKeeper Pod is running (use: ```kubectl get pods```).
```kubectl create -f storm-nimbus.json```
```kubectl create -f storm-nimbus-service.json```
Make sure the Nimbus Pod is running.
```kubectl create -f storm-worker-controller.json```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/storm/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/storm/README.md](https://github.com/kubernetes/examples/blob/master/staging/storm/README.md)

View File

@@ -1,27 +1 @@
[Sysdig Cloud](http://www.sysdig.com/) is a monitoring, alerting, and troubleshooting platform designed to natively support containerized and service-oriented applications.
Sysdig Cloud comes with built-in, first class support for Kubernetes. In order to instrument your Kubernetes environment with Sysdig Cloud, you simply need to install the Sysdig Cloud agent container on each underlying host in your Kubernetes cluster. Sysdig Cloud will automatically begin monitoring all of your hosts, apps, pods, and services, and will also automatically connect to the Kubernetes API to pull relevant metadata about your environment.
# Example Installation Files
Provided here are two example sysdig.yaml files that can be used to automatically deploy the Sysdig Cloud agent container across a Kubernetes cluster.
The recommended method is using daemon sets - minimum kubernetes version 1.1.1.
If daemon sets are not available, then the replication controller method can be used (based on [this hack](https://stackoverflow.com/questions/33377054/how-to-require-one-pod-per-minion-kublet-when-configuring-a-replication-controll/33381862#33381862 )).
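As a rough illustration of the daemon-set approach, a DaemonSet schedules exactly one agent pod on every node. The image name, access key, and mounts below are placeholders, not the contents of the maintained sysdig.yaml files linked below:

```yaml
# Illustrative sketch only; the maintained sysdig.yaml files are authoritative.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: sysdig-agent
spec:
  template:
    metadata:
      labels:
        app: sysdig-agent
    spec:
      containers:
      - name: sysdig-agent
        image: sysdig/agent            # placeholder image name
        securityContext:
          privileged: true             # the agent needs host-level visibility
        env:
        - name: ACCESS_KEY             # placeholder: your Sysdig Cloud access key
          value: "<your-access-key>"
        volumeMounts:
        - name: docker-sock
          mountPath: /host/var/run/docker.sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
```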
# Latest Files
See here for the latest maintained and updated versions of these example files:
https://github.com/draios/sysdig-cloud-scripts/tree/master/agent_deploy/kubernetes
# Install instructions
Please see the Sysdig Cloud support site for the latest documentation:
http://support.sysdigcloud.com/hc/en-us/sections/200959909
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/sysdig-cloud/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/sysdig-cloud/README.md](https://github.com/kubernetes/examples/blob/master/staging/sysdig-cloud/README.md)

View File

@@ -1,37 +1 @@
This is a simple web server pod which serves HTML from an AWS EBS
volume.
If you did not use the kube-up script, make sure that your nodes have the following IAM permissions ([Amazon IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role-console)):
```shell
ec2:AttachVolume
ec2:DetachVolume
ec2:DescribeInstances
ec2:DescribeVolumes
```
Create a volume in the same region as your node.
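The pod description references that volume through an `awsElasticBlockStore` stanza, roughly like this (a sketch; the volume ID and fsType are placeholders you replace with your own values):

```yaml
# Sketch of the volume section you edit in aws-ebs-web.yaml;
# replace volumeID with the EBS volume you created above.
volumes:
- name: html-volume
  awsElasticBlockStore:
    volumeID: aws://<availability-zone>/<volume-id>   # placeholder
    fsType: ext4
```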
Add your volume information in the pod description file aws-ebs-web.yaml then create the pod:
```shell
$ kubectl create -f examples/volumes/aws_ebs/aws-ebs-web.yaml
```
Add some data to the volume if it is empty:
```sh
$ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
```
You should now be able to query your web server:
```sh
$ curl <Pod IP address>
Hello World
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/aws_ebs/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/aws_ebs/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/aws_ebs/README.md)

View File

@@ -1,22 +1 @@
# How to Use it?
On an Azure VM, create a Pod using the volume spec based on [azure.yaml](azure.yaml).
In the pod, you need to provide the following information:
- *diskName*: (required) the name of the VHD blob object.
- *diskURI*: (required) the URI of the vhd blob object.
- *cachingMode*: (optional) disk caching mode. Must be one of None, ReadOnly, or ReadWrite. Default is None.
- *fsType*: (optional) the filesystem type to mount. Default is ext4.
- *readOnly*: (optional) whether the filesystem is used as readOnly. Default is false.
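Put together, the `azureDisk` volume section of the pod spec looks roughly like this (a sketch with placeholder values; [azure.yaml](azure.yaml) is the authoritative example):

```yaml
# Sketch of an azureDisk volume stanza; diskName and diskURI are placeholders.
volumes:
- name: azure
  azureDisk:
    diskName: test.vhd
    diskURI: https://<storage-account>.blob.core.windows.net/vhds/test.vhd
    cachingMode: ReadWrite   # optional, defaults to None
    fsType: ext4             # optional, defaults to ext4
    readOnly: false          # optional, defaults to false
```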
Launch the Pod:
```console
# kubectl create -f examples/volumes/azure_disk/azure.yaml
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/azure_disk/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md)

View File

@@ -1,35 +1 @@
# How to Use it?
Install *cifs-utils* on the Kubernetes host. For example, on Fedora-based Linux, run `yum -y install cifs-utils` as root.
Note, as explained in [Azure File Storage for Linux](https://azure.microsoft.com/en-us/documentation/articles/storage-how-to-use-files-linux/), the Linux hosts and the file share must be in the same Azure region.
Obtain a Microsoft Azure storage account and create a [secret](secret/azure-secret.yaml) that contains the base64-encoded Azure Storage account name and key. In the secret file, base64-encode the Azure Storage account name and pair it with the name *azurestorageaccountname*, and base64-encode the Azure Storage access key and pair it with the name *azurestorageaccountkey*.
Then create a Pod using the volume spec based on [azure](azure.yaml).
In the pod, you need to provide the following information:
- *secretName*: the name of the secret that contains both Azure storage account name and key.
- *shareName*: The share name to be used.
- *readOnly*: Whether the filesystem is used as readOnly.
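The `azureFile` volume stanza in the pod spec then references that secret, roughly like this (a sketch; the secret and share names are placeholders, and [azure.yaml](azure.yaml) is the authoritative example):

```yaml
# Sketch of an azureFile volume stanza; names are placeholders.
volumes:
- name: azure
  azureFile:
    secretName: azure-secret   # assumed name of the secret created below
    shareName: k8stest         # placeholder: your Azure File share name
    readOnly: false
```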
Create the secret:
```console
# kubectl create -f examples/volumes/azure_file/secret/azure-secret.yaml
```
You should see the account name and key from `kubectl get secret`
Then create the Pod:
```console
# kubectl create -f examples/volumes/azure_file/azure.yaml
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/azure_file/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md)

View File

@@ -1,38 +1 @@
# How to Use it?
Install Ceph on the Kubernetes host. For example, on Fedora 21, run `yum -y install ceph` as root.
If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes)
Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
Once you have installed Ceph and a Kubernetes cluster, you can create a pod based on my examples [cephfs.yaml](cephfs.yaml) and [cephfs-with-secret.yaml](cephfs-with-secret.yaml). In the pod yaml, you need to provide the following information.
- *monitors*: Array of Ceph monitors.
- *path*: Used as the mounted root, rather than the full Ceph tree. If not provided, default */* is used.
- *user*: The RADOS user name. If not provided, default *admin* is used.
- *secretFile*: The path to the keyring file. If not provided, default */etc/ceph/user.secret* is used.
- *secretRef*: Reference to the Ceph authentication secret. If provided, *secretRef* overrides *secretFile*.
- *readOnly*: Whether the filesystem is used as readOnly.
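Put together, a `cephfs` volume stanza looks roughly like this (a sketch with placeholder monitor addresses; [cephfs.yaml](cephfs.yaml) and [cephfs-with-secret.yaml](cephfs-with-secret.yaml) are the authoritative examples):

```yaml
# Sketch of a cephfs volume stanza; monitor IPs are placeholders.
volumes:
- name: cephfs
  cephfs:
    monitors:
    - 10.16.154.78:6789
    - 10.16.154.82:6789
    user: admin
    secretFile: /etc/ceph/user.secret   # or use secretRef to a Ceph secret instead
    readOnly: true
```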
Here are the commands:
```console
# kubectl create -f examples/volumes/cephfs/cephfs.yaml
# create a secret if you want to use Ceph secret instead of secret file
# kubectl create -f examples/volumes/cephfs/secret/ceph-secret.yaml
# kubectl create -f examples/volumes/cephfs/cephfs-with-secret.yaml
# kubectl get pods
```
If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/cephfs/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/cephfs/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/cephfs/README.md)

View File

@@ -1,27 +1 @@
This is a simple web server pod which serves HTML from a Cinder volume.
Create a volume in the same tenant and zone as your node.
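The pod description references that volume through a `cinder` stanza, roughly like this (a sketch; the volume ID is a placeholder you replace with your own):

```yaml
# Sketch of the volume section you edit in cinder-web.yaml;
# replace volumeID with the ID of the Cinder volume you created above.
volumes:
- name: html-volume
  cinder:
    volumeID: <volume-id>   # placeholder
    fsType: ext4
```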
Add your volume information in the pod description file cinder-web.yaml then create the pod:
```shell
$ kubectl create -f examples/volumes/cinder/cinder-web.yaml
```
Add some data to the volume if it is empty:
```sh
$ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/cinder/mounts/{Volume ID}/index.html
```
You should now be able to query your web server:
```sh
$ curl <Pod IP address>
Hello World
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/cinder/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/cinder/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/cinder/README.md)

View File

@@ -1,73 +1 @@
## Step 1. Setting up Fibre Channel Target
On your FC SAN Zone manager, allocate and mask LUNs so Kubernetes hosts can access them.
## Step 2. Creating the Pod with Fibre Channel persistent storage
Once you have installed a Fibre Channel initiator and Kubernetes, you can create a pod based on my example [fc.yaml](fc.yaml). In the pod spec, you need to provide *targetWWNs* (an array of the Fibre Channel target's World Wide Names), the *lun*, the type of the filesystem that has been created on the LUN, and the *readOnly* boolean.
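Those parameters end up in an `fc` volume stanza roughly like this (a sketch with placeholder WWNs; [fc.yaml](fc.yaml) is the authoritative example):

```yaml
# Sketch of an fc (Fibre Channel) volume stanza; WWNs and lun are placeholders.
volumes:
- name: fc-vol
  fc:
    targetWWNs:
    - "500a0982991b8dc5"
    lun: 2
    fsType: ext4
    readOnly: true
```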
Once your pod definition is created, run it on the Kubernetes master:
```console
kubectl create -f ./your_new_pod.json
```
Here is my command and output:
```console
# kubectl create -f examples/volumes/fibre_channel/fc.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
fcpd 2/2 Running 0 10m
```
On the Kubernetes host, I see these entries in the `mount` output:
```console
#mount |grep /var/lib/kubelet/plugins/kubernetes.io
/dev/mapper/360a98000324669436c2b45666c567946 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-2 type ext4 (ro,relatime,seclabel,stripe=16,data=ordered)
/dev/mapper/360a98000324669436c2b45666c567944 on /var/lib/kubelet/plugins/kubernetes.io/fc/500a0982991b8dc5-lun-1 type ext4 (rw,relatime,seclabel,stripe=16,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod.
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090ac457ddc2 kubernetes/pause "/pause" 12 minutes ago Up 12 minutes k8s_fcpd-rw.aae720ec_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_99eb5415
5e2629cf3e7b kubernetes/pause "/pause" 12 minutes ago Up 12 minutes k8s_fcpd-ro.857720dc_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_c0175742
2948683253f7 gcr.io/google_containers/pause:0.8.0 "/pause" 12 minutes ago Up 12 minutes k8s_POD.7be6d81d_fcpd_default_4024318f-4121-11e5-a294-e839352ddd54_8d9dd7bf
```
## Multipath
To leverage multiple paths for block storage, it is important to perform the
multipath configuration on the host.
If your distribution does not provide `/etc/multipath.conf`, then you can
either use the following minimalistic one:
defaults {
find_multipaths yes
user_friendly_names yes
}
or create a new one by running:
$ mpathconf --enable
Finally, you'll need to start (or reload) and enable multipathd:
$ systemctl enable multipathd.service
$ systemctl restart multipathd.service
**Note:** Any change to `multipath.conf` or enabling multipath can lead to
inaccessible block devices, because they'll be claimed by multipath and
exposed as a device in /dev/mapper/*.
Some additional information about multipath can be found in the [iSCSI documentation](../iscsi/README.md)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/fibre_channel/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/fibre_channel/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/fibre_channel/README.md)

View File

@@ -1 +1 @@
Please refer to https://github.com/kubernetes/community/tree/master/contributors/devel/flexvolume.md for documentation.
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/flexvolume/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/flexvolume/README.md)

View File

@@ -1,115 +1 @@
## Using Flocker volumes
[Flocker](https://clusterhq.com/flocker) is an open-source clustered container data volume manager. It provides management
and orchestration of data volumes backed by a variety of storage backends.
This example provides information about how to set up a Flocker installation and configure it in Kubernetes, as well as how to use the plugin to use Flocker datasets as volumes in Kubernetes.
### Prerequisites
A Flocker cluster is required to use Flocker with Kubernetes. A Flocker cluster comprises:
- *Flocker Control Service*: provides a REST over HTTP API to modify the desired configuration of the cluster;
- *Flocker Dataset Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration;
- *Flocker Container Agent(s)*: a convergence agent that modifies the cluster state to match the desired configuration (unused in this configuration but still required in the cluster).
The Flocker cluster can be installed on the same nodes you are using for Kubernetes. For instance, you can install the Flocker Control Service on the same node as Kubernetes Master and Flocker Dataset/Container Agents on every Kubernetes Slave node.
It is recommended to follow [Installing Flocker](https://docs.clusterhq.com/en/latest/install/index.html) and the instructions below to set up the Flocker cluster to be used with Kubernetes.
#### Flocker Control Service
The Flocker Control Service should be installed manually on a host. In the future, this may be deployed in pod(s) and exposed as a Kubernetes service.
#### Flocker Agent(s)
The Flocker Agents should be manually installed on *all* Kubernetes nodes. These agents are responsible for (de)attachment and (un)mounting and are therefore services that should be run with appropriate privileges on these hosts.
In order for the plugin to connect to Flocker (via its REST API), several environment variables must be specified on *all* Kubernetes nodes. These may be specified in an init script for the node's Kubelet service. For example, you could store the environment variables below in a file called `/etc/flocker/env` and place `EnvironmentFile=/etc/flocker/env` into `/etc/systemd/system/kubelet.service`, or wherever the `kubelet.service` file lives.
The environment variables that need to be set are:
- `FLOCKER_CONTROL_SERVICE_HOST` should refer to the hostname of the Control Service
- `FLOCKER_CONTROL_SERVICE_PORT` should refer to the port of the Control Service (the API service defaults to 4523 but this must still be specified)
The following environment variables should refer to keys and certificates on the host that are specific to that host.
- `FLOCKER_CONTROL_SERVICE_CA_FILE` should refer to the full path to the cluster certificate file
- `FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE` should refer to the full path to the [api key](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
- `FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE` should refer to the full path to the [api certificate](https://docs.clusterhq.com/en/latest/config/generate-api-plugin.html) file for the API user
More details regarding cluster authentication can be found at the documentation: [Flocker Cluster Security & Authentication](https://docs.clusterhq.com/en/latest/concepts/security.html) and [Configuring Cluster Authentication](https://docs.clusterhq.com/en/latest/config/configuring-authentication.html).
### Create a pod with a Flocker volume
**Note**: A new dataset must first be provisioned using the Flocker tools or Docker CLI *(To use the Docker CLI, you need the [Flocker plugin for Docker](https://clusterhq.com/docker-plugin/) installed along with Docker 1.9+)*. For example, using the [Volumes CLI](https://docs.clusterhq.com/en/latest/labs/volumes-cli.html), create a new dataset called 'my-flocker-vol' of size 10GB:
```sh
flocker-volumes create -m name=my-flocker-vol -s 10G -n <node-uuid>
# -n or --node= Is the initial primary node for dataset (any unique
# prefix of node uuid, see flocker-volumes list-nodes)
```
The following *volume* spec from the [example pod](flocker-pod.yml) illustrates how to use this Flocker dataset as a volume.
> Note: the [example pod](flocker-pod.yml) used here does not include a replication controller, so the pod will not be rescheduled upon failure. If you're looking for an example that does include a replication controller and service spec, you can use [this example pod including a replication controller](flocker-pod-with-rc.yml)
```yaml
volumes:
- name: www-root
flocker:
datasetName: my-flocker-vol
```
- **datasetName** is the unique name for the Flocker dataset and should match the *name* in the metadata.
Use `kubectl` to create the pod.
```sh
$ kubectl create -f examples/volumes/flocker/flocker-pod.yml
```
You should now verify that the pod is running and determine its IP address:
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
flocker 1/1 Running 0 3m
$ kubectl get pods flocker -t '{{.status.hostIP}}{{"\n"}}'
172.31.25.62
```
An `ls` of the `/flocker` directory on the host (identified by the IP as above) will show the mount point for the volume.
```sh
$ ls /flocker
0cf8789f-00da-4da0-976a-b6b1dc831159
```
You can also see the mountpoint by inspecting the docker container on that host.
```sh
$ docker inspect -f "{{.Mounts}}" <container-id> | grep flocker
...{ /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159 /usr/share/nginx/html true}
```
Add an index.html inside this directory and use `curl` to see this HTML file served up by nginx.
```sh
$ echo "<h1>Hello, World</h1>" | tee /flocker/0cf8789f-00da-4da0-976a-b6b1dc831159/index.html
$ curl ip
```
### More Info
Read more about the [Flocker Cluster Architecture](https://docs.clusterhq.com/en/latest/concepts/architecture.html) and learn more about Flocker by visiting the [Flocker Documentation](https://docs.clusterhq.com/).
#### Video Demo
To see a demo example of using Kubernetes and Flocker, visit [Flocker's blog post on High Availability with Kubernetes and Flocker](https://clusterhq.com/2015/12/22/ha-demo-kubernetes-flocker/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/flocker/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/flocker/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/flocker/README.md)

View File

@@ -1,117 +1 @@
## GlusterFS
[GlusterFS](http://www.gluster.org) is an open source scale-out filesystem. These examples provide information about how to allow containers to use GlusterFS volumes.
There are a couple of ways to use GlusterFS as a persistent data store in application pods:
* Static Provisioning of GlusterFS Volumes
* Dynamic Provisioning of GlusterFS Volumes
### Static Provisioning
Static provisioning of GlusterFS volumes is analogous to creating a PV (Persistent Volume) resource by specifying the parameters in it. This also needs a working GlusterFS cluster/trusted pool available to carve out GlusterFS volumes.
The example assumes that you have already set up a GlusterFS server cluster and have a working GlusterFS volume ready to use in the containers.
#### Prerequisites
* Set up a GlusterFS server cluster
* Create a GlusterFS volume
* If you are not using hyperkube, you may need to install the GlusterFS client package on the Kubernetes nodes ([Guide](http://gluster.readthedocs.io/en/latest/Administrator%20Guide/))
#### Create endpoints
The first step is to create the GlusterFS endpoints definition in Kubernetes. Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json):
```
"subsets": [
{
"addresses": [{ "ip": "10.240.106.152" }],
"ports": [{ "port": 1 }]
},
{
"addresses": [{ "ip": "10.240.79.157" }],
"ports": [{ "port": 1 }]
}
]
```
The `subsets` field should be populated with the addresses of the nodes in the GlusterFS cluster. It is fine to provide any valid value (from 1 to 65535) in the `port` field.
Create the endpoints:
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-endpoints.json
```
You can verify that the endpoints are successfully created by running
```sh
$ kubectl get endpoints
NAME ENDPOINTS
glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
```
We also need to create a service for these endpoints, so that they will persist. We will add this service without a selector to tell Kubernetes we want to add its endpoints manually. You can see [glusterfs-service.json](glusterfs-service.json) for details.
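Conceptually, that selector-less service looks like the following (shown as a YAML sketch for brevity; the repository file is JSON):

```yaml
# YAML sketch of glusterfs-service.json: a service with no selector, so the
# manually created glusterfs-cluster endpoints above are preserved.
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
spec:
  ports:
  - port: 1
```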
Use this command to create the service:
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-service.json
```
#### Create a Pod
The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration:
```json
"volumes": [
{
"name": "glusterfsvol",
"glusterfs": {
"endpoints": "glusterfs-cluster",
"path": "kube_vol",
"readOnly": true
}
}
]
```
The parameters are explained as follows.
- **endpoints** is the name of the Endpoints object that represents a Gluster cluster configuration. The *kubelet* is optimized to avoid a mount storm; it randomly picks one host from the endpoints to mount. If this host is unresponsive, the next Gluster host in the endpoints is automatically selected.
- **path** is the Glusterfs volume name.
- **readOnly** is the boolean that sets the mountpoint readOnly or readWrite.
Create a pod that has a container using the GlusterFS volume:
```sh
$ kubectl create -f examples/volumes/glusterfs/glusterfs-pod.json
```
You can verify that the pod is running:
```sh
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glusterfs 1/1 Running 0 3m
```
You may execute the command `mount` inside the container to see if the GlusterFS volume is mounted correctly:
```sh
$ kubectl exec glusterfs -- mount | grep gluster
10.240.106.152:kube_vol on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```
You may also run `docker ps` on the host to see the actual container.
### Dynamic Provisioning of GlusterFS Volumes
Dynamic provisioning means provisioning GlusterFS volumes based on a StorageClass. Please refer to [this guide](./../../persistent-volume-provisioning/README.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/glusterfs/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/glusterfs/README.md)

View File

@@ -1,136 +1 @@
## Introduction
The Kubernetes iSCSI implementation can connect to iSCSI devices via open-iscsi and multipathd on Linux.
Currently supported features are
* Connecting to one portal
* Mounting a device directly or via multipathd
* Formatting and partitioning any new device connected
* CHAP authentication
## Prerequisites
This example expects there to be a working iSCSI target to connect to.
If there isn't one in place, it is possible to set up a software version on Linux by following these guides:
* [Setup a iSCSI target on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi)
* [Install the iSCSI initiator on Fedora](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2)
* [Install multipathd for mpio support if required](http://www.linuxstories.eu/2014/07/how-to-setup-dm-multipath-on-rhel.html)
## Creating the pod with iSCSI persistent storage
Once you have configured the iSCSI initiator, you can create a pod based on the example *iscsi.yaml*. In the pod YAML, you need to provide *targetPortal* (the iSCSI target's **IP** address and *port* if not the default port 3260), the target's *iqn*, the *lun*, the type of the filesystem that has been created on the LUN, and the *readOnly* boolean. No initiator information is required. If you have more than one target portal for a single IQN, you can list the other portal IPs in the *portals* field.
If you want to use an iSCSI offload card or other open-iscsi transports besides tcp, setup an iSCSI interface and provide *iscsiInterface* in the pod YAML. The default name for an iscsi iface (open-iscsi parameter iface.iscsi\_ifacename) is in the format transport\_name.hwaddress when generated by iscsiadm. See [open-iscsi](http://www.open-iscsi.org/docs/README) or [openstack](http://docs.openstack.org/kilo/config-reference/content/iscsi-iface-config.html) for detailed configuration information.
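Put together, an `iscsi` volume stanza looks roughly like this (a sketch with placeholder portal and IQN values; [iscsi.yaml](iscsi.yaml) is the authoritative example):

```yaml
# Sketch of an iscsi volume stanza; targetPortal and iqn are placeholders.
volumes:
- name: iscsipd-rw
  iscsi:
    targetPortal: 10.0.2.15:3260
    iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz
    lun: 0
    fsType: ext4
    readOnly: false
```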
**Note:** If you have followed the instructions in the links above, you may have partitioned the device. The iSCSI volume plugin does not currently support partitions, so either format the device as one partition or leave the device raw; Kubernetes will partition and format it on first mount.
### CHAP Authentication
To enable one-way or two-way CHAP authentication for discovery or session, follow these steps:
* Set `chapAuthDiscovery` to `true` for discovery authentication.
* Set `chapAuthSession` to `true` for session authentication.
* Create a CHAP secret and set `secretRef` to reference the CHAP secret.
Example can be found at [iscsi-chap.yaml](iscsi-chap.yaml)
### CHAP Secret
As illustrated in [chap-secret.yaml](chap-secret.yaml), the secret must have type `kubernetes.io/iscsi-chap` and consists of the following keys:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
name: chap-secret
type: "kubernetes.io/iscsi-chap"
data:
discovery.sendtargets.auth.username:
discovery.sendtargets.auth.password:
discovery.sendtargets.auth.username_in:
discovery.sendtargets.auth.password_in:
node.session.auth.username:
node.session.auth.password:
node.session.auth.username_in:
node.session.auth.password_in:
```
These keys map to those used by Open-iSCSI initiator. Detailed documents on these keys can be found at [Open-iSCSI](https://github.com/open-iscsi/open-iscsi/blob/master/etc/iscsid.conf)
#### Create CHAP secret before creating iSCSI volumes and Pods
```console
# kubectl create -f examples/volumes/iscsi/chap-iscsi.yaml
```
Once the pod config is created, run it on the Kubernetes master:
```console
kubectl create -f ./your_new_pod.yaml
```
Here is the example pod created and expected output:
```console
# kubectl create -f examples/volumes/iscsi/iscsi.yaml
# kubectl get pods
NAME READY STATUS RESTARTS AGE
iscsipd 2/2 RUNNING 0 2m
```
On the Kubernetes node, verify the mount output
For a non mpio device the output should look like the following
```console
# mount |grep kub
/dev/sdb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/sdb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (ro,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.16:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/sdc on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
/dev/sdd on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.17:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/sdd on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```
And for a node with mpio enabled the expected output would be similar to the following
```console
# mount |grep kub
/dev/mapper/mpatha on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.15:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpatha on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro type ext4 (ro,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.16:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpathc on /var/lib/kubelet/plugins/kubernetes.io/iscsi/10.0.2.17:3260-iqn.2001-04.com.example:storage.kube.sys1.xyz-lun-0 type ext4 (rw,relatime,data=ordered)
/dev/mapper/mpathb on /var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw type ext4 (rw,relatime,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod.
```console
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3b8a772515d2 kubernetes/pause "/pause" 6 minutes ago Up 6 minutes k8s_iscsipd-rw.ed58ec4e_iscsipd_default_f527ca5b-6d87-11e5-aa7e-080027ff6387_d25592c5
```
Run *docker inspect* and verify that the containers mounted the host directory into their */mnt/iscsipd* directory.
```console
# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' f855336407f4
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-ro
# docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt/iscsipd" }}{{ .Source }}{{ end }}{{ end }}' 3b8a772515d2
/var/lib/kubelet/pods/f527ca5b-6d87-11e5-aa7e-080027ff6387/volumes/kubernetes.io~iscsi/iscsipd-rw
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/iscsi/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/iscsi/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/iscsi/README.md)

View File

@@ -1,166 +1 @@
# Outline
This example describes how to create a Web frontend server, an auto-provisioned persistent volume on GCE, and an NFS-backed persistent volume claim.
Demonstrated Kubernetes Concepts:
* [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) to
define persistent disks (disk lifecycle not tied to the Pods).
* [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to enable Pods to
locate one another.
![alt text][nfs pv example]
As illustrated above, two persistent volumes are used in this example:
- Web frontend Pod uses a persistent volume based on NFS server, and
- NFS server uses an auto provisioned [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) from GCE PD or AWS EBS.
Note, this example uses an NFS container that doesn't support NFSv4.
[nfs pv example]: nfs-pv.png
## Quickstart
```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
# get the cluster IP of the server using the following command
$ kubectl describe services nfs-server
# use the NFS server IP to update nfs-pv.yaml and execute the following
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
# run a fake backend
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
# get pod name from this command
$ kubectl get pod -l name=nfs-busybox
# use the pod name to check the test file
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
```
## Example of NFS based persistent volume
See [NFS Service and Replication Controller](nfs-web-rc.yaml) for a quick example of how to use an NFS
volume claim in a replication controller. It relies on the
[NFS persistent volume](nfs-pv.yaml) and
[NFS persistent volume claim](nfs-pvc.yaml) in this example as well.
## Complete setup
The example below shows how to export an NFS share from a single-pod replication
controller and import it into two replication controllers.
### NFS server part
Define [the NFS Service and Replication Controller](nfs-server-rc.yaml) and
[NFS service](nfs-server-service.yaml):
The NFS server exports an auto-provisioned persistent volume backed by a GCE PD:
```console
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
```
```console
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
```
The directory contains a dummy `index.html`. Wait until the pod is running
by checking `kubectl get pods -l role=nfs-server`.
### Create the NFS based persistent volume claim
The [NFS busybox controller](nfs-busybox-rc.yaml) uses a simple script to
generate data written to the NFS server we just started. First, you'll need to
find the cluster IP of the server:
```console
$ kubectl describe services nfs-server
```
Replace the invalid IP in the [NFS PV](nfs-pv.yaml) with the cluster IP, as sketched below. (In the future,
we'll be able to tie these together using the service names, but for
now, you have to hardcode the IP.)
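For orientation, the relevant part of the PV spec looks roughly like the sketch below; only the `nfs.server` value (shown here with a placeholder IP) and possibly the exported `path` need to change for your cluster:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # Replace with the cluster IP reported by `kubectl describe services nfs-server`.
    server: 10.0.0.100
    path: "/"
```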
Create the [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
and the persistent volume claim for your NFS server. The persistent volume and
claim give us an indirection that allows multiple pods to refer to the NFS
server by a symbolic name rather than the hardcoded server address.
```console
$ kubectl create -f examples/volumes/nfs/nfs-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-pvc.yaml
```
## Setup the fake backend
The [NFS busybox controller](nfs-busybox-rc.yaml) updates `index.html` on the
NFS server every 10 seconds. Let's start that now:
```console
$ kubectl create -f examples/volumes/nfs/nfs-busybox-rc.yaml
```
Conveniently, it's also a `busybox` pod, so we can get an early check
that our mounts are working now. Find a busybox pod and exec:
```console
$ kubectl get pod -l name=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 25m
nfs-busybox-w3s4t 1/1 Running 0 25m
$ kubectl exec nfs-busybox-jdhf3 -- cat /mnt/index.html
Thu Oct 22 19:20:18 UTC 2015
nfs-busybox-w3s4t
```
You should see output similar to the above if everything is working well. If
it's not, make sure you changed the invalid IP in the [NFS PV](nfs-pv.yaml) file
and make sure the `describe services` command above had endpoints listed
(indicating the service was associated with a running pod).
### Setup the web server
The [web server controller](nfs-web-rc.yaml) is another simple replication
controller that demonstrates reading from the NFS share exported above as an NFS
volume and runs a simple web server on it.
Define the pod:
```console
$ kubectl create -f examples/volumes/nfs/nfs-web-rc.yaml
```
This creates two pods, each of which serve the `index.html` from above. We can
then use a simple service to front it:
```console
kubectl create -f examples/volumes/nfs/nfs-web-service.yaml
```
We can then use the busybox container we launched before to check that `nginx`
is serving the data appropriately:
```console
$ kubectl get pod -l name=nfs-busybox
NAME READY STATUS RESTARTS AGE
nfs-busybox-jdhf3 1/1 Running 0 1h
nfs-busybox-w3s4t 1/1 Running 0 1h
$ kubectl get services nfs-web
NAME LABELS SELECTOR IP(S) PORT(S)
nfs-web <none> role=web-frontend 10.0.68.37 80/TCP
$ kubectl exec nfs-busybox-jdhf3 -- wget -qO- http://10.0.68.37
Thu Oct 22 19:28:55 UTC 2015
nfs-busybox-w3s4t
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/nfs/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md)

View File

@ -1,13 +1 @@
# NFS-exporter container with a file
This container exports /exports, with an index.html in it, via NFS. It is based on
../exports. Since some Linux kernels have issues running NFSv4 daemons in containers,
only NFSv3 is enabled in this container.
Available as `gcr.io/google-samples/nfs-server`
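For illustration only, a minimal pod that runs this image might look like the sketch below. The privileged security context and the nfs/mountd/rpcbind ports are assumptions based on how NFS servers are typically run in containers; the maintained spec is the replication controller in the parent example.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
spec:
  containers:
  - name: nfs-server
    image: gcr.io/google-samples/nfs-server
    ports:
    - name: nfs
      containerPort: 2049
    - name: mountd
      containerPort: 20048
    - name: rpcbind
      containerPort: 111
    securityContext:
      # NFS daemons generally need elevated privileges inside a container.
      privileged: true
```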
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/nfs/nfs-data/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-data/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-data/README.md)

View File

@ -1,370 +1 @@
# Portworx Volume
- [Portworx](#portworx)
- [Prerequisites](#prerequisites)
- [Examples](#examples)
- [Using Pre-provisioned Portworx Volumes](#pre-provisioned)
- [Running Pod](#running-pod)
- [Persistent Volumes](#persistent-volumes)
- [Using Dynamic Provisioning](#dynamic-provisioning)
- [Storage Class](#storage-class)
## Portworx
[Portworx](http://www.portworx.com) can be used as a storage provider for your Kubernetes cluster. Portworx pools your servers' capacity and turns your servers
or cloud instances into converged, highly available compute and storage nodes.
## Prerequisites
- A Portworx instance running on all of your Kubernetes nodes. More
information on how to install Portworx can be found [here](http://docs.portworx.com).
## Examples
The following examples assume that you already have a running Kubernetes cluster with Portworx installed on all nodes.
### Using Pre-provisioned Portworx Volumes
Create a volume using the Portworx CLI.
On one of the Kubernetes nodes with Portworx installed, run the following command:
```shell
/opt/pwx/bin/pxctl volume create <vol-id> --size <size> --fs <fs-type>
```
#### Running Pods
Create Pod which uses Portworx Volumes
Example spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-portworx-volume-pod
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-portworx-volume
name: test-volume
volumes:
- name: test-volume
# This Portworx volume must already exist.
portworxVolume:
volumeID: "<vol-id>"
fsType: "<fs-type>"
```
[Download example](portworx-volume-pod.yaml?raw=true)
Make sure to replace `<vol-id>` and `<fs-type>` in the above spec with
the ones that you used while creating the volume.
Create the Pod.
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pod.yaml
```
Verify that pod is running:
```bash
$ kubectl.sh get pods
NAME READY STATUS RESTARTS AGE
test-portworx-volume-pod 1/1 Running 0 16s
```
#### Persistent Volumes
1. Create Persistent Volume.
Example spec:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: <vol-id>
spec:
capacity:
storage: <size>Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
portworxVolume:
volumeID: "<vol-id>"
fsType: "<fs-type>"
```
Make sure to replace `<vol-id>`, `<size>` and `<fs-type>` in the above spec with
the ones that you used while creating the volume.
[Download example](portworx-volume-pv.yaml?raw=true)
Creating the persistent volume:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pv.yaml
```
Verifying persistent volume is created:
``` bash
$ kubectl describe pv pv0001
Name: pv0001
Labels: <none>
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: PortworxVolume (a Portworx Persistent Volume resource)
VolumeID: pv0001
FSType: ext4
No events.
```
2. Create Persistent Volume Claim.
Example spec:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc0001
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: <size>Gi
```
[Download example](portworx-volume-pvc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pvc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvc0001
Name: pvc0001
Namespace: default
Status: Bound
Volume: pv0001
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No events.
```
3. Create Pod which uses Persistent Volume Claim.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pvpod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- name: test-volume
mountPath: /test-portworx-volume
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: pvc0001
```
[Download example](portworx-volume-pvcpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pvcpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Using Dynamic Provisioning
With Dynamic Provisioning and Storage Classes you don't need to
create Portworx volumes out of band; they will be created automatically.
#### Storage Class
Using StorageClass objects, an admin can define the different classes of Portworx volumes
that are offered in a cluster. The following parameters can be used to define a Portworx
Storage Class (a sketch combining several of them follows the list):
* `fs`: filesystem to be laid out: none|xfs|ext4 (default: `ext4`)
* `block_size`: block size in Kbytes (default: `32`)
* `repl`: replication factor [1..3] (default: `1`)
* `io_priority`: IO Priority: [high|medium|low] (default: `low`)
* `snap_interval`: snapshot interval in minutes, 0 disables snaps (default: `0`)
* `aggregation_level`: specifies the number of replication sets the volume can be aggregated from (default: `1`)
* `ephemeral`: ephemeral storage [true|false] (default `false`)
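For instance, a class that asks for an xfs filesystem, a larger block size, and three-way replication could look like the following sketch (the class name and parameter values are illustrative, not taken from the shipped examples):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: portworx-repl3-xfs
provisioner: kubernetes.io/portworx-volume
parameters:
  fs: "xfs"
  block_size: "64"
  repl: "3"
```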
1. Create Storage Class.
See example:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: portworx-io-priority-high
provisioner: kubernetes.io/portworx-volume
parameters:
repl: "1"
snap_interval: "70"
io_priority: "high"
```
[Download example](portworx-volume-sc-high.yaml?raw=true)
Creating the storageclass:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-sc-high.yaml
```
Verifying storage class is created:
``` bash
$ kubectl describe storageclass portworx-io-priority-high
Name: portworx-io-priority-high
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/portworx-volume
Parameters: io_priority=high,repl=1,snapshot_interval=70
No events.
```
2. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvcsc001
annotations:
volume.beta.kubernetes.io/storage-class: portworx-io-priority-high
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
```
[Download example](portworx-volume-pvcsc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvcsc001
Name: pvcsc001
Namespace: default
StorageClass: portworx-io-priority-high
Status: Bound
Volume: pvc-e5578707-c626-11e6-baf6-08002729a32b
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No Events
```
The Persistent Volume is automatically created and bound to this PVC.
Verifying the persistent volume is created:
``` bash
$ kubectl describe pv pvc-e5578707-c626-11e6-baf6-08002729a32b
Name: pvc-e5578707-c626-11e6-baf6-08002729a32b
Labels: <none>
StorageClass: portworx-io-priority-high
Status: Bound
Claim: default/pvcsc001
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: PortworxVolume (a Portworx Persistent Volume resource)
VolumeID: 374093969022973811
No events.
```
3. Create Pod which uses Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pvpod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- name: test-volume
mountPath: /test-portworx-volume
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: pvcsc001
```
[Download example](portworx-volume-pvcscpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/portworx/portworx-volume-pvcscpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/portworx/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/portworx/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/portworx/README.md)

View File

@ -1,98 +1 @@
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Quobyte Volume](#quobyte-volume)
- [Quobyte](#quobyte)
- [Prerequisites](#prerequisites)
- [Fixed user Mounts](#fixed-user-mounts)
- [Creating a pod](#creating-a-pod)
<!-- END MUNGE: GENERATED_TOC -->
# Quobyte Volume
## Quobyte
[Quobyte](http://www.quobyte.com) is software that turns commodity servers into a reliable and highly automated multi-data center file system.
The example assumes that you already have a running Kubernetes cluster and have already set up the Quobyte client (1.3+) on each Kubernetes node.
### Prerequisites
- Running Quobyte storage cluster
- Quobyte client (1.3+) installed on the Kubernetes nodes; more information on how you can install Quobyte on your Kubernetes nodes can be found in the Quobyte [documentation](https://support.quobyte.com).
- To get access to Quobyte and the documentation please [contact us](http://www.quobyte.com/get-quobyte)
- Already created Quobyte Volume
- Added the line `allow-usermapping-in-volumename` in `/etc/quobyte/client.cfg` to allow the fixed user mounts
### Fixed user Mounts
Since version 1.3, Quobyte supports fixed-user mounts. Fixed-user mounts simply allow all Quobyte volumes to be mounted inside one directory and used as different users. All access to the Quobyte volume will be rewritten to the specified user and group (both are optional), independent of the user inside the container. You can read more about it [here](https://blog.inovex.de/docker-plugins) under the section "Quobyte Mount and Docker — whats special"
## Creating a pod
See example:
<!-- BEGIN MUNGE: EXAMPLE ./quobyte-pod.yaml -->
```yaml
apiVersion: v1
kind: Pod
metadata:
name: quobyte
spec:
containers:
- name: quobyte
image: kubernetes/pause
volumeMounts:
- mountPath: /mnt
name: quobytevolume
volumes:
- name: quobytevolume
quobyte:
registry: registry:7861
volume: testVolume
readOnly: false
user: root
group: root
```
[Download example](quobyte-pod.yaml?raw=true)
<!-- END MUNGE: EXAMPLE ./quobyte-pod.yaml -->
Parameters:
* **registry** The Quobyte registry to use to mount the volume. You can specify the registry as a `<host>:<port>` pair, or, to use multiple registries, separate them with commas, e.g. `<host1>:<port>,<host2>:<port>,<host3>:<port>` (see the sketch after this list). The host can be an IP address or, if you have working DNS, a DNS name.
* **volume** represents a Quobyte volume, which must be created before usage.
* **readOnly** is a boolean that sets the mountpoint to read-only or read-write.
* **user** maps all access to this user. Default is `root`.
* **group** maps all access to this group. Default is `nfsnobody`.
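As a sketch of the multi-registry form described above (hostnames are placeholders), the `volumes` section of a pod could look like:

```yaml
volumes:
- name: quobytevolume
  quobyte:
    # Comma-separated list of registries; hosts may be IPs or DNS names.
    registry: registry-1.example.com:7861,registry-2.example.com:7861,registry-3.example.com:7861
    volume: testVolume
    readOnly: false
    user: root
    group: root
```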
Creating the pod:
```bash
$ kubectl create -f examples/volumes/quobyte/quobyte-pod.yaml
```
Verify that the pod is running:
```bash
$ kubectl get pods quobyte
NAME READY STATUS RESTARTS AGE
quobyte 1/1 Running 0 48m
$ kubectl get pods quobyte --template '{{.status.hostIP}}{{"\n"}}'
10.245.1.3
```
SSH onto the machine and validate that Quobyte is mounted:
```bash
$ mount | grep quobyte
quobyte@10.239.10.21:7861/ on /var/lib/kubelet/plugins/kubernetes.io~quobyte type fuse (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other)
$ docker inspect --format '{{ range .Mounts }}{{ if eq .Destination "/mnt"}}{{ .Source }}{{ end }}{{ end }}' 55ab97593cd3
/var/lib/kubelet/plugins/kubernetes.io~quobyte/root#root@testVolume
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/quobyte/Readme.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/quobyte/Readme.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/quobyte/Readme.md)

View File

@ -1,59 +1 @@
# How to Use it?
Install Ceph on the Kubernetes host. For example, on Fedora 21:

```console
# yum -y install ceph-common
```
If you don't have a Ceph cluster, you can set up a [containerized Ceph cluster](https://github.com/ceph/ceph-docker)
Then get the keyring from the Ceph cluster and copy it to */etc/ceph/keyring*.
Once you have installed Ceph and a new Kubernetes cluster, you can create a pod based on my examples [rbd.json](rbd.json) and [rbd-with-secret.json](rbd-with-secret.json). In the pod JSON, you need to provide the following information (a YAML sketch follows the list below).
- *monitors*: Ceph monitors.
- *pool*: The name of the RADOS pool, if not provided, default *rbd* pool is used.
- *image*: The image name that rbd has created.
- *user*: The RADOS user name. If not provided, default *admin* is used.
- *keyring*: The path to the keyring file. If not provided, default */etc/ceph/keyring* is used.
- *secretName*: The name of the authentication secrets. If provided, *secretName* overrides *keyring*. Note, see below about how to create a secret.
- *fsType*: The filesystem type (ext4, xfs, etc.) that is formatted on the device.
- *readOnly*: Whether the filesystem is used as readOnly.
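For readers who prefer YAML, a minimal pod sketch with an `rbd` volume is shown below; the monitor address, pool, image, and mount path are placeholders for your own cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd
spec:
  containers:
  - name: rbd-ro
    image: kubernetes/pause
    volumeMounts:
    - name: rbdpd
      mountPath: /mnt/rbd
  volumes:
  - name: rbdpd
    rbd:
      monitors:
      # Replace with your Ceph monitor address(es).
      - 10.16.154.78:6789
      pool: rbd
      image: foo
      user: admin
      keyring: /etc/ceph/keyring
      fsType: ext4
      readOnly: true
```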
# Use Ceph Authentication Secret
If a Ceph authentication secret is provided, the secret should first be *base64 encoded*, and the encoded string placed in a secret YAML. For example, the base64-encoded secret for Ceph user `kube` can be obtained with the following command:
```console
# grep key /etc/ceph/ceph.client.kube.keyring |awk '{printf "%s", $NF}'|base64
QVFBTWdYaFZ3QkNlRGhBQTlubFBhRnlmVVNhdEdENGRyRldEdlE9PQ==
```
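The linked secret manifest places that base64 string under the `key` field of a Secret, roughly as in this sketch (the value is the example string from above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBTWdYaFZ3QkNlRGhBQTlubFBhRnlmVVNhdEdENGRyRldEdlE9PQ==
```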
The full example YAML is provided [here](secret/ceph-secret.yaml). Then post the secret through ```kubectl``` with the following command.
```console
# kubectl create -f examples/volumes/rbd/secret/ceph-secret.yaml
```
# Get started
Here are my commands:
```console
# kubectl create -f examples/volumes/rbd/rbd.json
# kubectl get pods
```
On the Kubernetes host, I see the following in the `mount` output:
```console
#mount |grep kub
/dev/rbd0 on /var/lib/kubelet/plugins/kubernetes.io/rbd/rbd/kube-image-foo type ext4 (ro,relatime,stripe=4096,data=ordered)
/dev/rbd0 on /var/lib/kubelet/pods/ec2166b4-de07-11e4-aaf5-d4bed9b39058/volumes/kubernetes.io~rbd/rbdpd type ext4 (ro,relatime,stripe=4096,data=ordered)
```
If you ssh to that machine, you can run `docker ps` to see the actual pod and `docker inspect` to see the volumes used by the container.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/rbd/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/rbd/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/rbd/README.md)

View File

@ -1,302 +1 @@
# Dell EMC ScaleIO Volume Plugin for Kubernetes
This document shows how to configure Kubernetes resources to consume storage from volumes hosted on a ScaleIO cluster.
## Pre-Requisites
* Kubernetes ver 1.6 or later
* ScaleIO ver 2.0 or later
* A ScaleIO cluster with an API gateway
* ScaleIO SDC binary installed/configured on each Kubernetes node that will consume storage
## ScaleIO Setup
This document assumes you are familiar with ScaleIO and have a cluster ready to go. If you are *not familiar* with ScaleIO, please review *Learn how to set up a 3-node* [ScaleIO cluster on Vagrant](https://github.com/codedellemc/labs/tree/master/setup-scaleio-vagrant) and see the *General instructions on* [setting up ScaleIO](https://www.emc.com/products-solutions/trial-software-download/scaleio.htm).
For this demonstration, ensure the following:
- The ScaleIO `SDC` component is installed and properly configured on all Kubernetes nodes where deployed pods will consume ScaleIO-backed volumes.
- You have a configured ScaleIO gateway that is accessible from the Kubernetes nodes.
## Deploy Kubernetes Secret for ScaleIO
The ScaleIO plugin uses a Kubernetes Secret object to store the `username` and `password` credentials.
Kubernetes requires the secret values to be base64-encoded, which simply obfuscates (does not encrypt) the clear text, as shown below.
```
$> echo -n "siouser" | base64
c2lvdXNlcg==
$> echo -n "sc@l3I0" | base64
c2NAbDNJMA==
```
The previous commands generate base64-encoded values for the username and password.
Remember to generate the credentials for your own environment and copy them into a secret file similar to the following.
File: [secret.yaml](secret.yaml)
```
apiVersion: v1
kind: Secret
metadata:
name: sio-secret
type: kubernetes.io/scaleio
data:
username: c2lvdXNlcg==
password: c2NAbDNJMA==
```
Notice the name of the secret specified above as `sio-secret`. It will be referenced in other YAML files. Next, deploy the secret.
```
$ kubectl create -f ./examples/volumes/scaleio/secret.yaml
```
## Deploying Pods with Persistent Volumes
The example presented in this section shows how the ScaleIO volume plugin can automatically attach, format, and mount an existing ScaleIO volume for a pod.
The Kubernetes ScaleIO volume spec supports the following attributes (a sketch that also sets the optional pool attributes follows the table):
| Attribute | Description |
|-----------|-------------|
| gateway | address to a ScaleIO API gateway (required)|
| system | the name of the ScaleIO system (required)|
| protectionDomain| the name of the ScaleIO protection domain (default `default`)|
| storagePool| the name of the volume storage pool (default `default`)|
| storageMode| the storage provision mode: `ThinProvisionned` (default) or `ThickProvisionned`|
| volumeName| the name of an existing volume in ScaleIO (required)|
| secretRef:name| reference to a configured Secret object (required, see Secret earlier)|
| readOnly| specifies the access mode to the mounted volume (default `false`)|
| fsType| the file system to use for the volume (default `ext4`)|
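For reference, a volume stanza that also sets the optional protection domain and storage pool explicitly might look like the sketch below (values mirror the defaults in the table above):

```yaml
volumes:
- name: vol-0
  scaleIO:
    gateway: https://localhost:443/api
    system: scaleio
    protectionDomain: default
    storagePool: default
    volumeName: vol-0
    secretRef:
      name: sio-secret
    fsType: xfs
    readOnly: false
```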
### Create Volume
Static persistent volumes require that the volume, to be consumed by the pod, be already created in ScaleIO. You can use your ScaleIO tooling to create a new volume or use the name of a volume that already exists in ScaleIO. For this demo, we assume there's a volume named `vol-0`. If you want to use an existing volume, ensure its name is reflected properly in the `volumeName` attribute below.
### Deploy Pod YAML
Create a pod YAML file that declares the volume (above) to be used.
File: [pod.yaml](pod.yaml)
```
apiVersion: v1
kind: Pod
metadata:
name: pod-0
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: pod-0
volumeMounts:
- mountPath: /test-pd
name: vol-0
volumes:
- name: vol-0
scaleIO:
gateway: https://localhost:443/api
system: scaleio
volumeName: vol-0
secretRef:
name: sio-secret
fsType: xfs
```
Notice the following in the previous YAML:
- Update the `gateway` to point to your ScaleIO gateway endpoint.
- The `volumeName` attribute refers to the name of an existing volume in ScaleIO.
- The `secretRef:name` attribute references the name of the secret object deployed earlier.
Next, deploy the pod.
```
$> kubectl create -f examples/volumes/scaleio/pod.yaml
```
You can verify the pod:
```
$> kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-0 1/1 Running 0 33s
```
Or for more detail, use
```
kubectl describe pod pod-0
```
You can see the attached/mapped volume on the node:
```
$> lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
scinia 252:0 0 8G 0 disk /var/lib/kubelet/pods/135986c7-dcb7-11e6-9fbf-080027c990a7/volumes/kubernetes.io~scaleio/vol-0
```
## StorageClass and Dynamic Provisioning
In the example in this section, we will see how the ScaleIO volume plugin can automatically provision volumes as described in a `StorageClass`.
The ScaleIO volume plugin is a dynamic provisioner identified as `kubernetes.io/scaleio` and supports the following parameters (a sketch follows the table):
| Parameter | Description |
|-----------|-------------|
| gateway | address to a ScaleIO API gateway (required)|
| system | the name of the ScaleIO system (required)|
| protectionDomain| the name of the ScaleIO protection domain (default `default`)|
| storagePool| the name of the volume storage pool (default `default`)|
| storageMode| the storage provision mode: `ThinProvisionned` (default) or `ThickProvisionned`|
| secretRef| reference to the name of a configured Secret object (required)|
| readOnly| specifies the access mode to the mounted volume (default `false`)|
| fsType| the file system to use for the volume (default `ext4`)|
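As a sketch, a class that also pins the protection domain and storage pool (names are illustrative; the walkthrough below uses a simpler class) would look like:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sio-pool-a
provisioner: kubernetes.io/scaleio
parameters:
  gateway: https://localhost:443/api
  system: scaleio
  protectionDomain: default
  storagePool: default
  secretRef: sio-secret
  fsType: xfs
```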
### ScaleIO StorageClass
Define a new `StorageClass` as shown in the following YAML.
File [sc.yaml](sc.yaml)
```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: sio-small
provisioner: kubernetes.io/scaleio
parameters:
gateway: https://localhost:443/api
system: scaleio
protectionDomain: default
secretRef: sio-secret
fsType: xfs
```
Note the following:
- The `name` attribute is set to `sio-small`. It will be referenced later.
- The `secretRef` attribute matches the name of the Secret object created earlier.
Next, deploy the storage class file.
```
$> kubectl create -f examples/volumes/scaleio/sc.yaml
$> kubectl get sc
NAME TYPE
sio-small kubernetes.io/scaleio
```
### PVC for the StorageClass
The next step is to define/deploy a `PersistentVolumeClaim` that will use the StorageClass.
File [sc-pvc.yaml](sc-pvc.yaml)
```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-sio-small
annotations:
volume.beta.kubernetes.io/storage-class: sio-small
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
```
Note the `annotations:` entry, which specifies the annotation `volume.beta.kubernetes.io/storage-class: sio-small`, referencing the name of the storage class defined earlier.
Next, we deploy the PVC file for the storage class. This step causes the Kubernetes ScaleIO plugin to create the volume in the storage system.
```
$> kubectl create -f examples/volumes/scaleio/sc-pvc.yaml
```
You can verify in the ScaleIO dashboard that a new volume was created. You can also inspect the newly created volume as follows.
```
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
pvc-sio-small Bound pvc-5fc78518-dcae-11e6-a263-080027c990a7 10Gi RWO 1h
```
### Pod for PVC and SC
At this point, the volume is created (by the claim) in the storage system. To use it, we must define a pod that references the volume as done in this YAML.
File [pod-sc-pvc.yaml](pod-sc-pvc.yaml)
```
kind: Pod
apiVersion: v1
metadata:
name: pod-sio-small
spec:
containers:
- name: pod-sio-small-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- mountPath: /test
name: test-data
volumes:
- name: test-data
persistentVolumeClaim:
claimName: pvc-sio-small
```
Notice that the `claimName:` attribute refers to the name of the PVC defined and deployed earlier. Next, let us deploy the file.
```
$> kubectl create -f examples/volumes/scaleio/pod-sc-pvc.yaml
```
We can now verify that the new pod is deployed OK.
```
kubectl get pod
NAME READY STATUS RESTARTS AGE
pod-0 1/1 Running 0 23m
pod-sio-small 1/1 Running 0 5s
```
You can use the ScaleIO dashboard to verify that the new volume has one attachment. You can verify the volume information for the pod:
```
$> kubectl describe pod pod-sio-small
...
Volumes:
test-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-sio-small
ReadOnly: false
...
```
Lastly, you can see the volume's attachment on the Kubernetes node:
```
$> lsblk
...
scinia 252:0 0 8G 0 disk /var/lib/kubelet/pods/135986c7-dcb7-11e6-9fbf-080027c990a7/volumes/kubernetes.io~scaleio/vol-0
scinib 252:16 0 16G 0 disk /var/lib/kubelet/pods/62db442e-dcba-11e6-9fbf-080027c990a7/volumes/kubernetes.io~scaleio/sio-5fc9154ddcae11e68db708002
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/scaleio/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/scaleio/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/scaleio/README.md)

View File

@ -1,475 +1 @@
# StorageOS Volume
- [StorageOS](#storageos)
- [Prerequisites](#prerequisites)
- [Examples](#examples)
- [Pre-provisioned Volumes](#pre-provisioned)
- [Pod](#pod)
- [Persistent Volumes](#persistent-volumes)
- [Dynamic Provisioning](#dynamic-provisioning)
- [Storage Class](#storage-class)
- [API Configuration](#api-configuration)
## StorageOS
[StorageOS](https://www.storageos.com) can be used as a storage provider for your Kubernetes cluster. StorageOS runs as a container within your Kubernetes environment, making local storage accessible from any node within the Kubernetes cluster. Data can be replicated to protect against node failure.
At its core, StorageOS provides block storage. You may choose the filesystem type to install to make devices usable from within containers.
## Prerequisites
The StorageOS container must be running on each Kubernetes node that wants to contribute storage or that wants to consume storage. For more information on how you can run StorageOS, consult the [StorageOS documentation](https://docs.storageos.com).
## API Configuration
The StorageOS provider has been pre-configured to use the StorageOS API defaults, and no additional configuration is required for testing. If you have changed the API port, or have removed the default account or changed its password (recommended), you must specify the new settings. This is done using Kubernetes [Secrets](../../../docs/user-guide/secrets/).
API configuration is set by using Kubernetes secrets. The configuration secret supports the following parameters:
* `apiAddress`: The address of the StorageOS API. This is optional and defaults to `tcp://localhost:5705`, which should be correct if the StorageOS container is running using the default settings.
* `apiUsername`: The username to authenticate to the StorageOS API with.
* `apiPassword`: The password to authenticate to the StorageOS API with.
* `apiVersion`: Optional, string value defaulting to `1`. Only set this if requested in StorageOS documentation.
Multiple credentials can be used by creating different secrets.
For Persistent Volumes, secrets must be created in the Pod namespace. Specify the secret name using the `secretName` parameter when attaching existing volumes in Pods or creating new persistent volumes.
For dynamically provisioned volumes using storage classes, the secret can be created in any namespace. Note that you would want this to be an admin-controlled namespace with restricted access to users. Specify the secret namespace as parameter `adminSecretNamespace` and name as parameter `adminSecretName` in storage classes.
Example spec:
```yaml
apiVersion: v1
kind: Secret
metadata:
name: storageos-secret
type: "kubernetes.io/storageos"
data:
apiAddress: dGNwOi8vMTI3LjAuMC4xOjU3MDU=
apiUsername: c3RvcmFnZW9z
apiPassword: c3RvcmFnZW9z
```
Values for `apiAddress`, `apiUsername` and `apiPassword` can be generated with:
```bash
$ echo -n "tcp://127.0.0.1:5705" | base64
dGNwOi8vMTI3LjAuMC4xOjU3MDU=
```
Create the secret:
```bash
$ kubectl create -f storageos-secret.yaml
secret "storageos-secret" created
```
Verify the secret:
```bash
$ kubectl describe secret storageos-secret
Name: storageos-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/storageos
Data
====
apiAddress: 20 bytes
apiPassword: 8 bytes
apiUsername: 8 bytes
```
## Examples
These examples assume you have a running Kubernetes cluster with the StorageOS container running on each node, and that an API configuration secret called `storageos-secret` has been created in the `default` namespace.
### Pre-provisioned Volumes
#### Pod
Pods can be created that access volumes directly.
1. Create a volume using the StorageOS UI, CLI or API. Consult the [StorageOS documentation](https://docs.storageos.com) for details.
1. Create a pod that refers to the new volume. In this case the volume is named `redis-vol01`.
Example spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
storageos:
# This volume must already exist within StorageOS
volumeName: redis-vol01
# volumeNamespace is optional, and specifies the volume scope within
# StorageOS. If no namespace is provided, it will use the namespace
# of the pod. Set to `default` or leave blank if you are not using
# namespaces.
#volumeNamespace: test-storageos
# The filesystem type to format the volume with, if required.
fsType: ext4
# The secret name for API credentials
secretName: storageos-secret
```
[Download example](storageos-pod.yaml?raw=true)
Create the pod:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-pod.yaml
```
Verify that the pod is running:
```bash
$ kubectl get pods test-storageos-redis
NAME READY STATUS RESTARTS AGE
test-storageos-redis 1/1 Running 0 30m
```
### Persistent Volumes
1. Create a volume using the StorageOS UI, CLI or API. Consult the [StorageOS documentation](https://docs.storageos.com) for details.
1. Create the persistent volume `redis-vol01`.
Example spec:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageos:
# This volume must already exist within StorageOS
volumeName: pv0001
# volumeNamespace is optional, and specifies the volume scope within
# StorageOS. Set to `default` or leave blank if you are not using
# namespaces.
#volumeNamespace: default
# The filesystem type to create on the volume, if required.
fsType: ext4
# The secret name for API credentials
secretName: storageos-secret
```
[Download example](storageos-pv.yaml?raw=true)
Create the persistent volume:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-pv.yaml
```
Verify that the pv has been created:
```bash
$ kubectl describe pv pv0001
Name: pv0001
Labels: <none>
Annotations: <none>
StorageClass: fast
Status: Available
Claim:
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: StorageOS (a StorageOS Persistent Disk resource)
VolumeName: pv0001
VolumeNamespace:
FSType: ext4
ReadOnly: false
Events: <none>
```
1. Create persistent volume claim
Example spec:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc0001
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: fast
```
[Download example](storageos-pvc.yaml?raw=true)
Create the persistent volume claim:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-pvc.yaml
```
Verify that the pvc has been created:
```bash
$ kubectl describe pvc pvc0001
Name: pvc0001
Namespace: default
StorageClass: fast
Status: Bound
Volume: pv0001
Labels: <none>
Capacity: 5Gi
Access Modes: RWO
No events.
```
1. Create pod which uses the persistent volume claim
Example spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis-pvc
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: pvc0001
```
[Download example](storageos-pvcpod.yaml?raw=true)
Create the pod:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-pvcpod.yaml
```
Verify that the pod has been created:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-storageos-redis-pvc 1/1 Running 0 40s
```
### Dynamic Provisioning
Dynamic provisioning can be used to auto-create volumes when needed. It requires a Storage Class, a Persistent Volume Claim, and a Pod.
#### Storage Class
Kubernetes administrators can use storage classes to define different types of storage made available within the cluster. Each storage class definition specifies a provisioner type and any parameters needed to access it, as well as any other configuration.
StorageOS supports the following storage class parameters:
* `pool`: The name of the StorageOS distributed capacity pool to provision the volume from. Uses the `default` pool which is normally present if not specified.
* `description`: The description to assign to volumes that were created dynamically. All volume descriptions will be the same for the storage class, but different storage classes can be used to allow descriptions for different use cases. Defaults to `Kubernetes volume`.
* `fsType`: The default filesystem type to request. Note that user-defined rules within StorageOS may override this value. Defaults to `ext4`.
* `adminSecretNamespace`: The namespace where the API configuration secret is located. Required if adminSecretName set.
* `adminSecretName`: The name of the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted.
1. Create storage class
Example spec:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: sc-fast
provisioner: kubernetes.io/storageos
parameters:
pool: default
description: Kubernetes volume
fsType: ext4
adminSecretNamespace: default
adminSecretName: storageos-secret
```
[Download example](storageos-sc.yaml?raw=true)
Create the storage class:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-sc.yaml
```
Verify the storage class has been created:
```bash
$ kubectl describe storageclass fast
Name: fast
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/storageos
Parameters: description=Kubernetes volume,fsType=ext4,pool=default,secretName=storageos-secret
No events.
```
1. Create persistent volume claim
Example spec:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: fast0001
spec:
storageClassName: fast
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
```
Create the persistent volume claim (pvc):
```bash
$ kubectl create -f examples/volumes/storageos/storageos-sc-pvc.yaml
```
Verify the pvc has been created:
```bash
$ kubectl describe pvc fast0001
Name: fast0001
Namespace: default
StorageClass: fast
Status: Bound
Volume: pvc-480952e7-f8e0-11e6-af8c-08002736b526
Labels: <none>
Capacity: 5Gi
Access Modes: RWO
Events:
<snip>
```
A new persistent volume will also be created and bound to the pvc:
```bash
$ kubectl describe pv pvc-480952e7-f8e0-11e6-af8c-08002736b526
Name: pvc-480952e7-f8e0-11e6-af8c-08002736b526
Labels: storageos.driver=filesystem
StorageClass: fast
Status: Bound
Claim: default/fast0001
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: StorageOS (a StorageOS Persistent Disk resource)
VolumeName: pvc-480952e7-f8e0-11e6-af8c-08002736b526
Namespace: default
FSType: ext4
ReadOnly: false
No events.
```
1. Create pod which uses the persistent volume claim
Example spec:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
name: redis
role: master
name: test-storageos-redis-sc-pvc
spec:
containers:
- name: master
image: kubernetes/redis:v1
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: redis-data
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: fast0001
```
[Download example](storageos-sc-pvcpod.yaml?raw=true)
Create the pod:
```bash
$ kubectl create -f examples/volumes/storageos/storageos-sc-pvcpod.yaml
```
Verify that the pod has been created:
```bash
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-storageos-redis-sc-pvc 1/1 Running 0 44s
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/storageos/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/storageos/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/storageos/README.md)

View File

@ -1,674 +1 @@
# vSphere Volume
- [Prerequisites](#prerequisites)
- [Examples](#examples)
- [Volumes](#volumes)
- [Persistent Volumes](#persistent-volumes)
- [Storage Class](#storage-class)
- [Storage Policy Management inside kubernetes](#storage-policy-management-inside-kubernetes)
- [Using existing vCenter SPBM policy](#using-existing-vcenter-spbm-policy)
- [Virtual SAN policy support](#virtual-san-policy-support)
- [Stateful Set](#stateful-set)
## Prerequisites
- Kubernetes with vSphere Cloud Provider configured.
For cloud provider configuration please refer to the [vSphere getting started guide](http://kubernetes.io/docs/getting-started-guides/vsphere/).
## Examples
### Volumes
1. Create VMDK.
First ssh into ESX and then use the following command to create a VMDK:
```shell
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
```
2. Create Pod which uses 'myDisk.vmdk'.
See example
```yaml
apiVersion: v1
kind: Pod
metadata:
name: test-vmdk
spec:
containers:
- image: gcr.io/google_containers/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-vmdk
name: test-volume
volumes:
- name: test-volume
# This VMDK volume must already exist.
vsphereVolume:
volumePath: "[datastore1] volumes/myDisk"
fsType: ext4
```
[Download example](vsphere-volume-pod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pod.yaml
```
Verify that pod is running:
```bash
$ kubectl get pods test-vmdk
NAME READY STATUS RESTARTS AGE
test-vmdk 1/1 Running 0 48m
```
### Persistent Volumes
1. Create VMDK.
First ssh into ESX and then use the following command to create a VMDK:
```shell
vmkfstools -c 2G /vmfs/volumes/datastore1/volumes/myDisk.vmdk
```
2. Create Persistent Volume.
See example:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
vsphereVolume:
volumePath: "[datastore1] volumes/myDisk"
fsType: ext4
```
In the above example, datastore1 is located in the root folder. If the datastore is a member of a Datastore Cluster or located in a sub folder, the folder path needs to be provided in the volumePath as below.
```yaml
vsphereVolume:
VolumePath: "[DatastoreCluster/datastore1] volumes/myDisk"
```
[Download example](vsphere-volume-pv.yaml?raw=true)
Creating the persistent volume:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pv.yaml
```
Verifying persistent volume is created:
``` bash
$ kubectl describe pv pv0001
Name: pv0001
Labels: <none>
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [datastore1] volumes/myDisk
FSType: ext4
No events.
```
3. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc0001
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
```
[Download example](vsphere-volume-pvc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvc0001
Name: pvc0001
Namespace: default
Status: Bound
Volume: pv0001
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No events.
```
3. Create Pod which uses Persistent Volume Claim.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pvpod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- name: test-volume
mountPath: /test-vmdk
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: pvc0001
```
[Download example](vsphere-volume-pvcpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Storage Class
__Note: Here you don't need to create a VMDK; it is created for you.__
1. Create Storage Class.
Example 1:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
fstype: ext3
```
[Download example](vsphere-volume-sc-fast.yaml?raw=true)
You can also specify the datastore in the StorageClass, as shown in example 2. The volume will be created on the datastore specified in the storage class.
This field is optional; if not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
datastore: VSANDatastore
```
If the datastore is a member of a Datastore Cluster or within some sub folder, the datastore folder path needs to be provided in the datastore parameter as below.
```yaml
parameters:
datastore: DatastoreCluster/VSANDatastore
```
[Download example](vsphere-volume-sc-with-datastore.yaml?raw=true)
Creating the storageclass:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-fast.yaml
```
Verifying storage class is created:
``` bash
$ kubectl describe storageclass fast
Name: fast
IsDefaultClass: No
Annotations: <none>
Provisioner: kubernetes.io/vsphere-volume
Parameters: diskformat=zeroedthick,fstype=ext3
No events.
```
2. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvcsc001
annotations:
volume.beta.kubernetes.io/storage-class: fast
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
```
[Download example](vsphere-volume-pvcsc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvcsc001
Name: pvcsc001
Namespace: default
StorageClass: fast
Status: Bound
Volume: pvc-83295256-f8e0-11e6-8263-005056b2349c
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 persistentvolume-controller Normal ProvisioningSucceeded Successfully provisioned volume pvc-83295256-f8e0-11e6-8263-005056b2349c using kubernetes.io/vsphere-volume
```
The Persistent Volume is automatically created and bound to this PVC.
Verifying the persistent volume is created:
``` bash
$ kubectl describe pv pvc-83295256-f8e0-11e6-8263-005056b2349c
Name: pvc-83295256-f8e0-11e6-8263-005056b2349c
Labels: <none>
StorageClass: fast
Status: Bound
Claim: default/pvcsc001
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [datastore1] kubevols/kubernetes-dynamic-pvc-83295256-f8e0-11e6-8263-005056b2349c.vmdk
FSType: ext3
No events.
```
__Note: The VMDK is created inside the ```kubevols``` folder in the datastore which is mentioned in the 'vsphere' cloudprovider configuration.
The cloudprovider config is created during setup of the Kubernetes cluster on vSphere.__
3. Create Pod which uses Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pvpod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- name: test-volume
mountPath: /test-vmdk
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: pvcsc001
```
[Download example](vsphere-volume-pvcscpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Storage Policy Management inside kubernetes
#### Using existing vCenter SPBM policy
Admins can use an existing vCenter Storage Policy Based Management (SPBM) policy to configure a persistent volume with that SPBM policy.
__Note: Here you don't need to create a persistent volume; it is created for you.__
1. Create Storage Class.
Example 1:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
storagePolicyName: gold
```
[Download example](vsphere-volume-spbm-policy.yaml?raw=true)
The admin specifies the SPBM policy - "gold" as part of storage class definition for dynamic volume provisioning. When a PVC is created, the persistent volume will be provisioned on a compatible datastore with maximum free space that satisfies the "gold" storage policy requirements.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
storagePolicyName: gold
datastore: VSANDatastore
```
[Download example](vsphere-volume-spbm-policy-with-datastore.yaml?raw=true)
The admin can also specify a custom datastore, along with the SPBM policy name, where the volume should be provisioned. When a PVC is created, the vSphere Cloud Provider checks whether the user-specified datastore satisfies the "gold" storage policy requirements. If yes, it provisions the persistent volume on the user-specified datastore. If not, it returns an error to the user stating that the specified datastore is not compatible with the "gold" storage policy requirements.
#### Virtual SAN policy support
vSphere Infrastructure (VI) admins have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a persistent volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
The official [VSAN policy documentation](https://pubs.vmware.com/vsphere-65/index.jsp?topic=%2Fcom.vmware.vsphere.virtualsan.doc%2FGUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html) describes in detail each of the individual storage capabilities that are supported by VSAN. The user can specify these storage capabilities as part of the storage class definition based on application needs.
The policy settings can be one or more of the following (a sketch combining several of them follows the list):
* *hostFailuresToTolerate*: represents NumberOfFailuresToTolerate
* *diskStripes*: represents NumberofDiskStripesPerObject
* *objectSpaceReservation*: represents ObjectSpaceReservation
* *cacheReservation*: represents FlashReadCacheReservation
* *iopsLimit*: represents IOPSLimitForObject
* *forceProvisioning*: represents if volume must be Force Provisioned
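As a sketch, a storage class combining several of these capabilities might look like the following; the parameter names mirror the list above and the values are illustrative, so verify them against the VSAN documentation before use:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: vsan-striped
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  diskStripes: "2"
  objectSpaceReservation: "30"
  iopsLimit: "100"
```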
__Note: Here you don't need to create a persistent volume; it is created for you.__
1. Create Storage Class.
Example 1:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
hostFailuresToTolerate: "2"
cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities.yaml?raw=true)
Here a persistent volume will be created with the Virtual SAN capabilities - hostFailuresToTolerate set to 2 and cachereservation set to 20% read cache reserved for the storage object. The persistent volume will also be a *zeroedthick* disk.
The official [VSAN policy documentation](https://pubs.vmware.com/vsphere-65/index.jsp?topic=%2Fcom.vmware.vsphere.virtualsan.doc%2FGUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html) describes in detail each of the individual storage capabilities that are supported by VSAN and can be configured on the virtual disk.
You can also specify the datastore in the StorageClass, as shown in example 2. The volume will be created on the datastore specified in the storage class.
This field is optional; if not specified, as in example 1, the volume will be created on the datastore specified in the vSphere config file used to initialize the vSphere Cloud Provider.
Example 2:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: zeroedthick
datastore: VSANDatastore
hostFailuresToTolerate: "2"
cachereservation: "20"
```
[Download example](vsphere-volume-sc-vsancapabilities-with-datastore.yaml?raw=true)
__Note: If you do not apply a storage policy during dynamic provisioning on a VSAN datastore, it will use a default Virtual SAN policy.__
Creating the storageclass:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-sc-vsancapabilities.yaml
```
Verifying storage class is created:
``` bash
$ kubectl describe storageclass fast
Name: fast
Annotations: <none>
Provisioner: kubernetes.io/vsphere-volume
Parameters: diskformat=zeroedthick, hostFailuresToTolerate="2", cachereservation="20"
No events.
```
2. Create Persistent Volume Claim.
See example:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvcsc-vsan
annotations:
volume.beta.kubernetes.io/storage-class: fast
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
```
[Download example](vsphere-volume-pvcsc.yaml?raw=true)
Creating the persistent volume claim:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcsc.yaml
```
Verifying persistent volume claim is created:
``` bash
$ kubectl describe pvc pvcsc-vsan
Name: pvcsc-vsan
Namespace: default
Status: Bound
Volume: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Capacity: 2Gi
Access Modes: RWO
No events.
```
The Persistent Volume is automatically created and bound to this PVC.
Verifying the persistent volume is created:
``` bash
$ kubectl describe pv pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Name: pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d
Labels: <none>
Status: Bound
Claim: default/pvcsc-vsan
Reclaim Policy: Delete
Access Modes: RWO
Capacity: 2Gi
Message:
Source:
Type: vSphereVolume (a Persistent Disk resource in vSphere)
VolumePath: [VSANDatastore] kubevols/kubernetes-dynamic-pvc-80f7b5c1-94b6-11e6-a24f-005056a79d2d.vmdk
FSType: ext4
No events.
```
__Note: The VMDK is created inside the ```kubevols``` folder in the datastore which is mentioned in the 'vsphere' cloudprovider configuration.
The cloudprovider config is created during setup of the Kubernetes cluster on vSphere.__
3. Create Pod which uses Persistent Volume Claim with storage class.
See example:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: pvpod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/test-webserver
volumeMounts:
- name: test-volume
mountPath: /test
volumes:
- name: test-volume
persistentVolumeClaim:
claimName: pvcsc-vsan
```
[Download example](vsphere-volume-pvcscpod.yaml?raw=true)
Creating the pod:
``` bash
$ kubectl create -f examples/volumes/vsphere/vsphere-volume-pvcscpod.yaml
```
Verifying pod is created:
``` bash
$ kubectl get pod pvpod
NAME READY STATUS RESTARTS AGE
pvpod 1/1 Running 0 48m
```
### Stateful Set
vSphere volumes can be consumed by Stateful Sets.
1. Create a storage class that will be used by the ```volumeClaimTemplates``` of a Stateful Set.
See example:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: thin-disk
provisioner: kubernetes.io/vsphere-volume
parameters:
diskformat: thin
```
[Download example](simple-storageclass.yaml)
2. Create a Stateful set that consumes storage from the Storage Class created.
See example:
```yaml
---
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
replicas: 14
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: gcr.io/google_containers/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
annotations:
volume.beta.kubernetes.io/storage-class: thin-disk
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
```
This will create a Persistent Volume Claim for each replica and dynamically provision a volume for each claim if an existing volume cannot be bound to the claim.
[Download example](simple-statefulset.yaml)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/volumes/vsphere/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
This file has moved to [https://github.com/kubernetes/examples/blob/master/staging/volumes/vsphere/README.md](https://github.com/kubernetes/examples/blob/master/staging/volumes/vsphere/README.md)