# CockroachDB on Kubernetes as a PetSet

This example deploys [CockroachDB](https://cockroachlabs.com) on Kubernetes as
a PetSet. CockroachDB is a distributed, scalable NewSQL database. Please see
[the homepage](https://cockroachlabs.com) and the
[documentation](https://www.cockroachlabs.com/docs/) for details.

## Limitations

### PetSet limitations

Standard PetSet limitations apply: there is currently no way to use node-local
storage (outside of single-node tests), so running CockroachDB on external
storage likely incurs a performance penalty. Note that CockroachDB already
replicates its data and thus should not be deployed on a persistent volume
that itself replicates internally, since the data would then be stored twice
over, wasting disk I/O and capacity. High-performance use cases on a private
Kubernetes cluster should consider a DaemonSet deployment instead.

### Recovery after persistent storage failure

A persistent storage failure (e.g. losing the hard drive) is handled
gracefully by CockroachDB as long as enough replicas survive (two out of three
by default). Due to the bootstrapping in this deployment, a storage failure of
the first node is special: the administrator must manually prepopulate the
"new" storage medium by running an instance of CockroachDB with the `--join`
parameter. If this is not done, the first node will bootstrap a brand-new
cluster instead of rejoining the existing one, which will lead to a lot of
trouble.
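
As a rough sketch (the peer address and store path below are illustrative
assumptions, not values taken from this deployment's config), the manual
recovery step would look something like:

```shell
# Hypothetical recovery for the first pet after its disk was replaced:
# start CockroachDB pointed at the empty store and join a surviving peer,
# so it rejoins the existing cluster instead of bootstrapping a new one.
./cockroach start \
  --store=/cockroach/cockroach-data \
  --join=cockroachdb-1.cockroachdb:26257
```
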
### Dynamic provisioning

The deployment is written for a use case in which dynamic provisioning is
available. When that is not the case, the persistent volume claims need to be
created manually; see [minikube.sh](minikube.sh) for the necessary steps.
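
For illustration, a minimal hand-made volume might look like the sketch below
(the name, size, and host path are assumptions, not values from minikube.sh;
the volume must be large enough for the PetSet's claim template to bind to it):

```shell
# Create one PersistentVolume per pet so the PetSet's claims have
# something to bind to when no dynamic provisioner is running.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  labels:
    app: cockroachdb
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/cockroach-pv0
EOF
```
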
## Testing locally on minikube

Follow the steps in [minikube.sh](minikube.sh) (or simply run that file).

## Accessing the database

Along with our PetSet configuration, we expose a standard Kubernetes service
that offers a load-balanced virtual IP through which clients can access the
database. In our example, we've called this service `cockroachdb-public`.

Start up a client pod and open an interactive, (mostly) Postgres-flavored SQL
shell using:

```console
$ kubectl run -it cockroach-client --image=cockroachdb/cockroach --restart=Never --command -- bash
root@cockroach-client # ./cockroach sql --host cockroachdb-public
```

You can see example SQL statements for inserting and querying data in the
included [demo script](demo.sh), but almost any Postgres-style SQL will work.
Some more basic examples can be found in
[CockroachDB's documentation](https://www.cockroachlabs.com/docs/learn-cockroachdb-sql.html).
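
For a quick smoke test without entering the interactive shell, statements can
also be passed with the client's `-e` flag (the database and table names here
are made up for illustration, not taken from demo.sh):

```shell
# Run a few one-off statements against the load-balanced service.
./cockroach sql --host cockroachdb-public -e "CREATE DATABASE IF NOT EXISTS bank;"
./cockroach sql --host cockroachdb-public -e "CREATE TABLE IF NOT EXISTS bank.accounts (id INT PRIMARY KEY, balance DECIMAL);"
./cockroach sql --host cockroachdb-public -e "INSERT INTO bank.accounts VALUES (1, 1000.50); SELECT * FROM bank.accounts;"
```
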
## Simulating failures

When all (or enough) nodes are up, simulate a failure like this:

```shell
kubectl exec cockroachdb-0 -- /bin/bash -c "while true; do kill 1; done"
```

You can then reconnect to the database as demonstrated above and verify that
no data was lost. The example runs with three-fold replication, so it can
tolerate the failure of any one node at a time. Note also that there is a
brief period immediately after the creation of the cluster, while three-fold
replication is still being established, during which killing a node may lead
to unavailability.

The [demo script](demo.sh) gives an example of killing one instance of the
database and ensuring that the other replicas have all data that was written.

## Scaling up or down

Simply edit the PetSet (but note that you may need to create a new persistent
volume claim first). If you ran `minikube.sh`, there's a spare volume, so you
can immediately scale up by one. Convince yourself that the new node
immediately serves reads and writes.
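
Assuming the PetSet is named `cockroachdb` (an assumption based on the pod
names used above), the edit can also be made non-interactively with
`kubectl patch`, for example:

```shell
# Bump the replica count from 3 to 4 and watch the new pet come up.
kubectl patch petset cockroachdb -p '{"spec":{"replicas":4}}'
kubectl get pods -l app=cockroachdb -w
```
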
## Cleaning up when you're done

Because all of the resources in this example have been tagged with the label
`app=cockroachdb`, we can clean up everything that we created in one quick
command using a selector on that label:

```shell
kubectl delete petsets,pods,persistentvolumes,persistentvolumeclaims,services -l app=cockroachdb
```