
WARNING WARNING WARNING

PLEASE NOTE: This document applies to the HEAD of the source tree only. If you are using a released version of Kubernetes, you almost certainly want the docs that go with that version.

Documentation for specific releases can be found at releases.k8s.io.

WARNING WARNING WARNING

RethinkDB Cluster on Kubernetes

Setting up a RethinkDB cluster on Kubernetes

Features

  • Automatic cluster configuration by querying endpoint info from k8s
  • Simple

Quick start

Step 1

RethinkDB will discover its peers using the endpoints provided by a Kubernetes service, so first create the service so that the pods created later can query its endpoints:

$kubectl create -f driver-service.yaml

Check it out:

$kubectl get services
NAME               LABELS        SELECTOR       IP(S)         PORT(S)
[...]
rethinkdb-driver   db=influxdb   db=rethinkdb   10.0.27.114   28015/TCP
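
For reference, driver-service.yaml simply defines a service named rethinkdb-driver that selects the database pods and exposes the client driver port 28015 shown above. A minimal sketch of such a manifest (the actual file in this directory may differ in its details):

apiVersion: v1
kind: Service
metadata:
  labels:
    db: influxdb               # label as shown in the kubectl output above
  name: rethinkdb-driver
spec:
  ports:
  - port: 28015                # RethinkDB client driver port
    targetPort: 28015
  selector:
    db: rethinkdb              # matches the pods created by rc.yaml in Step 2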

Step 2

Start the first server in the cluster:

$kubectl create -f rc.yaml

Actually, you can start as many servers as you want at one time; just modify the replicas field in rc.yaml.
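
For orientation, rc.yaml defines a ReplicationController whose replicas field controls how many servers run. A rough sketch of its shape (the image tag and port numbers here are illustrative, not copied from the actual file):

apiVersion: v1
kind: ReplicationController
metadata:
  name: rethinkdb-rc
spec:
  replicas: 1                  # raise this to start more servers at once
  selector:
    db: rethinkdb
  template:
    metadata:
      labels:
        db: rethinkdb          # matched by the rethinkdb-driver service
    spec:
      containers:
      - name: rethinkdb
        image: gcr.io/google_containers/rethinkdb:1.16.0_1   # illustrative tag; see rc.yaml
        ports:
        - containerPort: 28015   # client driver port
        - containerPort: 29015   # intracluster port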

Check again:

$kubectl get pods
NAME                                                  READY     REASON    RESTARTS   AGE
[...]
rethinkdb-rc-r4tb0                                    1/1       Running   0          1m

Done!


Scale

You can scale up your cluster using kubectl scale, and the new pods will join the existing cluster automatically. For example:

$kubectl scale rc rethinkdb-rc --replicas=3
scaled

$kubectl get pods
NAME                                                  READY     REASON    RESTARTS   AGE
[...]
rethinkdb-rc-f32c5                                    1/1       Running   0          1m
rethinkdb-rc-m4d50                                    1/1       Running   0          1m
rethinkdb-rc-r4tb0                                    1/1       Running   0          3m

Admin

You need a separate pod (labeled as role: admin) to access the web admin UI:

kubectl create -f admin-pod.yaml
kubectl create -f admin-service.yaml
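
The admin pod is just another RethinkDB instance that carries an extra role: admin label so the admin service can select it alone. A hedged sketch of admin-pod.yaml (the image and other details are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb-admin
  labels:
    db: rethinkdb
    role: admin                # selected by the rethinkdb-admin service below
spec:
  containers:
  - name: rethinkdb
    image: gcr.io/google_containers/rethinkdb:1.16.0_1   # illustrative; see admin-pod.yaml
    ports:
    - containerPort: 8080      # web admin UI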

Find the service:

$kubectl get services
NAME               LABELS        SELECTOR                  IP(S)            PORT(S)
[...]
rethinkdb-admin    db=influxdb   db=rethinkdb,role=admin   10.0.131.19      8080/TCP
                                                           104.197.19.120
rethinkdb-driver   db=influxdb   db=rethinkdb              10.0.27.114      28015/TCP

We request an external load balancer in the admin-service.yaml file:

type: LoadBalancer

The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.
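
Putting it together, admin-service.yaml selects only the role=admin pod and asks for an external load balancer on the admin port. A minimal sketch, based on the selector and port shown in the output above (the real file may differ):

apiVersion: v1
kind: Service
metadata:
  labels:
    db: influxdb               # label as shown in the kubectl output above
  name: rethinkdb-admin
spec:
  type: LoadBalancer           # provisions the external IP (104.197.19.120 above)
  ports:
  - port: 8080                 # web admin UI
    targetPort: 8080
  selector:
    db: rethinkdb
    role: admin                # only the admin pod receives UI traffic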

Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:

$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080

Now you can open a web browser and go to http://104.197.19.120:8080 to manage your cluster.

Why not just use the replica pods?

Because kube-proxy acts as a load balancer and sends your traffic to a different server each time. Since the web admin UI is not stateless, using it through the load-balanced service causes "Connection not open on server" errors.


BTW

  • gen-pod.sh is used to generate pod templates for my local cluster. The generated pods use nodeSelector to force k8s to schedule containers onto my designated nodes, because I need to access persistent data in host directories. Note that you need to label the node before nodeSelector can work; see this tutorial and the sketch after this list.

  • See antmanler/rethinkdb-k8s for details.
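
As an illustration of the nodeSelector approach mentioned above, a generated pod template might contain something like the snippet below. The node label key/value, host path, and image are hypothetical; the node must be labeled first, e.g. with kubectl label nodes <node-name> <key>=<value>.

apiVersion: v1
kind: Pod
metadata:
  name: rethinkdb-node-1       # hypothetical name
  labels:
    db: rethinkdb
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # hypothetical label; apply it to the node first
  containers:
  - name: rethinkdb
    image: gcr.io/google_containers/rethinkdb:1.16.0_1   # illustrative
    volumeMounts:
    - name: rethinkdb-data
      mountPath: /data
  volumes:
  - name: rethinkdb-data
    hostPath:
      path: /var/lib/rethinkdb   # hypothetical host directory with persistent data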
