This example assumes that you have a basic understanding of Kubernetes services and that you have forked the repository and [turned up a Kubernetes cluster](https://github.com/GoogleCloudPlatform/kubernetes#contents):
See the companion [Setup Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/SETUP.md) guide for some quick notes on how to get started.
*If* you are running from source, replace commands such as `kubectl` below with calls to `cluster/kubectl.sh`.
Note: This redis-master is *not* highly available. Making it highly available would be an interesting, but intricate, exercise. Redis does not actually support multi-master deployments at the time of this writing, so high availability would be somewhat tricky to implement, and might involve periodic serialization to disk, and so on.
Use (or just create) the file `examples/guestbook/redis-master-controller.json` which describes a single pod running a redis key-value server in a container:
Note that, although the redis server runs with just a single replica, we use a replication controller to enforce that exactly one pod keeps running (e.g. in the event of a node going down, the replication controller will ensure that the redis master gets restarted on a healthy node).
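To create the replication controller, a minimal sketch (assuming your working directory is the repository root and `kubectl` is configured to talk to your cluster):

```shell
# Create the redis master replication controller from its JSON definition.
kubectl create -f examples/guestbook/redis-master-controller.json

# Verify that the replication controller was created.
kubectl get replicationcontrollers
```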
You'll see all Kubernetes components, most importantly the redis master pod. It will also display the machine that the pod is running on once it gets placed (this may take up to thirty seconds):
```shell
CONTAINER ID        IMAGE                     COMMAND                CREATED              STATUS              PORTS    NAMES
0ffef9649265        dockerfile/redis:latest   "redis-server /etc/r   About a minute ago   Up About a minute            k8s_redis-master.767aef46_redis-master-controller-gb50a.default.api_4530d7b3-ae5d-11e4-bf77-42010af0d719_579ee964
```
(Note that an initial `docker pull` may take a few minutes, depending on network conditions. You can monitor the status of the download by running `journalctl -f -u docker`. Of course, you can also run `journalctl -f -u kubelet` to see what state the kubelet is in during this time.)
A Kubernetes 'service' is a named load balancer that proxies traffic to *one or more* containers. This is done using the *labels* metadata which we defined in the redis-master pod above. As mentioned, in redis there is only one master, but we nevertheless still want to create a service for it. Why? Because it gives us a deterministic way to route to the single master using an elastic IP.
Services find the containers to load balance based on pod labels.
The pod that you created in Step One has the label `name=redis-master`. The selector field of the service determines *which pods will receive the traffic* sent to the service, and the `port` and `containerPort` fields define which port the service proxy will run on.
Use the file `examples/guestbook/redis-master-service.json`:
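To create the service, a minimal sketch (again assuming the repository root as the working directory):

```shell
# Create the redis master service from its JSON definition.
kubectl create -f examples/guestbook/redis-master-service.json

# Verify that the service was created, and note its IP.
kubectl get services
```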
This will cause all pods to see the redis master apparently running on <ip>:6379. The traffic flow from slaves to the master can be described in two steps:
- A *redis slave* will connect to *port* on the *redis master service*.
- Traffic will be forwarded from the service *port* (on the service node) to the *containerPort* on the pod that the service listens to.
Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod. The replication controller will automatically launch new pods if the number of replicas falls (this is quite easy, and fun, to test: just kill the docker processes for your pods at will and watch them come back online on a new node shortly thereafter).
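A minimal sketch for creating the slave controller (the file name `examples/guestbook/redis-slave-controller.json` follows the naming pattern of the other files in this example and is an assumption; adjust it to match your checkout):

```shell
# Create the redis slave replication controller.
kubectl create -f examples/guestbook/redis-slave-controller.json

# The configured number of slave replicas should begin starting up.
kubectl get replicationcontrollers
```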
The redis slave configures itself by looking for the Kubernetes service environment variables in the container environment. In particular, the redis slave is started with the following command:
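The exact invocation lives in the slave image's startup script; a sketch of what it looks like, assuming it uses redis-server's standard `--slaveof` option with the service-provided variables:

```shell
# Start redis as a read slave of the master advertised by the Kubernetes
# service environment variables (names follow the service naming conventions).
redis-server --slaveof ${REDIS_MASTER_SERVICE_HOST} ${REDIS_MASTER_SERVICE_PORT}
```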
You might be curious about where the *REDIS_MASTER_SERVICE_HOST* value comes from. It is provided to this container when it is launched, via the Kubernetes service machinery, which creates environment variables for each service (there is a well-defined syntax for how service names get transformed into environment variable names in the documentation linked above).
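If you want to see these variables for yourself, one way (assuming a recent Docker on the node and a running slave container ID from `docker ps`) is:

```shell
# Inspect the service-injected environment inside a running container.
docker exec <container-id> env | grep REDIS_MASTER
```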
Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to web app clients.
The service specification for the slaves is in `examples/guestbook/redis-slave-service.json`:
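As before, a minimal sketch for creating it:

```shell
# Create the redis slave service.
kubectl create -f examples/guestbook/redis-slave-service.json
```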
This time the selector for the service is `name=redis-slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on your service itself, as we've done here, to make it easy to locate it with the `kubectl get services -l "label=value"` command.
This is a simple PHP server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple AJAX interface, and serves an angular-based UX. Like the redis read slaves, it is a replicated service, instantiated by a replication controller.
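A minimal sketch for bringing up the frontend (the file name `examples/guestbook/frontend-controller.json` is assumed to follow the repository's naming pattern):

```shell
# Create the replicated PHP frontend.
kubectl create -f examples/guestbook/frontend-controller.json
```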
Once that's up (it may take ten to thirty seconds to create the pods) you can list the pods in the cluster, to verify that the master, slaves and frontends are running:
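For example:

```shell
# List all pods; the master, slaves, and frontends should reach Running state.
kubectl get pods
```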
In GCE, you also may need to open the firewall for port 8000 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
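A sketch of such a rule (the rule name `kubernetes-minion-8000` is just an illustrative choice):

```shell
# Allow inbound TCP traffic on port 8000 to all instances tagged kubernetes-minion.
gcloud compute firewall-rules create kubernetes-minion-8000 \
  --allow tcp:8000 \
  --target-tags kubernetes-minion
```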
In other environments, you can get the service IP by looking at the output of `kubectl get pods,services`, and modify your firewall using standard tools and services (firewalld, iptables, selinux) with which you are already familiar.
And of course, finally, if you are running Kubernetes locally, you can just visit http://localhost:8000.
If you are in a live Kubernetes cluster, you can just kill the pods by deleting the replication controllers and services, using a script such as the one sketched below (obviously, read through it and make sure you understand it before running it blindly, as it will delete several pods automatically for you).
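A hedged cleanup sketch (the controller and service names below are assumptions; substitute the names reported by `kubectl get replicationcontrollers` and `kubectl get services` in your cluster):

```shell
# Delete the replication controllers. Depending on your kubectl version, you
# may need to scale them to zero replicas first so their pods are torn down.
kubectl delete rc redis-master-controller redis-slave-controller frontend-controller

# Delete the services.
kubectl delete services redis-master redis-slave frontend
```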
The Guestbook example can fail for a variety of reasons, which makes it an effective test. Let's test the web app simply using *curl*, so we can see what's going on.
Before we proceed, what are some setup idiosyncrasies that might cause the app to fail (or appear to fail, when you merely have a *cold start* issue)?
- Running Kubernetes from HEAD, in which case there may be subtle bugs in the Kubernetes core component interactions.
- Running Kubernetes with security turned on, in such a way that containers are restricted from doing their job.
- Starting Kubernetes and not allowing enough time for all services and pods to come online before testing.
To post a message, use a curl call like the one sketched below (note that this call *overwrites* the messages field, so it will be reset to just one entry).
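A hedged example (assuming the frontend's AJAX handler lives at `index.php` and accepts `cmd`, `key`, and `value` query parameters):

```shell
# Write a single entry to the guestbook through the frontend.
curl "localhost:8000/index.php?cmd=set&key=messages&value=hello_from_curl"
```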
1) *The web frontend isn't up yet*: When you go to localhost:8000, you might not see the page at all. Testing it with curl, you may see something like this:
```shell
==> default: curl: (56) Recv failure: Connection reset by peer
```
This means the web frontend isn't up yet. Wait a while, possibly about two minutes or more, depending on your setup. Also, run a *watch* on `docker ps` to see whether containers are cycling on and off or simply not starting:
```shell
$> watch -n 1 docker ps
```
If you run this on a node to which the frontend is assigned, you will eventually see the frontend container come up. At that point, this basic error will likely go away.
2) *When Redis can't connect to master*:
```shell
<b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'php_network_getaddresses: getaddrinfo failed: Name or service not known [tcp://:0]' in /vendor/predis/predis/lib/Predis/Connection/AbstractConnection.php:141
```
The fix: Make sure that environment variables are being set correctly. In particular, the PHP containers need to be started with the environment variables for the redis master (i.e. REDIS_MASTER_SERVICE_HOST and REDIS_MASTER_PORT).
3) *Temporarily, while waiting for the app to come up*, you might see a few of these:
```shell
==> default: <br/>
==> default: <b>Fatal error</b>: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Error while reading line from the server [tcp://10.254.168.69:6379]' in /vendor/predis/predis/lib/Predis/Connection/AbstractConnection.php:141
```
The fix: just go get some coffee. When you come back, there is a good chance the service endpoint will be up. If not, make sure it's running and that the redis master / slave docker logs show something like this:
```shell
$> docker logs 26af6bd5ac12
...
[9] 20 Feb 23:47:51.015 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[9] 20 Feb 23:47:51.015 * The server is now ready to accept connections on port 6379
[9] 20 Feb 23:47:52.005 * Connecting to MASTER 10.254.168.69:6379
[9] 20 Feb 23:47:52.005 * MASTER <-> SLAVE sync started
```
4) *When security issues cause redis writes to fail*, you may have to run *docker logs* on the redis containers:
```shell
==> default: <b>Fatal error</b>: Uncaught exception 'Predis\ServerException' with message 'MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.' in /vendor/predis/predis/lib/Predis/Client.php:282
```
The fix is to set up SELinux properly (don't just turn it off). Remember that you can also rebuild this entire app from scratch, using the dockerfiles, and modify it while redeploying. Reach out on the mailing list if you need help doing so!