This example shows how to build a simple multi-tier web application using Kubernetes and Docker. It consists of a web frontend, a redis master for storage, and a replicated set of redis slaves.
This example assumes that you have a working cluster (see the [Getting Started Guides](../../docs/getting-started-guides)).
A Google Container Engine specific version of this tutorial can be found at [https://cloud.google.com/container-engine/docs/tutorials/guestbook](https://cloud.google.com/container-engine/docs/tutorials/guestbook).
Use the file `examples/guestbook-go/redis-master-controller.json` to create a [replication controller](../../docs/replication-controller.md) which manages a single [pod](../../docs/pods.md). The pod runs a redis key-value server in a container. Using a replication controller is the preferred way to launch long-running pods, even for a single replica, so that the pod benefits from the self-healing mechanism in Kubernetes.
Create the redis master replication controller in your Kubernetes cluster using the `kubectl` CLI and the file that specifies the replication controller [examples/guestbook-go/redis-master-controller.json](redis-master-controller.json):
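For example, assuming you run the command from the root of the repository checkout:

```console
$ kubectl create -f examples/guestbook-go/redis-master-controller.json
```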
A Kubernetes '[service](../../docs/services.md)' is a named load balancer that proxies traffic to one or more containers. The services in a Kubernetes cluster are discoverable inside other containers via environment variables or DNS. Services find the containers to load balance based on pod labels.
The pod that you created in Step One has the labels `app=redis` and `role=master`. The selector field of the service determines which pods will receive the traffic sent to the service. Use the file [examples/guestbook-go/redis-master-service.json](redis-master-service.json) to create the service with the `kubectl` CLI:
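Again assuming the repository layout above, the service can be created with:

```console
$ kubectl create -f examples/guestbook-go/redis-master-service.json
```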
This will cause all new pods to see the redis master as running on `$REDIS_MASTER_SERVICE_HOST` at port 6379, or as `redis-master:6379` via DNS. Once the service is created, the service proxy on each node is configured to listen on the specified port (in this case 6379) and forward traffic to the master.
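As an illustration only (not part of the example's manifests), a container that happens to have the `redis-cli` client installed could reach the master through the service by its DNS name:

```console
# Connect to the master via the redis-master service (illustrative only)
$ redis-cli -h redis-master -p 6379 ping
```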
### Step Three: Turn up the replicated slave pods.
Although the redis master is a single pod, the redis read slaves are a 'replicated' pod. In Kubernetes, a replication controller is responsible for managing multiple instances of a replicated pod.
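The slave replication controller is created the same way as the master's. The file name below is an assumption based on the naming convention of the other specs in this example, not something given in this text:

```console
# File name assumed to follow the same convention as the other specs
$ kubectl create -f examples/guestbook-go/redis-slave-controller.json
```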
The redis slave configures itself by looking for the redis-master service name:port pair. In particular, the redis slave is started with the following command:
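A sketch of such a startup command, assuming the slave resolves the master through the `redis-master` DNS name exposed by the service:

```console
# Start redis as a replica of the master reachable at redis-master:6379
redis-server --slaveof redis-master 6379
```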
Just like the master, we want to have a service to proxy connections to the read slaves. In this case, in addition to discovery, the slave service provides transparent load balancing to clients. The service specification for the slaves
is in [examples/guestbook-go/redis-slave-service.json](redis-slave-service.json)
This time the selector for the service is `app=redis,role=slave`, because that identifies the pods running redis slaves. It may also be helpful to set labels on the service itself, as we've done here, to make it easy to locate later.
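With the specification in place, create the slave service just like the others:

```console
$ kubectl create -f examples/guestbook-go/redis-slave-service.json
```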
This is a simple Go net/http ([negroni](https://github.com/codegangsta/negroni) based) server that is configured to talk to either the slave or master services depending on whether the request is a read or a write. It exposes a simple JSON interface and serves a jQuery-Ajax based UX. Like the redis read slaves, it is a replicated service instantiated by a replication controller.
The pod is described in the file [examples/guestbook-go/guestbook-controller.json](guestbook-controller.json). Using this file, you can turn up your guestbook with:
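For example, from the repository root:

```console
$ kubectl create -f examples/guestbook-go/guestbook-controller.json
```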
Once that's up (it may take ten to thirty seconds to create the pods), you can list the pods in the cluster to verify that the master, slaves, and guestbook frontends are running:
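For example:

```console
$ kubectl get pods
```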
Just like the others, you want a service to group your guestbook pods. The service specification for the guestbook is in [examples/guestbook-go/guestbook-service.json](guestbook-service.json). There's a twist this time: because we want it to be externally visible, we set `"type": "LoadBalancer"` for the service.
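Create it the same way as the other services; once your cloud provider provisions the load balancer, the service receives an external IP address:

```console
$ kubectl create -f examples/guestbook-go/guestbook-service.json
```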
**NOTE:** You may need to open the firewall for port 3000 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
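A command along these lines should work on Google Compute Engine (the rule name is arbitrary, and your nodes may be tagged differently):

```console
# Allow inbound TCP traffic on port 3000 to nodes tagged kubernetes-minion
$ gcloud compute firewall-rules create guestbook-3000 --allow=tcp:3000 --target-tags=kubernetes-minion
```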
For Google Container Engine clusters the nodes are tagged differently. See the [Google Container Engine Guestbook example](https://cloud.google.com/container-engine/docs/tutorials/guestbook).
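To find the external IP address once the load balancer has been provisioned, list the service (the service name `guestbook` is assumed from the service specification):

```console
# Service name assumed from guestbook-service.json
$ kubectl get services guestbook
```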
When you visit the external IP address of the guestbook service in a browser you should see something like this:
You should delete the service, which will remove any associated resources that were created, e.g. load balancers, forwarding rules, and target pools. All of the resources (replication controllers and services) can be deleted with a single command:
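One way to do this, assuming all of the specs live in `examples/guestbook-go/`, is to delete everything defined in that directory:

```console
# Deletes every resource defined by the spec files in the directory
$ kubectl delete -f examples/guestbook-go/
```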