Merge pull request #8750 from pmorie/example-links

Make references to files in examples links
Saad Ali 2015-05-26 14:08:37 -07:00
commit 2a15bf757d
11 changed files with 32 additions and 30 deletions

View File

@@ -31,7 +31,7 @@ You should already have turned up a Kubernetes cluster. To get the most of this
The Celery task queue will need to communicate with the RabbitMQ broker. RabbitMQ will eventually appear on a separate pod, but since pods are ephemeral we need a service that can transparently route requests to RabbitMQ.
-Use the file `examples/celery-rabbitmq/rabbitmq-service.yaml`:
+Use the file [`examples/celery-rabbitmq/rabbitmq-service.yaml`](rabbitmq-service.yaml):
```yaml
apiVersion: v1beta3
@@ -63,7 +63,7 @@ This service allows other pods to connect to the rabbitmq. To them, it will be s
## Step 2: Fire up RabbitMQ
-A RabbitMQ broker can be turned up using the file `examples/celery-rabbitmq/rabbitmq-controller.yaml`:
+A RabbitMQ broker can be turned up using the file [`examples/celery-rabbitmq/rabbitmq-controller.yaml`](rabbitmq-controller.yaml):
```yaml
apiVersion: v1beta3

View File

@@ -39,7 +39,7 @@ $ cluster/kubectl.sh config set-context prod --namespace=production --cluster=${
### Step Two: Create backend replication controller in each namespace
-Use the file `examples/cluster-dns/dns-backend-rc.yaml` to create a backend server replication controller in each namespace.
+Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server replication controller in each namespace.
```shell
$ cluster/kubectl.sh config use-context dev
@@ -66,7 +66,8 @@ dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
### Step Three: Create backend service
-Use the file `examples/cluster-dns/dns-backend-service.yaml` to create a service for the backend server.
+Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-service.yaml) to create
+a service for the backend server.
```shell
$ cluster/kubectl.sh config use-context dev
@@ -93,7 +94,7 @@ dns-backend <none> name=dns-backend 10.0.35.246 8000/TCP
### Step Four: Create client pod in one namespace
-Use the file `examples/cluster-dns/dns-frontend-pod.yaml` to create a client pod in the dev namespace. The client pod will make a connection to the backend and exit. Specifically, it tries to connect to the address `http://dns-backend.development.kubernetes.local:8000`.
+Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client pod in the dev namespace. The client pod will make a connection to the backend and exit. Specifically, it tries to connect to the address `http://dns-backend.development.kubernetes.local:8000`.
```shell
$ cluster/kubectl.sh config use-context dev

View File

@@ -9,7 +9,8 @@ The example assumes that you have already set up a Glusterfs server cluster and
Set up a Glusterfs server cluster; install the Glusterfs client package on the Kubernetes nodes. ([Guide](https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers))
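As a rough sketch of the client-package step (assuming Debian/Ubuntu nodes, as in the linked guide; package names differ on other distributions):

```shell
# Install the GlusterFS client on each Kubernetes node so it can mount gluster volumes.
$ sudo apt-get update
$ sudo apt-get install -y glusterfs-client
```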
### Create endpoints
-Here is a snippet of glusterfs-endpoints.json,
+Here is a snippet of [glusterfs-endpoints.json](glusterfs-endpoints.json),
```
"addresses": [
{
@@ -40,7 +41,7 @@ glusterfs-cluster 10.240.106.152:1,10.240.79.157:1
### Create a POD
-The following *volume* spec in glusterfs-pod.json illustrates a sample configuration.
+The following *volume* spec in [glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration.
```js
{

View File

@@ -48,7 +48,7 @@ One pattern this organization could follow is to partition the Kubernetes cluste
Let's create two new namespaces to hold our work.
-Use the file `examples/kubernetes-namespaces/namespace-dev.json` which describes a development namespace:
+Use the file [`examples/kubernetes-namespaces/namespace-dev.json`](namespace-dev.json) which describes a development namespace:
```js
{

View File

@@ -20,7 +20,7 @@ echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
so when Kubelet executes the health check 15 seconds (defined by initialDelaySeconds) after the container started, the check would fail.
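One way to observe this is to describe the pod once the initial delay has passed and check its events and restart count; the pod name below is a placeholder for whatever the liveness manifest defines.

```shell
# Probe failures and the resulting container restarts show up in the pod's events.
$ kubectl describe pods <liveness-pod-name>
```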
-The [http-liveness.yaml](./http-liveness.yaml) demonstrates the HTTP check.
+The [http-liveness.yaml](http-liveness.yaml) demonstrates the HTTP check.
```
livenessProbe:
httpGet:

View File

@@ -65,7 +65,7 @@ Running
-------
Now that you have containerized your Meteor app it's time to set up
-your cluster. Edit `meteor-controller.json` and make sure the `image`
+your cluster. Edit [`meteor-controller.json`](meteor-controller.json) and make sure the `image`
points to the container you just pushed to the Docker Hub or GCR.
As you may know, Meteor uses MongoDB, and we'll need to provide it a
@@ -96,7 +96,7 @@ kubectl create -f meteor-controller.json
kubectl create -f meteor-service.json
```
-Note that `meteor-service.json` creates an external load balancer, so
+Note that [`meteor-service.json`](meteor-service.json) creates an external load balancer, so
your app should be available through the IP of that load balancer once
the Meteor pods are started. You can find the IP of your load balancer
by running:
@@ -127,20 +127,20 @@ ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/loca
Here we can see the MongoDB host and port information being passed
into the Meteor app. The `MONGO_SERVICE...` environment variables are
set by Kubernetes, and point to the service named `mongo` specified in
-`mongo-service.json`. See the [environment
-docuementation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/container-environment.md)
+[`mongo-service.json`](mongo-service.json). See the [environment
+documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/container-environment.md)
for more details.
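For illustration, inside the Meteor container these variables would look something like the following; the address and port shown are placeholders, not values taken from this example.

```shell
# Run inside the Meteor container; the values are hypothetical.
$ env | grep MONGO_SERVICE
MONGO_SERVICE_HOST=10.0.0.42
MONGO_SERVICE_PORT=27017
```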
As you may know, Meteor uses long-lasting connections and requires
_sticky sessions_. With Kubernetes you can scale out your app easily
-with session affinity. The `meteor-service.json` file contains
+with session affinity. The [`meteor-service.json`](meteor-service.json) file contains
`"sessionAffinity": "ClientIP"`, which provides this for us. See the
[service
documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md#portals-and-service-proxies)
for more information.
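To confirm the setting on a running cluster, you could dump the service object and look for the affinity field; the service name is whatever `meteor-service.json` defines, shown here as a placeholder.

```shell
# Replace <meteor-service> with the service name defined in meteor-service.json.
$ kubectl get services <meteor-service> -o yaml | grep -i sessionaffinity
```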
As mentioned above, the mongo container uses a volume which is mapped
-to a persistent disk by Kubernetes. In `mongo-pod.json` the container
+to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container
section specifies the volume:
```
"volumeMounts": [

View File

@@ -63,7 +63,7 @@ Now that the persistent disks are defined, the Kubernetes pods can be launched.
### Start the Mysql pod
-First, **edit `mysql.yaml`**, the mysql pod definition, to use a database password that you specify.
+First, **edit [`mysql.yaml`](mysql.yaml)**, the mysql pod definition, to use a database password that you specify.
`mysql.yaml` looks like this:
```yaml
@@ -133,7 +133,7 @@ We will specifically name the service `mysql`. This will let us leverage the su
So if we label our Kubernetes mysql service `mysql`, the wordpress pod will be able to use the Docker-links-compatible environment variables, defined by Kubernetes, to connect to the database.
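As an illustration, a pod in the same cluster would then see variables along these lines for a service named `mysql`; the IP is a placeholder and the port assumes the standard MySQL port 3306.

```shell
# Run inside the wordpress container; the values are hypothetical.
$ env | grep ^MYSQL_
MYSQL_SERVICE_HOST=10.0.0.83
MYSQL_SERVICE_PORT=3306
MYSQL_PORT_3306_TCP=tcp://10.0.0.83:3306
```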
-The `mysql-service.yaml` file looks like this:
+The [`mysql-service.yaml`](mysql-service.yaml) file looks like this:
```yaml
apiVersion: v1beta3
@@ -167,7 +167,7 @@ $ <kubernetes>/cluster/kubectl.sh get services
## Start the WordPress Pod and Service
Once the mysql service is up, start the wordpress pod, specified in
-`wordpress.yaml`. Before you start it, **edit `wordpress.yaml`** and **set the database password to be the same as you used in `mysql.yaml`**.
+[`wordpress.yaml`](wordpress.yaml). Before you start it, **edit `wordpress.yaml`** and **set the database password to be the same as you used in `mysql.yaml`**.
Note that this config file also defines a volume, this one using the `wordpress-disk` persistent disk that you created.
```yaml
@@ -216,7 +216,7 @@ $ <kubernetes>/cluster/kubectl.sh get pods
### Start the WordPress service
-Once the wordpress pod is running, start its service, specified by `wordpress-service.yaml`.
+Once the wordpress pod is running, start its service, specified by [`wordpress-service.yaml`](wordpress-service.yaml).
The service config file looks like this:

View File

@@ -21,7 +21,7 @@ In the remaining part of this example we will assume that your instance is named
### Step Two: Turn up the phabricator
-To start the Phabricator server, use the file `examples/phabricator/phabricator-controller.json`, which describes a replication controller with a single pod running an Apache server with Phabricator PHP source:
+To start the Phabricator server, use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json), which describes a replication controller with a single pod running an Apache server with Phabricator PHP source:
```js
{
@@ -113,7 +113,7 @@ This is because the host on which this container is running is not authorized in
gcloud sql instances patch phabricator-db --authorized-networks 130.211.141.151
```
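To find the external IP that needs authorizing, one option is to list the project's instances and read the external-IP column; instance names and zones are cluster-specific.

```shell
# Shows every GCE instance in the current project together with its external IP.
$ gcloud compute instances list
```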
-To automate this process and make sure that the proper host is authorized even if the pod is rescheduled to a new machine, we need a separate pod that periodically lists pods and authorizes hosts. Use the file `examples/phabricator/authenticator-controller.json`:
+To automate this process and make sure that the proper host is authorized even if the pod is rescheduled to a new machine, we need a separate pod that periodically lists pods and authorizes hosts. Use the file [`examples/phabricator/authenticator-controller.json`](authenticator-controller.json):
```js
{
@@ -169,7 +169,7 @@ NAME REGION ADDRESS STATUS
phabricator us-central1 107.178.210.6 RESERVED
```
-Use the file `examples/phabricator/phabricator-service.json`:
+Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json):
```js
{

View File

@@ -97,7 +97,7 @@ rethinkdb-admin db=influxdb db=rethinkdb,role=admin 10.0.131.19 8080
rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP
```
-We request an external load balancer in the admin-service.yaml file:
+We request an external load balancer in the [admin-service.yaml](admin-service.yaml) file:
```
createExternalLoadBalancer: true

View File

@@ -29,7 +29,7 @@ instructions for your platform.
The Master service is the master (or head) service for a Spark
cluster.
-Use the `examples/spark/spark-master.json` file to create a pod running
+Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a pod running
the Master service.
```shell
@@ -85,7 +85,7 @@ program.
The Spark workers need the Master service to be running.
-Use the `examples/spark/spark-worker-controller.json` file to create a
+Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a
ReplicationController that manages the worker pods.
```shell

View File

@@ -30,14 +30,14 @@ instructions for your platform.
ZooKeeper is a distributed coordination service that Storm uses as a
bootstrap and for state storage.
-Use the `examples/storm/zookeeper.json` file to create a pod running
+Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a pod running
the ZooKeeper service.
```shell
$ kubectl create -f examples/storm/zookeeper.json
```
-Then, use the `examples/storm/zookeeper-service.json` file to create a
+Then, use the [`examples/storm/zookeeper-service.json`](zookeeper-service.json) file to create a
logical service endpoint that Storm can use to access the ZooKeeper
pod.
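A minimal sketch of that step, following the same `kubectl create` pattern used for the ZooKeeper pod above:

```shell
# Create the ZooKeeper service and confirm it appears in the service list.
$ kubectl create -f examples/storm/zookeeper-service.json
$ kubectl get services
```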
@@ -74,14 +74,14 @@ imok
The Nimbus service is the master (or head) service for a Storm
cluster. It depends on a functional ZooKeeper service.
-Use the `examples/storm/storm-nimbus.json` file to create a pod running
+Use the [`examples/storm/storm-nimbus.json`](storm-nimbus.json) file to create a pod running
the Nimbus service.
```shell
$ kubectl create -f examples/storm/storm-nimbus.json
```
-Then, use the `examples/storm/storm-nimbus-service.json` file to
+Then, use the [`examples/storm/storm-nimbus-service.json`](storm-nimbus-service.json) file to
create a logical service endpoint that Storm workers can use to access
the Nimbus pod.
@@ -115,7 +115,7 @@ the Nimbus service.
The Storm workers need both the ZooKeeper and Nimbus services to be
running.
-Use the `examples/storm/storm-worker-controller.json` file to create a
+Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a
ReplicationController that manages the worker pods.
```shell