Replace ```shell with ```sh

pull/6/head
David Oppenheimer 2015-07-19 21:38:53 -07:00
parent 8cbe9c997a
commit dec9adfe2e
9 changed files with 68 additions and 68 deletions
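
A bulk fence rename like this one can be scripted. Below is a minimal sketch, assuming GNU grep and sed run from the repository root and that the affected files are Markdown docs; it is illustrative only, not necessarily how this commit was generated.

```sh
# Rewrite every ```shell opening fence to ```sh across tracked Markdown docs.
# Closing fences are bare ``` and are not matched, so they stay untouched.
grep -rlZ '```shell' --include='*.md' . | xargs -0 sed -i 's/```shell/```sh/g'
```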

@@ -74,7 +74,7 @@ Supported environments offer the following config flags, which are used at
 cluster turn-up to create the SkyDNS pods and configure the kubelets. For
 example, see `cluster/gce/config-default.sh`.
-```shell
+```sh
 ENABLE_CLUSTER_DNS=true
 DNS_SERVER_IP="10.0.0.10"
 DNS_DOMAIN="cluster.local"

@@ -6,7 +6,7 @@ The exec healthz server is a sidecar container meant to serve as a liveness-exec
 ### Run the healthz server directly on localhost:
-```shell
+```sh
 $ make server
 $ ./exechealthz -cmd "ls /tmp/test"
 $ curl http://localhost:8080/healthz
@@ -20,7 +20,7 @@ ok
 ### Run the healthz server in a docker container:
 The [docker daemon](https://docs.docker.com/userguide/) needs to be running on your host.
-```shell
+```sh
 $ make container PREFIX=mycontainer/test
 $ docker run -itP -p 8080:8080 mycontainer/test:0.0 -cmd "ls /tmp/test"
 $ curl http://localhost:8080/healthz
@@ -67,7 +67,7 @@ Create a pod.json that looks like:
 ```
 And run the pod on your cluster using kubectl:
-```shell
+```sh
 $ kubectl create -f pod.json
 pods/simple
 $ kubectl get pods -o wide
@@ -76,7 +76,7 @@ simple 0/1 Pending 0 3s node
 ```
 SSH into the node (note that the recommended way to access a server in a container is through a [service](../../docs/services.md), the example that follows is just to illustrate how the kubelet performs an http liveness probe):
-```shell
+```sh
 node$ kubectl get pods simple -o json | grep podIP
 "podIP": "10.1.0.2",

@@ -37,19 +37,19 @@ Create a volume in the same region as your node add your volume
 information in the pod description file aws-ebs-web.yaml then create
 the pod:
-```shell
+```sh
 $ kubectl create -f examples/aws_ebs/aws-ebs-web.yaml
 ```
 Add some data to the volume if is empty:
-```shell
+```sh
 $ echo "Hello World" >& /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/{Region}/{Volume ID}/index.html
 ```
 You should now be able to query your web server:
-```shell
+```sh
 $ curl <Pod IP address>
 $ Hello World
 ```

@@ -39,7 +39,7 @@ This is a toy example demonstrating how to use kubernetes DNS.
 This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/). Make sure DNS is enabled in your setup, see [DNS doc](../../cluster/addons/dns/).
-```shell
+```sh
 $ cd kubernetes
 $ hack/dev-build-and-up.sh
 ```
@@ -48,14 +48,14 @@ $ hack/dev-build-and-up.sh
 We'll see how cluster DNS works across multiple [namespaces](../../docs/user-guide/namespaces.md), first we need to create two namespaces:
-```shell
+```sh
 $ kubectl create -f examples/cluster-dns/namespace-dev.yaml
 $ kubectl create -f examples/cluster-dns/namespace-prod.yaml
 ```
 Now list all namespaces:
-```shell
+```sh
 $ kubectl get namespaces
 NAME LABELS STATUS
 default <none> Active
@@ -65,7 +65,7 @@ production name=production Active
 For kubectl client to work with each namespace, we define two contexts:
-```shell
+```sh
 $ kubectl config set-context dev --namespace=development --cluster=${CLUSTER_NAME} --user=${USER_NAME}
 $ kubectl config set-context prod --namespace=production --cluster=${CLUSTER_NAME} --user=${USER_NAME}
 ```
@@ -76,14 +76,14 @@ You can view your cluster name and user name in kubernetes config at ~/.kube/con
 Use the file [`examples/cluster-dns/dns-backend-rc.yaml`](dns-backend-rc.yaml) to create a backend server [replication controller](../../docs/user-guide/replication-controller.md) in each namespace.
-```shell
+```sh
 $ kubectl config use-context dev
 $ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
 ```
 Once that's up you can list the pod in the cluster:
-```shell
+```sh
 $ kubectl get rc
 CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS
 dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
@@ -91,7 +91,7 @@ dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
 Now repeat the above commands to create a replication controller in prod namespace:
-```shell
+```sh
 $ kubectl config use-context prod
 $ kubectl create -f examples/cluster-dns/dns-backend-rc.yaml
 $ kubectl get rc
@@ -104,14 +104,14 @@ dns-backend dns-backend ddysher/dns-backend name=dns-backend 1
 Use the file [`examples/cluster-dns/dns-backend-service.yaml`](dns-backend-service.yaml) to create
 a [service](../../docs/user-guide/services.md) for the backend server.
-```shell
+```sh
 $ kubectl config use-context dev
 $ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
 ```
 Once that's up you can list the service in the cluster:
-```shell
+```sh
 $ kubectl get service dns-backend
 NAME LABELS SELECTOR IP(S) PORT(S)
 dns-backend <none> name=dns-backend 10.0.236.129 8000/TCP
@@ -119,7 +119,7 @@ dns-backend <none> name=dns-backend 10.0.236.129 8000/TCP
 Again, repeat the same process for prod namespace:
-```shell
+```sh
 $ kubectl config use-context prod
 $ kubectl create -f examples/cluster-dns/dns-backend-service.yaml
 $ kubectl get service dns-backend
@@ -131,14 +131,14 @@ dns-backend <none> name=dns-backend 10.0.35.246 8000/TCP
 Use the file [`examples/cluster-dns/dns-frontend-pod.yaml`](dns-frontend-pod.yaml) to create a client [pod](../../docs/user-guide/pods.md) in dev namespace. The client pod will make a connection to backend and exit. Specifically, it tries to connect to address `http://dns-backend.development.cluster.local:8000`.
-```shell
+```sh
 $ kubectl config use-context dev
 $ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
 ```
 Once that's up you can list the pod in the cluster:
-```shell
+```sh
 $ kubectl get pods dns-frontend
 NAME READY STATUS RESTARTS AGE
 dns-frontend 0/1 ExitCode:0 0 1m
@@ -146,7 +146,7 @@ dns-frontend 0/1 ExitCode:0 0 1m
 Wait until the pod succeeds, then we can see the output from the client pod:
-```shell
+```sh
 $ kubectl logs dns-frontend
 2015-05-07T20:13:54.147664936Z 10.0.236.129
 2015-05-07T20:13:54.147721290Z Send request to: http://dns-backend.development.cluster.local:8000
@@ -158,7 +158,7 @@ Please refer to the [source code](images/frontend/client.py) about the log. Firs
 If we switch to prod namespace with the same pod config, we'll see the same result, i.e. dns will resolve across namespace.
-```shell
+```sh
 $ kubectl config use-context prod
 $ kubectl create -f examples/cluster-dns/dns-frontend-pod.yaml
 $ kubectl logs dns-frontend

@@ -40,7 +40,7 @@ It uses an [nginx server block](http://wiki.nginx.org/ServerBlockExample) to ser
 First generate a self signed rsa key and certificate that the server can use for TLS.
-```shell
+```sh
 $ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
 ```

@@ -53,14 +53,14 @@ First, if you have not already done so:
 Authenticate with gcloud and set the gcloud default project name to point to the project you want to use for your Kubernetes cluster:
-```shell
+```sh
 gcloud auth login
 gcloud config set project <project-name>
 ```
 Next, start up a Kubernetes cluster:
-```shell
+```sh
 wget -q -O - https://get.k8s.io | bash
 ```
@@ -76,13 +76,13 @@ We will create two disks: one for the mysql pod, and one for the wordpress pod.
 First create the mysql disk.
-```shell
+```sh
 gcloud compute disks create --size=20GB --zone=$ZONE mysql-disk
 ```
 Then create the wordpress disk.
-```shell
+```sh
 gcloud compute disks create --size=20GB --zone=$ZONE wordpress-disk
 ```
@@ -133,14 +133,14 @@ spec:
 Note that we've defined a volume mount for `/var/lib/mysql`, and specified a volume that uses the persistent disk (`mysql-disk`) that you created.
 Once you've edited the file to set your database password, create the pod as follows, where `<kubernetes>` is the path to your Kubernetes installation:
-```shell
+```sh
 $ kubectl create -f examples/mysql-wordpress-pd/mysql.yaml
 ```
 It may take a short period before the new pod reaches the `Running` state.
 List all pods to see the status of this new pod and the cluster node that it is running on:
-```shell
+```sh
 $ kubectl get pods
 ```
@@ -149,7 +149,7 @@ $ kubectl get pods
 You can take a look at the logs for a pod by using `kubectl.sh log`. For example:
-```shell
+```sh
 $ kubectl logs mysql
 ```
@@ -182,13 +182,13 @@ spec:
 Start the service like this:
-```shell
+```sh
 $ kubectl create -f examples/mysql-wordpress-pd/mysql-service.yaml
 ```
 You can see what services are running via:
-```shell
+```sh
 $ kubectl get services
 ```
@@ -232,14 +232,14 @@ spec:
 Create the pod:
-```shell
+```sh
 $ kubectl create -f examples/mysql-wordpress-pd/wordpress.yaml
 ```
 And list the pods to check that the status of the new pod changes
 to `Running`. As above, this might take a minute.
-```shell
+```sh
 $ kubectl get pods
 ```
@@ -271,13 +271,13 @@ Note also that we've set the service port to 80. We'll return to that shortly.
 Start the service:
-```shell
+```sh
 $ kubectl create -f examples/mysql-wordpress-pd/wordpress-service.yaml
 ```
 and see it in the list of services:
-```shell
+```sh
 $ kubectl get services
 ```
@@ -289,7 +289,7 @@ $ kubectl get services/wpfrontend --template="{{range .status.loadBalancer.ingre
 or by listing the forwarding rules for your project:
-```shell
+```sh
 $ gcloud compute forwarding-rules list
 ```
@@ -299,7 +299,7 @@ Look for the rule called `wpfrontend`, which is what we named the wordpress serv
 To access your new installation, you first may need to open up port 80 (the port specified in the wordpress service config) in the firewall for your cluster. You can do this, e.g. via:
-```shell
+```sh
 $ gcloud compute firewall-rules create sample-http --allow tcp:80
 ```
@@ -320,7 +320,7 @@ Set up your WordPress blog and play around with it a bit. Then, take down its p
 If you are just experimenting, you can take down and bring up only the pods:
-```shell
+```sh
 $ kubectl delete -f examples/mysql-wordpress-pd/wordpress.yaml
 $ kubectl delete -f examples/mysql-wordpress-pd/mysql.yaml
 ```
@@ -331,7 +331,7 @@ If you want to shut down the entire app installation, you can delete the service
 If you are ready to turn down your Kubernetes cluster altogether, run:
-```shell
+```sh
 $ cluster/kube-down.sh
 ```

@@ -51,7 +51,7 @@ OpenShift Origin creates privileged containers when running Docker builds during
 If you are using a Salt based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you should enable the
 ability to create privileged containers via the API.
-```shell
+```sh
 $ cd kubernetes
 $ vi cluster/saltbase/pillar/privilege.sls
@@ -61,14 +61,14 @@ allow_privileged: true
 Now spin up a cluster using your preferred KUBERNETES_PROVIDER
-```shell
+```sh
 $ export KUBERNETES_PROVIDER=gce
 $ cluster/kube-up.sh
 ```
 Next, let's setup some variables, and create a local folder that will hold generated configuration files.
-```shell
+```sh
 $ export OPENSHIFT_EXAMPLE=$(pwd)/examples/openshift-origin
 $ export OPENSHIFT_CONFIG=${OPENSHIFT_EXAMPLE}/config
 $ mkdir ${OPENSHIFT_CONFIG}
@@ -94,7 +94,7 @@ An external load balancer is needed to route traffic to our OpenShift master ser
 Kubernetes cluster.
-```shell
+```sh
 $ cluster/kubectl.sh create -f $OPENSHIFT_EXAMPLE/openshift-service.yaml
 ```
@@ -107,7 +107,7 @@ build default certificates.
 Grab the public IP address of the service we previously created.
-```shell
+```sh
 $ export PUBLIC_IP=$(cluster/kubectl.sh get services openshift --template="{{ index .status.loadBalancer.ingress 0 \"ip\" }}")
 $ echo $PUBLIC_IP
 ```
@@ -116,7 +116,7 @@ Ensure you have a valid PUBLIC_IP address before continuing in the example.
 We now need to run a command on your host to generate a proper OpenShift configuration. To do this, we will volume mount the configuration directory that holds your Kubernetes kubeconfig file from the prior step.
-```shell
+```sh
 docker run --privileged -v ${OPENSHIFT_CONFIG}:/config openshift/origin start master --write-config=/config --kubeconfig='/config/kubeconfig' --master='https://localhost:8443' --public-master='https://${PUBLIC_IP}:8443'
 ```
@@ -136,13 +136,13 @@ $ sudo -E chown -R ${USER} ${OPENSHIFT_CONFIG}
 Then run the following command to collapse them into a Kubernetes secret.
-```shell
+```sh
 docker run -i -t --privileged -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin ex bundle-secret openshift-config -f /config &> ${OPENSHIFT_EXAMPLE}/secret.json
 ```
 Now, lets create the secret in your Kubernetes cluster.
-```shell
+```sh
 $ cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/secret.json
 ```
@@ -155,13 +155,13 @@ We are now ready to deploy OpenShift.
 We will deploy a pod that runs the OpenShift master. The OpenShift master will delegate to the underlying Kubernetes
 system to manage Kubernetes specific resources. For the sake of simplicity, the OpenShift master will run with an embedded etcd to hold OpenShift specific content. This demonstration will evolve in the future to show how to run etcd in a pod so that content is not destroyed if the OpenShift master fails.
-```shell
+```sh
 $ cluster/kubectl.sh create -f ${OPENSHIFT_EXAMPLE}/openshift-controller.yaml
 ```
 You should now get a pod provisioned whose name begins with openshift.
-```shell
+```sh
 $ cluster/kubectl.sh get pods | grep openshift
 $ cluster/kubectl.sh log openshift-t7147 origin
 Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/amd64/kubectl logs openshift-t7t47 origin
@@ -171,7 +171,7 @@ Running: cluster/../cluster/gce/../../cluster/../_output/dockerized/bin/linux/am
 Depending upon your cloud provider, you may need to open up an external firewall rule for tcp:8443. For GCE, you can run the following:
-```shell
+```sh
 gcloud compute --project "your-project" firewall-rules create "origin" --allow tcp:8443 --network "your-network" --source-ranges "0.0.0.0/0"
 ```
@@ -181,7 +181,7 @@ Open a browser and visit the OpenShift master public address reported in your lo
 You can use the CLI commands by running the following:
-```shell
+```sh
 $ docker run --privileged --entrypoint="/usr/bin/bash" -it -e="OPENSHIFTCONFIG=/config/admin.kubeconfig" -v ${OPENSHIFT_CONFIG}:/config openshift/origin
 $ osc config use-context public-default
 $ osc --help

@@ -48,13 +48,13 @@ Quick start
 Rethinkdb will discover peer using endpoints provided by kubernetes service,
 so first create a service so the following pod can query its endpoint
-```shell
+```sh
 $kubectl create -f examples/rethinkdb/driver-service.yaml
 ```
 check out:
-```shell
+```sh
 $kubectl get services
 NAME LABELS SELECTOR IP(S) PORT(S)
 [...]
@@ -65,7 +65,7 @@ rethinkdb-driver db=influxdb db=rethinkdb 10.0.27.114 28015/TCP
 start fist server in cluster
-```shell
+```sh
 $kubectl create -f examples/rethinkdb/rc.yaml
 ```
@@ -73,7 +73,7 @@ Actually, you can start servers as many as you want at one time, just modify the
 check out again:
-```shell
+```sh
 $kubectl get pods
 NAME READY REASON RESTARTS AGE
 [...]
@@ -91,7 +91,7 @@ Scale
 You can scale up you cluster using `kubectl scale`, and new pod will join to exsits cluster automatically, for example
-```shell
+```sh
 $kubectl scale rc rethinkdb-rc --replicas=3
 scaled
@@ -108,14 +108,14 @@ Admin
 You need a separate pod (labeled as role:admin) to access Web Admin UI
-```shell
+```sh
 kubectl create -f examples/rethinkdb/admin-pod.yaml
 kubectl create -f examples/rethinkdb/admin-service.yaml
 ```
 find the service
-```shell
+```sh
 $kubectl get se
 NAME LABELS SELECTOR IP(S) PORT(S)
 [...]

@@ -66,7 +66,7 @@ bootstrap and for state storage.
 Use the [`examples/storm/zookeeper.json`](zookeeper.json) file to create a [pod](../../docs/user-guide/pods.md) running
 the ZooKeeper service.
-```shell
+```sh
 $ kubectl create -f examples/storm/zookeeper.json
 ```
@@ -74,7 +74,7 @@ Then, use the [`examples/storm/zookeeper-service.json`](zookeeper-service.json)
 logical service endpoint that Storm can use to access the ZooKeeper
 pod.
-```shell
+```sh
 $ kubectl create -f examples/storm/zookeeper-service.json
 ```
@@ -83,7 +83,7 @@ before proceeding.
 ### Check to see if ZooKeeper is running
-```shell
+```sh
 $ kubectl get pods
 NAME READY STATUS RESTARTS AGE
 zookeeper 1/1 Running 0 43s
@@ -91,7 +91,7 @@ zookeeper 1/1 Running 0 43s
 ### Check to see if ZooKeeper is accessible
-```shell
+```sh
 $ kubectl get services
 NAME LABELS SELECTOR IP(S) PORT(S)
 kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.2 443
@@ -109,7 +109,7 @@ cluster. It depends on a functional ZooKeeper service.
 Use the [`examples/storm/storm-nimbus.json`](storm-nimbus.json) file to create a pod running
 the Nimbus service.
-```shell
+```sh
 $ kubectl create -f examples/storm/storm-nimbus.json
 ```
@@ -117,7 +117,7 @@ Then, use the [`examples/storm/storm-nimbus-service.json`](storm-nimbus-service.
 create a logical service endpoint that Storm workers can use to access
 the Nimbus pod.
-```shell
+```sh
 $ kubectl create -f examples/storm/storm-nimbus-service.json
 ```
@@ -125,7 +125,7 @@ Ensure that the Nimbus service is running and functional.
 ### Check to see if Nimbus is running and accessible
-```shell
+```sh
 $ kubectl get services
 NAME LABELS SELECTOR IP(S) PORT(S)
 kubernetes component=apiserver,provider=kubernetes <none> 10.254.0.2 443
@@ -149,7 +149,7 @@ running.
 Use the [`examples/storm/storm-worker-controller.json`](storm-worker-controller.json) file to create a
 [replication controller](../../docs/user-guide/replication-controller.md) that manages the worker pods.
-```shell
+```sh
 $ kubectl create -f examples/storm/storm-worker-controller.json
 ```
@@ -158,7 +158,7 @@ $ kubectl create -f examples/storm/storm-worker-controller.json
 One way to check on the workers is to get information from the
 ZooKeeper service about how many clients it has.
-```shell
+```sh
 $ echo stat | nc 10.254.139.141 2181; echo
 Zookeeper version: 3.4.6--1, built on 10/23/2014 14:18 GMT
 Clients: