Merge pull request #11639 from timstclair/docs

Detailed walkthrough docs cleanup
Vish Kannan 2015-07-22 13:42:59 -07:00
commit 03efab5675
4 changed files with 16 additions and 16 deletions


@@ -52,7 +52,7 @@ Documentation for other releases can be found at
In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](quick-start.md), Kubernetes supports declarative configuration. Oftentimes, configuration files are preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed, which is especially important for more complex configurations, producing a more robust, reliable, and archival system.
In the declarative style, all configuration is stored in YAML or JSON configuration files, using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently “v1”), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
In the declarative style, all configuration is stored in YAML or JSON configuration files using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently “v1”), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
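A quick way to see the constructed path is to run a `kubectl` command with verbose logging enabled; a sketch, assuming the `hello-world` pod from the next section exists (`-v=8` logs the underlying API requests):

```console
$ kubectl get pod hello-world -v=8
# among the verbose output you should see the request path, e.g.
# GET /api/v1/namespaces/default/pods/hello-world
```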
## Launching a container using a configuration file
@@ -154,7 +154,7 @@ hello-world 0/1 Pending 0 0s
Initially, a newly created pod is unscheduled -- no node has been selected to run it. Scheduling happens after creation, but is fast, so you normally shouldn't see pods in an unscheduled state unless there's a problem.
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't be pulled already. After a few seconds, you should see the container running:
After the pod has been scheduled, the image may need to be pulled to the node on which it was scheduled, if it hadn't been pulled already. After a few seconds, you should see the container running:
```console
$ kubectl get pods
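# A handy companion check (a suggestion, not part of the original output):
# `describe` shows the pod's events, including scheduling and image pulls.
$ kubectl describe pods hello-world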


@@ -92,7 +92,7 @@ my-nginx-6isf4 1/1 Running 0 2h e2e-test-beeps-minion-
my-nginx-t26zt 1/1 Running 0 2h e2e-test-beeps-minion-93ly
```
Check your pods' ips:
Check your pods' IPs:
```console
$ kubectl get pods -l app=nginx -o json | grep podIP
@@ -100,13 +100,13 @@ $ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both ips. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using ip. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node all using the same containerPort and access them from any other pod or node in your cluster using IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
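For example (a sketch; substitute a real node address and one of the pod IPs printed above):

```console
$ ssh <node-address>
node $ curl http://10.245.0.14
```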
You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you're curious.
## Creating a Service
So we have pods running nginx in a flat, cluster wide, address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the replication controller will create new ones, with different ips. This is the problem a Service solves.
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the replication controller will create new ones, with different IPs. This is the problem a Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.
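For the nginx pods above, such a Service might be defined along these lines (a sketch consistent with the `nginxsvc` output shown below, not necessarily the walkthrough's exact file):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
```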
@@ -137,7 +137,7 @@ NAME LABELS SELECTOR IP(S) PORT(S)
nginxsvc app=nginx app=nginx 10.0.116.146 80/TCP
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the ips are the same as the pods created in the first step:
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector will automatically get added to the endpoints. Check the endpoints, and note that the IPs are the same as the pods created in the first step:
```console
$ kubectl describe svc nginxsvc
@@ -157,7 +157,7 @@ NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service ip is completely virtual, it never hits the wire, if you're curious about how this works you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
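Concretely, the proxy running on each node forwards traffic for the virtual IP to one of the member pods, so a minimal check from any node might look like this (using the clusterIP assigned above):

```console
node $ curl http://10.0.116.146
```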
## Accessing the Service
@@ -200,7 +200,7 @@ kube-dns <none> k8s-app=kube-dns 10.0.0.10 53/UDP
53/TCP
```
If it isn't running, you can [enable it](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long lived ip (nginxsvc), and a dns server that has assigned a name to that ip (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
If it isn't running, you can [enable it](http://releases.k8s.io/HEAD/cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section will assume you have a Service with a long-lived IP (nginxsvc), and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
@@ -326,7 +326,7 @@ node $ curl -k https://10.1.0.80
<h1>Welcome to nginx!</h1>
```
Note how we supplied the -k parameter to curl in the last step, this is because we don't know anything about the pods running nginx at certificate generation time,
Note how we supplied the `-k` parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time,
so we have to tell curl to ignore the CName mismatch. By creating a Service we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup.
Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):
@@ -372,7 +372,7 @@ $ kubectl exec curlpod -- curl https://nginxsvc --cacert /etc/nginx/ssl/nginx.cr
## Exposing the Service
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public ip.
For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used `NodePort`, so your nginx https replica is ready to serve traffic on the internet if your node has a public IP.
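Alternatively, on cloud providers that support external load balancers, setting the Service's `type` to `LoadBalancer` provisions one automatically; a sketch of the relevant spec fragment (assuming your provider supports it):

```yaml
spec:
  type: LoadBalancer   # the default type is ClusterIP; the Service above added a NodePort
```

The `nodePort` allocated to the existing Service can be inspected in its spec: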
```console
$ kubectl get svc nginxsvc -o json | grep -i nodeport -C 5


@@ -45,11 +45,11 @@ Documentation for other releases can be found at
<!-- END MUNGE: GENERATED_TOC -->
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start.md) and how to configure and launch single-run containers using pods (configuring-containers.md). Here, you'll use the configuration-based approach to deploy a continuously running, replicated application.
You previously read about how to quickly deploy a simple replicated application using [`kubectl run`](quick-start.md) and how to configure and launch single-run containers using pods ([Configuring containers](configuring-containers.md)). Here you'll use the configuration-based approach to deploy a continuously running, replicated application.
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods.md)) using [*Replication Controllers*](replication-controller.md).
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods.md)) using [*Replication Controllers*](replication-controller.md).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
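The configuration file used below might look something like this sketch (names and values assumed from the surrounding walkthrough; the real file may differ):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```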
@@ -86,7 +86,7 @@ $ kubectl create -f ./nginx-rc.yaml
replicationcontrollers/my-nginx
```
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
Unlike in the case where you directly create pods, a replication controller replaces pods that are deleted or terminated for any reason, such as in the case of node failure. For this reason, we recommend that you use a replication controller for a continuously running application even if your application requires only a single pod, in which case you can omit `replicas` and it will default to a single replica.
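To see this self-healing in action, you can delete one of the pods and watch the controller replace it; a sketch (substitute a pod name and label from your own cluster):

```console
$ kubectl delete pods my-nginx-6isf4
$ kubectl get pods -l app=nginx
# a replacement pod appears shortly, with a new name and IP
```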
## Viewing replication controller status


@@ -77,7 +77,7 @@ NAME LABELS SELECTOR IP(S) PORT(S)
nginx-http run=nginx-app run=nginx-app 80/TCP
```
With kubectl, we create a [replication controller](replication-controller.md) which will make sure that N pods are running nginx (where in is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services.md) with a selector that matches the replication controller's selector. See the [quick-start.md](quick-start.md) guide for more information.
With kubectl, we create a [replication controller](replication-controller.md) which will make sure that N pods are running nginx (where N is the number of replicas stated in the spec, which defaults to 1). We also create a [service](services.md) with a selector that matches the replication controller's selector. See the [Quick start](quick-start.md) for more information.
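The pair of commands that produce output like the above might look something like this (a sketch; the names and flags are assumed from the surrounding output):

```console
$ kubectl run nginx-app --image=nginx --port=80
$ kubectl expose rc nginx-app --port=80 --name=nginx-http
```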
#### docker ps
@@ -163,7 +163,7 @@ $ kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
```
Now's a good time to mention slight difference between pods and containers; by default pods will not terminate if their process's exit. Instead it will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated but for Kubernetes, each invokation is separate. To see the output from a prevoius run in Kubernetes, do this:
Now's a good time to mention a slight difference between pods and containers; by default pods will not terminate if their processes exit. Instead, the pod will restart the process. This is similar to the docker run option `--restart=always` with one major difference. In docker, the output for each invocation of the process is concatenated, but for Kubernetes, each invocation is separate. To see the output from a previous run in Kubernetes, do this:
```console
$ kubectl logs --previous nginx-app-zibvs
@@ -209,7 +209,7 @@ Notice that we don't delete the pod directly. With kubectl we want to delete the
#### docker login
There is no direct analog of 'docker login' in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images.md#using-a-private-registry).
There is no direct analog of `docker login` in kubectl. If you are interested in using Kubernetes with a private registry, see [Using a Private Registry](images.md#using-a-private-registry).
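The usual pattern, per the linked document, is to store registry credentials in a Secret and reference it from the pod spec via `imagePullSecrets`; a sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.example.com/myorg/myapp:1.0   # hypothetical private image
  imagePullSecrets:
  - name: myregistrykey   # a pre-created Secret holding the registry credentials
```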
#### docker version