Ensure all docs and examples in user guide are reachable

pull/6/head
Janet Kuo 2015-07-15 17:28:59 -07:00
parent 55e9356bf3
commit b0c68c4b81
30 changed files with 77 additions and 47 deletions

View File

@@ -33,17 +33,25 @@ While the concepts and architecture in Kubernetes represent years of experience

Kubernetes works with the following concepts:

[**Cluster**](docs/admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications. Kubernetes can run anywhere! See the [Getting Started Guides](docs/getting-started-guides) for instructions for a variety of services.

[**Node**](docs/admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.

[**Pod**](docs/user-guide/pods.md)
: Pods are a colocated group of application containers with shared volumes. They're the smallest deployable units that can be created, scheduled, and managed with Kubernetes. Pods can be created individually, but it's recommended that you use a replication controller even if creating a single pod.

[**Replication controller**](docs/user-guide/replication-controller.md)
: Replication controllers manage the lifecycle of pods. They ensure that a specified number of pods are running
at any given time, by creating or killing pods as required.

[**Service**](docs/user-guide/services.md)
: Services provide a single, stable name and address for a set of pods.
They act as basic load balancers.

[**Label**](docs/user-guide/labels.md)
: Labels are used to organize and select groups of objects based on key:value pairs.

## Documentation


@@ -102,7 +102,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```ResourceQuota``` object in a ```Namespace```. If you are using ```ResourceQuota```
objects in your Kubernetes deployment, you MUST use this plug-in to enforce quota constraints.
See the [resourceQuota design doc](../design/admission_control_resource_quota.md) and the [example of Resource Quota](../user-guide/resourcequota/).

It is strongly encouraged that this plug-in is configured last in the sequence of admission control plug-ins. This is
so that quota is not prematurely incremented only for the request to be rejected later in admission control.
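For reference, a minimal ```ResourceQuota``` object of the kind this plug-in enforces might look like the sketch below (the name and all values are illustrative, not taken from this commit):

```yaml
# quota.yaml (hypothetical): apply with `kubectl create -f quota.yaml --namespace=myspace`
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"       # total CPU across all pods in the namespace
    memory: 1Gi     # total memory across all pods in the namespace
    pods: "10"      # maximum number of pods
```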
@@ -113,7 +113,7 @@ This plug-in will observe the incoming request and ensure that it does not viola
enumerated in the ```LimitRange``` object in a ```Namespace```. If you are using ```LimitRange``` objects in
your Kubernetes deployment, you MUST use this plug-in to enforce those constraints.
See the [limitRange design doc](../design/admission_control_limit_range.md) and the [example of Limit Range](../user-guide/limitrange/).
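Similarly, a minimal ```LimitRange``` object might look like this sketch (all names and values illustrative):

```yaml
# limits.yaml (hypothetical): per-pod and per-container bounds in a namespace
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Container
    default:          # applied when a container specifies no limit
      cpu: 250m
      memory: 100Mi
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 6Mi
```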
### NamespaceExists


@@ -37,6 +37,8 @@ Resource Quota is enforced in a particular namespace when there is a
`ResourceQuota` object in that namespace. There should be at most one
`ResourceQuota` object in a namespace.

See [ResourceQuota design doc](../design/admission_control_resource_quota.md) for more information.

## Object Count Quota

The number of objects of a given type can be restricted. The following types
are supported:

@@ -46,9 +48,9 @@ are supported:
| pods | Total number of pods |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of [resource quotas](admission-controllers.md#resourcequota) |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of [persistent volume claims](../user-guide/persistent-volumes.md#persistentvolumeclaims) |

For example, `pods` quota counts and enforces a maximum on the number of `pods`
created in a single namespace.
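As a sketch, an object-count quota covering the types above could be written like this (every value is illustrative):

```yaml
# object-counts.yaml (hypothetical): caps how many objects of each type may exist
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    pods: "10"
    services: "5"
    replicationcontrollers: "20"
    resourcequotas: "1"
    secrets: "10"
    persistentvolumeclaims: "4"
```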
@@ -122,6 +124,9 @@ Such policies could be implemented using ResourceQuota as a building-block, by
writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace.
## Example
See a [detailed example for how to use resource quota](../user-guide/resourcequota/).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/resource-quota.md?pixel)]()


@@ -153,6 +153,9 @@ It is expected we will want to define limits for particular pods or containers b
To make a **LimitRangeItem** more restrictive, we intend to add these additional restrictions at a future point in time.
## Example
See the [example of Limit Range](../user-guide/limitrange) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]()


@@ -174,6 +174,9 @@ resourcequotas 1 1
services 3 5
```
## More information
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../user-guide/resourcequota) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]()


@@ -28,7 +28,7 @@ This document proposes a model for managing persistent, cluster-scoped storage f
Two new API kinds:

A `PersistentVolume` (PV) is a storage resource provisioned by an administrator. It is analogous to a node. See [Persistent Volume Guide](../user-guide/persistent-volumes/) for how to use it.

A `PersistentVolumeClaim` (PVC) is a user's request for a persistent volume to use in a pod. It is analogous to a pod.
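Sketched side by side, the two kinds look roughly like this (names and sizes illustrative):

```yaml
# Hypothetical pv.yaml: an admin-provisioned volume backed by a host path
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data01
---
# A user's claim against volumes like the one above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```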


@@ -23,8 +23,8 @@ certainly want the docs that go with that version.</h1>
## Abstract

A proposal for the distribution of [secrets](../user-guide/secrets.md) (passwords, keys, etc) to the Kubelet and to
containers inside Kubernetes using a custom [volume](../user-guide/volumes.md#secrets) type. See the [secrets example](../user-guide/secrets/) for more information.

## Motivation


@@ -21,9 +21,9 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

## Simple rolling update

This is a lightweight design document for simple [rolling update](../user-guide/kubectl/kubectl_rolling-update.md) in ```kubectl```.
Complete execution flow can be found [here](#execution-details). See the [example of rolling update](../user-guide/update-demo/) for more information.

### Lightweight rollout

Assume that we have a current replication controller named ```foo``` and it is running image ```image:v1```
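One illustrative way to drive such a rollout (not prescribed by this design; everything below is a sketch built on the ```foo```/```image:v1``` names assumed above) is to define a replacement controller and hand it to ```kubectl rolling-update```:

```yaml
# foo-v2.yaml (hypothetical): the next version of foo
apiVersion: v1
kind: ReplicationController
metadata:
  name: foo-v2
spec:
  replicas: 2
  selector:
    name: foo-v2      # must differ from foo's selector so the two controllers don't overlap
  template:
    metadata:
      labels:
        name: foo-v2
    spec:
      containers:
      - name: foo
        image: image:v2
```

Running ```kubectl rolling-update foo -f foo-v2.yaml``` would then migrate pods from ```foo``` to ```foo-v2``` one at a time.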


@@ -31,7 +31,7 @@ The purpose of filtering the nodes is to filter out the nodes that do not meet c
- `PodFitsResources`: Check if the free resource (CPU and Memory) meets the requirement of the Pod. The free resource is measured by the capacity minus the sum of limits of all Pods on the node.
- `PodFitsPorts`: Check if any HostPort required by the Pod is already occupied on the node.
- `PodFitsHost`: Filter out all nodes except the one specified in the PodSpec's NodeName field.
- `PodSelectorMatches`: Check if the labels of the node match the labels specified in the Pod's `nodeSelector` field ([here](../user-guide/node-selection/) is an example of how to use the `nodeSelector` field).
- `CheckNodeLabelPresence`: Check if all the specified labels exist on a node or not, regardless of the value.

The details of the above predicates can be found in [plugin/pkg/scheduler/algorithm/predicates/predicates.go](../../plugin/pkg/scheduler/algorithm/predicates/predicates.go). All predicates mentioned above can be used in combination to perform a sophisticated filtering policy. Kubernetes uses some, but not all, of these predicates by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](../../plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go).


@@ -162,7 +162,7 @@ NAME LABELS STATUS
```

If the status of the node is ```NotReady``` or ```Unknown``` please check that all of the containers you created are successfully running.
If all else fails, ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers).

### Next steps


@@ -36,7 +36,7 @@ NAME LABELS STATUS
```

If the status of any node is ```Unknown``` or ```NotReady``` your cluster is broken; double-check that all containers are running properly, and if all else fails, contact us on IRC at
[```#google-containers```](http://webchat.freenode.net/?channels=google-containers) for advice.

### Run an application

```sh


@@ -89,7 +89,7 @@ cluster/kube-up.sh
If you want more than one cluster running in your project, want to use a different name, or want a different number of worker nodes, see the `<kubernetes>/cluster/gce/config-default.sh` file for more fine-grained configuration before you start up your cluster.

If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode.

The next few steps will show you:


@@ -770,7 +770,7 @@ pinging or SSH-ing from one node to another.
### Getting Help

If you run into trouble, please see the section on [troubleshooting](gce.md#troubleshooting), post to the
[google-containers group](https://groups.google.com/forum/#!forum/google-containers), or come ask questions on IRC at [#google-containers](http://webchat.freenode.net/?channels=google-containers) on freenode.

<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -64,6 +64,12 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
[**Overview**](overview.md)
: A brief overview of Kubernetes concepts.

[**Cluster**](../admin/README.md)
: A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications.

[**Node**](../admin/node.md)
: A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.

[**Pod**](pods.md)
: A pod is a co-located group of containers and volumes.
@@ -107,6 +113,8 @@ If you don't have much familiarity with Kubernetes, we recommend you read the fo
* [Downward API: accessing system configuration from a pod](downward-api.md)
* [Images and registries](images.md)
* [Migrating from docker-cli to kubectl](docker-cli-to-kubectl.md)
* [Assign pods to selected nodes](node-selection/)
* [Perform a rolling update on a running group of pods](update-demo/)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -104,7 +104,7 @@ Eventually, user specified reasons may be [added to the API](https://github.com/
### Hook Handler Execution

When a management hook occurs, the management system calls into any registered hook handlers in the container for that hook. These hook handler calls are synchronous in the context of the pod containing the container. Note: this means that hook handler execution blocks any further management of the pod. If your hook handler blocks, no other management (including [health checks](production-pods.md#liveness-and-readiness-probes-aka-health-checks)) will occur until the hook handler completes. Blocking hook handlers do *not* affect management of other Pods. Typically we expect that users will make their hook handlers as lightweight as possible, but there are cases where long running commands make sense (e.g. saving state prior to container stop).

For hooks which have parameters, these parameters are passed to the event handler as a set of key/value pairs. The details of this parameter passing are handler implementation dependent (see below).
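For illustration, registering handlers for both hooks looks roughly like this in a pod spec (a sketch; the handler commands and paths are made up):

```yaml
# Hypothetical pod fragment wiring up hook handlers
apiVersion: v1
kind: Pod
metadata:
  name: hooks-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        exec:                      # run a command in the container after start
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:                   # hit an HTTP endpoint before stop
          path: /shutdown
          port: 80
```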


@@ -26,7 +26,7 @@ For each container, the build steps are the same. The examples below
are for the `show` container. Replace `show` with `backend` for the
backend container.

Google Container Registry ([GCR](https://cloud.google.com/tools/container-registry/))
---

    docker build -t gcr.io/<project-name>/show .
    gcloud docker push gcr.io/<project-name>/show


@@ -47,7 +47,7 @@ This example demonstrates how limits can be applied to a Kubernetes namespace to
min/max resource limits per pod. In addition, this example demonstrates how you can
apply default resource limits to pods in the absence of an end-user specified value.

See the [LimitRange design doc](../../design/admission_control_limit_range.md) for more information. For a detailed description of the Kubernetes resource model, see [Resources](../../../docs/user-guide/compute-resources.md).

Step 0: Prerequisites
-----------------------------------------


@@ -21,7 +21,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

## Overview

This example shows two types of pod [health checks](../production-pods.md#liveness-and-readiness-probes-aka-health-checks): HTTP checks and container execution checks.

The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container execution check.

```
@@ -33,9 +33,9 @@ The [exec-liveness.yaml](exec-liveness.yaml) demonstrates the container executio
    initialDelaySeconds: 15
    timeoutSeconds: 1
```

Kubelet executes the command `cat /tmp/health` in the container and reports failure if the command returns a non-zero exit code.

Note that the container removes the `/tmp/health` file after 10 seconds,

```
echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
```
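For comparison, the HTTP-style check is a probe of roughly this shape (a pod-spec fragment; the path and port are illustrative):

```yaml
# Sketch of an HTTP liveness probe: failure is any status outside 200-399
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  timeoutSeconds: 1
```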


@@ -27,7 +27,7 @@ describes a pod that just emits a log message once every 4 seconds. The pod spec
[synthetic_10lps.yaml](synthetic_10lps.yaml)
describes a pod that just emits 10 log lines per second.

See the [logging document](../logging.md) for more details about logging. To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](../../../docs/getting-started-guides/logging.md).

To observe the ingested log lines when using Elasticsearch and Kibana please see the getting


@@ -27,8 +27,8 @@ Kubernetes components, such as kubelet and apiserver, use the [glog](https://god
## Examining the logs of running containers

The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification, [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml), which has a container that writes
some text to standard output every second (you can find different pod specifications [here](logging-demo/)):

```
apiVersion: v1
kind: Pod


@@ -241,7 +241,7 @@ my-nginx-o0ef1 1/1 Running 0 1h
At some point, you'll eventually need to update your deployed application, typically by specifying a new image or image tag, as in the canary deployment scenario above. `kubectl` supports several update operations, each of which is applicable to different scenarios.

To update a service without an outage, `kubectl` supports what is called [“rolling update”](kubectl/kubectl_rolling-update.md), which updates one pod at a time, rather than taking down the entire service at the same time. See the [rolling update design document](../design/simple-rolling-update.md) and the [example of rolling update](update-demo/) for more information.

Let's say you were running version 1.7.9 of nginx:

```yaml


@@ -88,13 +88,13 @@ Use the file [`namespace-dev.json`](namespace-dev.json) which describes a develo
Create the development namespace using kubectl.

```shell
$ kubectl create -f docs/user-guide/namespaces/namespace-dev.json
```

And then let's create the production namespace using kubectl.

```shell
$ kubectl create -f docs/user-guide/namespaces/namespace-prod.json
```
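For reference, such a namespace object has roughly this shape (sketched here in YAML; the example's actual files are JSON, and the name/label are assumed):

```yaml
# Equivalent shape of namespace-dev.json (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development
```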
To be sure things are right, let's list all of the namespaces in our cluster.


@@ -22,7 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

## Node selection example

This example shows how to assign a [pod](../pods.md) to a specific [node](../../admin/node.md) or to one of a set of nodes using node labels and the nodeSelector field in a pod specification. Generally this is unnecessary, as the scheduler will take care of things for you, but you may want to do so in certain circumstances, such as ensuring that your pod ends up on a machine with an SSD attached to it. In outline, the example boils down to the sketch below.
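Label a node (e.g. `kubectl label nodes <node-name> disktype=ssd`) and reference the label from the pod spec (the label key and value here are illustrative):

```yaml
# Sketch of a pod pinned to SSD-labeled nodes via nodeSelector
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd    # only nodes carrying this label are eligible
```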
### Step Zero: Prerequisites


@@ -22,11 +22,13 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

# How To Use Persistent Volumes

The purpose of this guide is to help you become familiar with [Kubernetes Persistent Volumes](../persistent-volumes.md). By the end of the guide, we'll have
nginx serving content from your persistent volume.

This guide assumes knowledge of Kubernetes fundamentals and that you have a cluster up and running.
See [Persistent Storage design document](../../design/persistent-storage.md) for more information.
## Provisioning

A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
@@ -114,7 +116,7 @@ I love Kubernetes storage!
```

Hopefully this simple guide is enough to get you started with PersistentVolumes. If you have any questions, join
[```#google-containers```](https://botbot.me/freenode/google-containers/) on IRC and ask!

Enjoy!


@@ -22,7 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

Resource Quota
========================================

This example demonstrates how [resource quota](../../admin/admission-controllers.md#resourcequota) and [limits](../../admin/admission-controllers.md#limitranger) can be applied to a Kubernetes namespace. See [ResourceQuota design doc](../../design/admission_control_resource_quota.md) for more information.

This example assumes you have a functional Kubernetes setup.


@@ -25,7 +25,7 @@ certainly want the docs that go with that version.</h1>

Objects of type `secret` are intended to hold sensitive information, such as
passwords, OAuth tokens, and ssh keys. Putting this information in a `secret`
is safer and more flexible than putting it verbatim in a `pod` definition or in
a docker image. See [Secrets design document](../design/secrets.md) for more information.

**Table of Contents**

<!-- BEGIN MUNGE: GENERATED_TOC -->

@@ -56,7 +56,7 @@ a docker image.
Creation of secrets can be manual (done by the user) or automatic (done by
automation built into the cluster).

A secret can be used with a pod in two ways: either as files in a [volume](volumes.md) mounted on one or more of
its containers, or used by kubelet when pulling images for the pod.

To use a secret, a pod needs to reference the secret. This reference

@@ -142,6 +142,8 @@ own `volumeMounts` block, but only one `spec.volumes` is needed per secret.
You can package many files into one secret, or use many secrets,
whichever is convenient.
See another example of creating a secret and a pod that consumes that secret in a volume [here](secrets/).
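Roughly, such a pairing looks like this (a sketch; all names are illustrative and the data values are base64-encoded placeholders):

```yaml
# Hypothetical secret holding two entries
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  username: dmFsdWUtMQ==
  password: dmFsdWUtMg==
---
# Pod mounting the secret as files under /etc/foo
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: redis
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
```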
### Manually specifying an imagePullSecret

Use of imagePullSecrets is described in the [images documentation](images.md#specifying-imagepullsecrets-on-a-pod).

### Automatic use of Manually Created Secrets


@@ -22,8 +22,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->

# Secrets example

Following this example, you will create a [secret](../secrets.md) and a [pod](../pods.md) that consumes that secret in a [volume](../volumes.md). See [Secrets design document](../../design/secrets.md) for more information.

## Step Zero: Prerequisites


@@ -52,7 +52,7 @@ certainly want the docs that go with that version.</h1>

Kubernetes [`Pods`](pods.md) are mortal. They are born and they die, and they
are not resurrected. [`ReplicationControllers`](replication-controller.md) in
particular create and destroy `Pods` dynamically (e.g. when scaling up or down
or when doing [rolling updates](kubectl/kubectl_rolling-update.md)). While each `Pod` gets its own IP address, even
those IP addresses cannot be relied upon to be stable over time. This leads to
a problem: if some set of `Pods` (let's call them backends) provides
functionality to other `Pods` (let's call them frontends) inside the Kubernetes


@@ -36,8 +36,8 @@ See the License for the specific language governing permissions and
limitations under the License.
-->

# Rolling update example

This example demonstrates the usage of Kubernetes to perform a [rolling update](../kubectl/kubectl_rolling-update.md) on a running group of [pods](../../../docs/user-guide/pods.md). See [here](../managing-deployments.md#updating-your-application-without-a-service-outage) to understand why you need a rolling update. Also check [rolling update design document](../../design/simple-rolling-update.md) for more information.

### Step Zero: Prerequisites
@@ -64,7 +64,7 @@ I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001

Now visit the [demo website](http://localhost:8001/static). You won't see anything much quite yet.

### Step Two: Run the replication controller

Now we will turn up two replicas of an [image](../images.md). They all serve on internal port 80.

```bash
$ kubectl create -f docs/user-guide/update-demo/nautilus-rc.yaml


@@ -249,8 +249,8 @@ Kubelet to ensure that your application is operating correctly for a definition
Currently, there are three types of application health checks that you can choose from:

* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise. See health check examples [here](../liveness/).
* Container Exec - The Kubelet will execute a command inside your container. If it exits with status 0 it will be considered a success. See health check examples [here](../liveness/).
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can't it is considered a failure.

In all cases, if the Kubelet discovers a failure, the container is restarted.
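Sketched as pod-spec fragments, the three check types above look roughly like this (ports, paths, and commands are illustrative; a container uses one of them):

```yaml
# HTTP health check: success is a status code between 200 and 399
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
---
# Container exec check: success is exit status 0
livenessProbe:
  exec:
    command: ["cat", "/tmp/health"]
---
# TCP socket check: success is an established connection
livenessProbe:
  tcpSocket:
    port: 8080
```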