Merge pull request #11239 from mikedanese/user-docs-move

Move user docs to docs/user-guide/
Eric Tune 2015-07-14 12:40:23 -07:00
commit 0b597aaf66
175 changed files with 314 additions and 311 deletions

View File

@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Kubernetes Documentation: releases.k8s.io/HEAD
* The [User's guide](user-guide.md) is for anyone who wants to run programs and
* The [User's guide](user-guide/user-guide.md) is for anyone who wants to run programs and
services on an existing Kubernetes cluster.
* The [Cluster Admin's guide](admin/README.md) is for anyone setting up

View File

@ -15,7 +15,7 @@ certainly want the docs that go with that version.</h1>
# Kubernetes Cluster Admin Guide
The cluster admin guide is for anyone creating or administering a Kubernetes cluster.
It assumes some familiarity with concepts in the [User Guide](../user-guide.md).
It assumes some familiarity with concepts in the [User Guide](../user-guide/user-guide.md).
## Planning a cluster
@ -63,7 +63,7 @@ project.](salt.md).
* **DNS Integration with SkyDNS** ([dns.md](dns.md)):
Resolving a DNS name directly to a Kubernetes service.
* **Logging** with [Kibana](../logging.md)
* **Logging** with [Kibana](../user-guide/logging.md)
## Multi-tenant support
@ -74,7 +74,7 @@ project.](salt.md).
## Security
* **Kubernetes Container Environment** ([docs/container-environment.md](../container-environment.md)):
* **Kubernetes Container Environment** ([docs/user-guide/container-environment.md](../user-guide/container-environment.md)):
Describes the environment for Kubelet managed containers on a Kubernetes
node.

View File

@ -20,7 +20,7 @@ cluster administrators who want to customize their cluster
or understand the details.
Most questions about accessing the cluster are covered
in [Accessing the cluster](../accessing-the-cluster.md).
in [Accessing the cluster](../user-guide/accessing-the-cluster.md).
## Ports and IPs Served On

View File

@ -86,7 +86,7 @@ We strongly recommend using this plug-in if you intend to make use of Kubernetes
### SecurityContextDeny
This plug-in will deny any pod with a [SecurityContext](../security-context.md) that defines options that were not available on the ```Container```.
This plug-in will deny any pod with a [SecurityContext](../user-guide/security-context.md) that defines options that were not available on the ```Container```.
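For illustration, here is a sketch of a pod manifest that such a plug-in would reject. The names are hypothetical; `seLinuxOptions` is one of the security context fields that was not available on the plain ```Container``` type:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      seLinuxOptions:           # not available on the old Container type,
        level: "s0:c123,c456"   # so SecurityContextDeny rejects the pod
```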
### ResourceQuota

View File

@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Namespaces
Namespaces help different projects, teams, or customers to share a Kubernetes cluster. First, they provide a scope for [Names](../identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.
Namespaces help different projects, teams, or customers to share a Kubernetes cluster. First, they provide a scope for [Names](../user-guide/identifiers.md). Second, as our access control code develops, it is expected that it will be convenient to attach authorization and other policy to namespaces.
Use of multiple namespaces is optional. For small teams, they may not be needed.
@ -23,7 +23,7 @@ This is a placeholder document about namespace administration.
TODO: document namespace creation, ownership assignment, visibility rules,
policy creation, interaction with network.
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md). The user documentation can be found at [Namespaces](../../docs/namespaces.md).
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md). The user documentation can be found at [Namespaces](../../docs/user-guide/namespaces.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -34,10 +34,10 @@ certainly want the docs that go with that version.</h1>
Kubernetes approaches networking somewhat differently than Docker does by
default. There are 4 distinct networking problems to solve:
1. Highly-coupled container-to-container communications: this is solved by
[pods](../pods.md) and `localhost` communications.
[pods](../user-guide/pods.md) and `localhost` communications.
2. Pod-to-Pod communications: this is the primary focus of this document.
3. Pod-to-Service communications: this is covered by [services](../services.md).
4. External-to-Service communications: this is covered by [services](../services.md).
3. Pod-to-Service communications: this is covered by [services](../user-guide/services.md).
4. External-to-Service communications: this is covered by [services](../user-guide/services.md).
## Summary

View File

@ -36,7 +36,7 @@ certainly want the docs that go with that version.</h1>
`Node` is a worker machine in Kubernetes, previously known as `Minion`. Node
may be a VM or physical machine, depending on the cluster. Each node has
the services necessary to run [Pods](../pods.md) and be managed from the master
the services necessary to run [Pods](../user-guide/pods.md) and be managed from the master
systems. The services include Docker, the kubelet, and the network proxy. See the
[Kubernetes Node](../design/architecture.md#the-kubernetes-node) section of the design
doc for more details.
@ -101,7 +101,7 @@ The information is gathered by Kubernetes from the node.
## Node Management
Unlike [Pods](../pods.md) and [Services](../services.md), a Node is not inherently
Unlike [Pods](../user-guide/pods.md) and [Services](../user-guide/services.md), a Node is not inherently
created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
or from your physical or virtual machines. What this means is that when
Kubernetes creates a node, it only creates a representation for the node.
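A minimal sketch of such a representation (the address and label are assumed values); registering it is just a matter of POSTing the object to the API:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: 10.240.79.157                      # assumed node address
  labels:
    kubernetes.io/hostname: 10.240.79.157  # hostname label for scheduling
```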

View File

@ -87,7 +87,7 @@ Kinds are grouped into three categories:
Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. Some objects may be singletons (the current user, the system defaults) and may not have lists.
In addition, all lists that return objects with labels should support label filtering (see [labels.md](labels.md)), and most lists should support filtering by fields.
In addition, all lists that return objects with labels should support label filtering (see [user-guide/labels.md](user-guide/labels.md)), and most lists should support filtering by fields.
Examples: PodLists, ServiceLists, NodeLists
@ -120,17 +120,17 @@ These fields are required for proper decoding of the object. They may be populat
Every object kind MUST have the following metadata in a nested object field called "metadata":
* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [namespaces.md](namespaces.md) for more.
* name: a string that uniquely identifies this object within the current namespace (see [identifiers.md](identifiers.md)). This value is used in the path when retrieving an individual object.
* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [identifiers.md](identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated
* namespace: a namespace is a DNS compatible subdomain that objects are subdivided into. The default namespace is 'default'. See [admin/namespaces.md](admin/namespaces.md) for more.
* name: a string that uniquely identifies this object within the current namespace (see [user-guide/identifiers.md](user-guide/identifiers.md)). This value is used in the path when retrieving an individual object.
* uid: a unique in time and space value (typically an RFC 4122 generated identifier, see [user-guide/identifiers.md](user-guide/identifiers.md)) used to distinguish between objects with the same name that have been deleted and recreated
Every object SHOULD have the following metadata in a nested object field called "metadata":
* resourceVersion: a string that identifies the internal version of this object that can be used by clients to determine when objects have changed. This value MUST be treated as opaque by clients and passed unmodified back to the server. Clients should not assume that the resource version has meaning across namespaces, different kinds of resources, or different servers. (see [concurrency control](#concurrency-control-and-consistency), below, for more details)
* creationTimestamp: a string representing an RFC 3339 date of the date and time an object was created
* deletionTimestamp: a string representing an RFC 3339 date of the date and time after which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource will be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field. Once set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time.
* labels: a map of string keys and values that can be used to organize and categorize objects (see [labels.md](labels.md))
* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [annotations.md](annotations.md))
* labels: a map of string keys and values that can be used to organize and categorize objects (see [user-guide/labels.md](user-guide/labels.md))
* annotations: a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object (see [user-guide/annotations.md](user-guide/annotations.md))
Labels are intended for organizational purposes by end users (select the pods that match this label query). Annotations enable third-party automation and tooling to decorate objects with additional metadata for their own use.
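Putting the fields above together, here is a hedged sketch of a metadata stanza as it might come back from the server (all values are illustrative):

```yaml
metadata:
  namespace: default
  name: nginx-frontend
  uid: 5ef2a4e0-55a1-11e5-8b2d-42010af00002   # RFC 4122 identifier, server-assigned
  resourceVersion: "211"                      # opaque; pass back unmodified
  creationTimestamp: "2015-07-14T19:40:23Z"   # RFC 3339
  labels:
    app: nginx                                # for end-user organization and selection
    tier: frontend
  annotations:
    example.com/build: "1337"                 # hypothetical key used by external tooling
```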
@ -167,7 +167,7 @@ Status information that may be large (especially unbounded in size, such as list
#### References to related objects
References to loosely coupled sets of objects, such as [pods](pods.md) overseen by a [replication controller](replication-controller.md), are usually best referred to using a [label selector](labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status.
References to loosely coupled sets of objects, such as [pods](user-guide/pods.md) overseen by a [replication controller](user-guide/replication-controller.md), are usually best referred to using a [label selector](user-guide/labels.md). In order to ensure that GETs of individual objects remain bounded in time and space, these sets may be queried via separate API queries, but will not be expanded in the referring object's status.
References to specific objects, especially specific resource versions and/or specific fields of those objects, are specified using the `ObjectReference` type. Unlike partial URLs, the ObjectReference type facilitates flexible defaulting of fields from the referring object or other contextual information.
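A sketch of an `ObjectReference` as it might appear embedded in another object (here under the `involvedObject` field used by Events; values are assumed, and fields may be defaulted from the referring object's context):

```yaml
involvedObject:
  kind: Pod
  namespace: default
  name: nginx-frontend
  apiVersion: v1
  resourceVersion: "211"          # pins a specific version of the object
  fieldPath: spec.containers[0]   # points at a specific field, when relevant
```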
@ -234,7 +234,7 @@ Kubernetes by convention exposes additional verbs as new root endpoints with sin
These are verbs which change the fundamental type of data returned (watch returns a stream of JSON instead of a single JSON object). Support of additional verbs is not required for all object types.
Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [accessing-the-cluster.md](accessing-the-cluster.md).
Two additional verbs `redirect` and `proxy` provide access to cluster resources as described in [user-guide/accessing-the-cluster.md](user-guide/accessing-the-cluster.md).
When resources wish to expose alternative actions that are closely coupled to a single resource, they should do so using new sub-resources. An example is allowing automated processes to update the "status" field of a Pod. The `/pods` endpoint only allows updates to "metadata" and "spec", since those reflect end-user intent. An automated process should be able to modify status for users to see by sending an updated Pod kind to the server to the "/pods/&lt;name&gt;/status" endpoint - the alternate endpoint allows different rules to be applied to the update, and access to be appropriately restricted. Likewise, some actions like "stop" or "scale" are best represented as REST sub-resources that are POSTed to. The POST action may require a simple kind to be provided if the action requires parameters, or function without a request body.
@ -324,7 +324,7 @@ labels:
## Idempotency
All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [identifiers.md](identifiers.md) for details.
All compatible Kubernetes APIs MUST support "name idempotency" and respond with an HTTP status code 409 when a request is made to POST an object that has the same name as an existing object in the system. See [user-guide/identifiers.md](user-guide/identifiers.md) for details.
Names generated by the system may be requested using `metadata.generateName`. GenerateName indicates that the name should be made unique by the server prior to persisting it. A non-empty value for the field indicates the name will be made unique (and the name returned to the client will be different than the name passed). The value of this field will be combined with a unique suffix on the server if the Name field has not been provided. The provided value must be valid within the rules for Name, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified, and Name is not present, the server will NOT return a 409 if the generated name exists - instead, it will either return 201 Created or 504 with Reason `ServerTimeout` indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).
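A minimal sketch of name generation (the prefix and container are assumptions): POSTing the following never returns a 409 for the name; the server persists something like `build-x1z9q` and returns it to the client:

```yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: build-   # server appends a unique suffix before persisting
spec:
  containers:
  - name: worker         # hypothetical container
    image: busybox
```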

View File

@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# The Kubernetes API
Primary system and API concepts are documented in the [User guide](user-guide.md).
Primary system and API concepts are documented in the [User guide](user-guide/user-guide.md).
Overall API conventions are described in the [API conventions doc](api-conventions.md).
@ -54,11 +54,11 @@ Changes to services are the most significant difference between v1beta3 and v1.
* The `service.spec.portalIP` property is renamed to `service.spec.clusterIP`.
* The `service.spec.createExternalLoadBalancer` property is removed. Specify `service.spec.type: "LoadBalancer"` to create an external load balancer instead.
* The `service.spec.publicIPs` property is deprecated and now called `service.spec.deprecatedPublicIPs`. This property will be removed entirely when v1beta3 is removed. The vast majority of users of this field were using it to expose services on ports on the node. Those users should specify `service.spec.type: "NodePort"` instead. Read [External Services](services.md#external-services) for more info. If this is not sufficient for your use case, please file an issue or contact @thockin.
* The `service.spec.publicIPs` property is deprecated and now called `service.spec.deprecatedPublicIPs`. This property will be removed entirely when v1beta3 is removed. The vast majority of users of this field were using it to expose services on ports on the node. Those users should specify `service.spec.type: "NodePort"` instead. Read [External Services](user-guide/services.md#external-services) for more info. If this is not sufficient for your use case, please file an issue or contact @thockin.
Some other difference between v1beta3 and v1:
* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](security-context.md).
* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](user-guide/security-context.md).
* The `pod.spec.host` property is renamed to `pod.spec.nodeName`.
* The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`.
* The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively.
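As a sketch, a v1 Service using the renamed and replacement fields described above (the name, IP, selector, and ports are assumed values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical
spec:
  type: NodePort            # replaces createExternalLoadBalancer / publicIPs usage
  clusterIP: 10.0.171.239   # formerly service.spec.portalIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```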
@ -79,7 +79,7 @@ Some important differences between v1beta1/2 and v1beta3:
* The `labels` query parameter has been renamed to `labelSelector`.
* The `fields` query parameter has been renamed to `fieldSelector`.
* The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](compute-resources.md#specifying-resource-quantities) rather than fixed scales (e.g., milli-cores).
* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](user-guide/compute-resources.md#specifying-resource-quantities) rather than fixed scales (e.g., milli-cores).
* Restart policy is represented simply as a string (e.g., `"Always"`) rather than as a nested map (`always{}`).
* Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`.
* The volume `source` is inlined into `volume` rather than nested.
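A hedged sketch of a v1beta3-style container spec illustrating the renamed `command`/`args` fields, nested resources with scaling suffixes, the string restart policy, and the shortened pull policy (image and values assumed):

```yaml
spec:
  restartPolicy: Always             # plain string, not always{}
  containers:
  - name: app
    image: busybox
    command: ["/bin/sh"]            # formerly `entrypoint`
    args: ["-c", "sleep 3600"]      # formerly `command`
    imagePullPolicy: IfNotPresent   # formerly PullIfNotPresent
    resources:
      limits:
        cpu: 100m                   # scaling suffix: 100 milli-cores
        memory: 64Mi
```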

View File

@ -68,7 +68,7 @@ To avoid running into cluster addon resource issues, when creating a cluster wit
* [FluentD with ElasticSearch Plugin](../cluster/saltbase/salt/fluentd-es/fluentd-es.yaml)
* [FluentD with GCP Plugin](../cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml)
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](compute-resources.md#troubleshooting).
For directions on how to detect if addon containers are hitting resource limits, see the [Troubleshooting section of Compute Resources](user-guide/compute-resources.md#troubleshooting).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Cluster Troubleshooting
Most of the time, if you encounter problems, it is your application that is having problems. For application
problems please see the [application troubleshooting guide](application-troubleshooting.md).
problems please see the [application troubleshooting guide](user-guide/application-troubleshooting.md).
## Listing your cluster
The first thing to debug in your cluster is if your nodes are all registered correctly.

View File

@ -165,7 +165,7 @@ In the Simple Profile:
Namespaces vs. userAccounts vs. Labels:
- `userAccount`s are intended for audit logging (both name and UID should be logged), and to define who has access to `namespace`s.
- `labels` (see [docs/labels.md](../../docs/labels.md)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities.
- `labels` (see [docs/user-guide/labels.md](../../docs/user-guide/labels.md)) should be used to distinguish pods, users, and other objects that cooperate towards a common goal but are different in some way, such as version, or responsibilities.
- `namespace`s prevent name collisions between uncoordinated groups of people, and provide a place to attach common policies for co-operating groups of people.

View File

@ -27,11 +27,11 @@ The Kubernetes node has the services necessary to run application containers and
Each node runs Docker, of course. Docker takes care of the details of downloading images and running containers.
### Kubelet
The **Kubelet** manages [pods](../pods.md) and their containers, their images, their volumes, etc.
The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
### Kube-Proxy
Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../user-guide/services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
Service endpoints are currently found via [DNS](../admin/dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
@ -55,7 +55,7 @@ The scheduler binds unscheduled pods to nodes via the `/binding` API. The schedu
All other cluster-level functions are currently performed by the Controller Manager. For instance, `Endpoints` objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller. These could eventually be split into separate components to make them independently pluggable.
The [`replicationcontroller`](../replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
The [`replicationcontroller`](../user-guide/replication-controller.md) is a mechanism that is layered on top of the simple [`pod`](../user-guide/pods.md) API. We eventually plan to port it to a generic plug-in mechanism, once one is implemented.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -140,7 +140,7 @@ to serve the purpose outside of GCE.
## Pod to service
The [service](../services.md) abstraction provides a way to group pods under a
The [service](../user-guide/services.md) abstraction provides a way to group pods under a
common access policy (e.g. load-balanced). The implementation of this creates a
virtual IP which clients can access and which is transparently proxied to the
pods in a Service. Each node runs a kube-proxy process which programs

View File

@ -13,7 +13,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
**Note: this is a design doc, which describes features that have not been completely implemented.
User documentation of the current state is [here](../compute-resources.md). The tracking issue for
User documentation of the current state is [here](../user-guide/compute-resources.md). The tracking issue for
implementation of this model is
[#168](https://github.com/GoogleCloudPlatform/kubernetes/issues/168). Currently, only memory and
cpu limits on containers (not pods) are supported. "memory" is in bytes and "cpu" is in
@ -163,7 +163,7 @@ The following are planned future extensions to the resource model, included here
## Usage data
Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../pods.md) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.
Because resource usage and related metrics change continuously, need to be tracked over time (i.e., historically), can be characterized in a variety of ways, and are fairly voluminous, we will not include usage in core API objects, such as [Pods](../user-guide/pods.md) and Nodes, but will provide separate APIs for accessing and managing that data. See the Appendix for possible representations of usage data, but the representation we'll use is TBD.
Singleton values for observed and predicted future usage will rapidly prove inadequate, so we will support the following structure for extended usage information:

View File

@ -16,7 +16,7 @@ certainly want the docs that go with that version.</h1>
The developer guide is for anyone wanting to either write code which directly accesses the
Kubernetes API, or to contribute directly to the Kubernetes project.
It assumes some familiarity with concepts in the [User Guide](user-guide.md) and the [Cluster Admin
It assumes some familiarity with concepts in the [User Guide](user-guide/user-guide.md) and the [Cluster Admin
Guide](admin/README.md).
@ -24,7 +24,7 @@ Guide](admin/README.md).
* API objects are explained at [http://kubernetes.io/third_party/swagger-ui/](http://kubernetes.io/third_party/swagger-ui/).
* **Annotations** ([annotations.md](annotations.md)): are for attaching arbitrary non-identifying metadata to objects.
* **Annotations** ([user-guide/annotations.md](user-guide/annotations.md)): are for attaching arbitrary non-identifying metadata to objects.
Programs that automate Kubernetes objects may use annotations to store small amounts of their state.
* **API Conventions** ([api-conventions.md](api-conventions.md)):

View File

@ -91,7 +91,7 @@ By default, `kubectl` will use the `kubeconfig` file generated during the cluste
For more information, please read [kubeconfig files](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubeconfig-file.md)
### Examples
See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
See [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
The "Guestbook" application is another popular example to get started with Kubernetes: [guestbook example](../../examples/guestbook/)

View File

@ -63,7 +63,7 @@ The script above will start (by default) a single master VM along with 4 worker
can tweak some of these parameters by editing `cluster/azure/config-default.sh`.
## Getting started with your cluster
See [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
See [a simple nginx example](../user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples/).

View File

@ -177,7 +177,7 @@ centos-minion <none> Ready
**The cluster should be running! Launch a test pod.**
You should have a functional cluster; check out [101](../../../examples/walkthrough/README.md)!
You should have a functional cluster; check out [101](../../../docs/user-guide/walkthrough/README.md)!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -634,7 +634,7 @@ Reboot these servers to get the images PXEd and ready for running containers!
## Creating test pod
Now that CoreOS with Kubernetes is installed and running, let's spin up some Kubernetes pods to demonstrate the system.
See [a simple nginx example](../../../examples/simple-nginx.md) to try out your new cluster.
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples/).

View File

@ -47,7 +47,7 @@ docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
```
This actually runs the kubelet, which in turn runs a [pod](../pods.md) that contains the other master components.
This actually runs the kubelet, which in turn runs a [pod](../user-guide/pods.md) that contains the other master components.
### Step Three: Run the service proxy
*Note, this could be combined with master above, but it requires --privileged for iptables manipulation*

View File

@ -204,7 +204,7 @@ $ kubectl delete -f node.json
**The cluster should be running! Launch a test pod.**
You should have a functional cluster; check out [101](../../../examples/walkthrough/README.md)!
You should have a functional cluster; check out [101](../../../docs/user-guide/walkthrough/README.md)!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -68,7 +68,7 @@ wget -q -O - https://get.k8s.io | bash
Once this command completes, you will have a master VM and four worker VMs, running as a Kubernetes cluster.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](../logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services.
By default, some containers will already be running on your cluster. Containers like `kibana` and `elasticsearch` provide [logging](logging.md), while `heapster` provides [monitoring](../../cluster/addons/cluster-monitoring/README.md) services.
The script run by the commands above creates a cluster with the name/prefix "kubernetes". It defines one specific cluster config, so you can't run it more than once.
@ -123,7 +123,7 @@ Once `kubectl` is in your path, you can use it to look at your cluster. E.g., ru
$ kubectl get --all-namespaces services
```
should show a set of [services](../services.md) that look something like this:
should show a set of [services](../user-guide/services.md) that look something like this:
```shell
NAMESPACE NAME LABELS SELECTOR IP(S) PORT(S)
@ -136,7 +136,7 @@ kube-system monitoring-heapster kubernetes.io/cluster-service=true,kubernete
kube-system monitoring-influxdb kubernetes.io/cluster-service=true,kubernetes.io/name=InfluxDB k8s-app=influxGrafana 10.0.210.156 8083/TCP
8086/TCP
```
Similarly, you can take a look at the set of [pods](../pods.md) that were created during cluster startup.
Similarly, you can take a look at the set of [pods](../user-guide/pods.md) that were created during cluster startup.
You can do this via the
```shell
@ -162,7 +162,7 @@ Some of the pods may take a few seconds to start up (during this time they'll sh
#### Run some examples
Then, see [a simple nginx example](../../examples/simple-nginx.md) to try out your new cluster.
Then, see [a simple nginx example](../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../examples/). The [guestbook example](../../examples/guestbook/) is a good "getting started" walkthrough.

View File

@ -99,8 +99,8 @@ cluster/kubectl.sh get replicationcontrollers
### Running a user defined pod
Note the difference between a [container](../containers.md)
and a [pod](../pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
Note the difference between a [container](../user-guide/containers.md)
and a [pod](../user-guide/pods.md). Since you only asked for the former, Kubernetes will create a wrapper pod for you.
However, you cannot view the nginx start page on localhost. To verify that nginx is running you need to run `curl` within the docker container (try `docker exec`).
You can control the specifications of a pod via a user defined manifest, and reach nginx through your browser on the port specified therein:
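The manifest itself is elided by this diff; the following is a minimal sketch of the kind of pod definition the sentence refers to (the host port value is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080   # assumed; browse to http://localhost:8080
```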

View File

@ -98,7 +98,7 @@ Note: CoreOS is not supported as the master using the automated launch
scripts. The master node is always Ubuntu.
### Getting started with your cluster
See [a simple nginx example](../../../examples/simple-nginx.md) to try out your new cluster.
See [a simple nginx example](../../../docs/user-guide/simple-nginx.md) to try out your new cluster.
For more complete applications, please look in the [examples directory](../../../examples/).

View File

@ -56,7 +56,7 @@ steps that existing cluster setup scripts are making.
### Learning
1. You should be familiar with using Kubernetes already. We suggest you set
up a temporary cluster by following one of the other Getting Started Guides.
This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../pods.md), [services](../services.md), etc.) first.
This will help you become familiar with the CLI ([kubectl](../user-guide/kubectl/kubectl.md)) and concepts ([pods](../user-guide/pods.md), [services](../user-guide/services.md), etc.) first.
1. You should have `kubectl` installed on your desktop. This will happen as a side
effect of completing one of the other Getting Started Guides.
@ -124,7 +124,7 @@ You need to select an address range for the Pod IPs.
using `10.10.0.0/24` through `10.10.255.0/24`, respectively.
- Need to make these routable or connect with overlay.
Kubernetes also allocates an IP to each [service](../services.md). However,
Kubernetes also allocates an IP to each [service](../user-guide/services.md). However,
service IPs do not necessarily need to be routable. The kube-proxy takes care
of translating Service IPs to Pod IPs before traffic leaves the node. You do
need to allocate a block of IPs for services. Call this
@ -255,7 +255,7 @@ to read. This guide uses `/var/lib/kube-apiserver/known_tokens.csv`.
The format for this file is described in the [authentication documentation](../admin/authentication.md).
For distributing credentials to clients, the convention in Kubernetes is to put the credentials
into a [kubeconfig file](../kubeconfig-file.md).
into a [kubeconfig file](../user-guide/kubeconfig-file.md).
The kubeconfig file for the administrator can be created as follows:
- If you have already used Kubernetes with a non-custom cluster (for example, used a Getting Started

View File

@ -25,14 +25,14 @@ to add sophisticated authorization, and to make it pluggable. See the [access c
non-identifying metadata associated with an object, such as provenance information. Not indexed.
**Image**
: A [Docker Image](https://docs.docker.com/userguide/dockerimages/). See [images](images.md).
: A [Docker Image](https://docs.docker.com/userguide/dockerimages/). See [images](user-guide/images.md).
**Label**
: A key/value pair conveying user-defined identifying attributes of an object, and used to form sets of related objects, such as
pods which are replicas in a load-balanced service. Not intended to hold large or non-human-readable data. See [labels](labels.md).
pods which are replicas in a load-balanced service. Not intended to hold large or non-human-readable data. See [labels](user-guide/labels.md).
**Name**
: A user-provided name for an object. See [identifiers](identifiers.md).
: A user-provided name for an object. See [identifiers](user-guide/identifiers.md).
**Namespace**
: A namespace is like a prefix to the name of an object. You can configure your client to use a particular namespace,
@ -40,33 +40,33 @@ so you do not have to type it all the time. Namespaces allow multiple projects t
**Pod**
: A collection of containers which will be scheduled onto the same node, which share an IP and port space, and which
can be created/destroyed together. See [pods](pods.md).
can be created/destroyed together. See [pods](user-guide/pods.md).
**Replication Controller**
: A _replication controller_ ensures that a specified number of pod "replicas" are running at any one time. It both allows
for easy scaling of replicated systems and handles restarting of a Pod when the machine it is on reboots or otherwise fails.
**Resource**
: CPU, memory, and other things that a pod can request. See [compute resources](compute-resources.md).
: CPU, memory, and other things that a pod can request. See [compute resources](user-guide/compute-resources.md).
**Secret**
: An object containing sensitive information, such as authentication tokens, which can be made available to containers upon request. See [secrets](secrets.md).
: An object containing sensitive information, such as authentication tokens, which can be made available to containers upon request. See [secrets](user-guide/secrets.md).
**Selector**
: An expression that matches Labels. Can identify related objects, such as pods which are replicas in a load-balanced
service. See [labels](labels.md).
service. See [labels](user-guide/labels.md).
**Service**
: A load-balanced set of `pods` which can be accessed via a single stable IP address. See [services](services.md).
: A load-balanced set of `pods` which can be accessed via a single stable IP address. See [services](user-guide/services.md).
**UID**
: An identifier on all Kubernetes objects that is set by the Kubernetes API server. Can be used to distinguish between historical
occurrences of same-Name objects. See [identifiers](identifiers.md).
occurrences of same-Name objects. See [identifiers](user-guide/identifiers.md).
**Volume**
: A directory, possibly with some data in it, which is accessible to a Container as part of its filesystem. Kubernetes
Volumes build upon [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/), adding provisioning of the Volume
directory and/or device. See [volumes](volumes.md).
directory and/or device. See [volumes](user-guide/volumes.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -35,7 +35,7 @@ done automatically based on statistical analysis and thresholds.
* This proposal is for horizontal scaling only. Vertical scaling will be handled in [issue 2072](https://github.com/GoogleCloudPlatform/kubernetes/issues/2072)
* `ReplicationControllers` will not know about the auto-scaler; they are the target of the auto-scaler. The `ReplicationController`'s responsibilities are
constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](../replication-controller.md#responsibilities-of-the-replication-controller)
constrained to only ensuring that the desired number of pods are operational per the [Replication Controller Design](../user-guide/replication-controller.md#responsibilities-of-the-replication-controller)
* Auto-scalers will be loosely coupled with data gathering components in order to allow a wide variety of input sources
* Auto-scalable resources will support a scale verb ([1629](https://github.com/GoogleCloudPlatform/kubernetes/issues/1629))
such that the auto-scaler does not directly manipulate the underlying resource.
@ -56,7 +56,7 @@ applications will expose one or more network endpoints for clients to connect to
balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to
server traffic for applications. This is the primary, but not sole, source of data for making decisions.
Within Kubernetes a [kube proxy](../services.md#ips-and-vips)
Within Kubernetes a [kube proxy](../user-guide/services.md#ips-and-vips)
running on each node directs service requests to the underlying implementation.
While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage
@ -239,7 +239,7 @@ or down as appropriate. In the future this may be more configurable.
### Interactions with a deployment
In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](../replication-controller.md#rolling-updates)
In a deployment it is likely that multiple replication controllers must be monitored. For instance, in a [rolling deployment](../user-guide/replication-controller.md#rolling-updates)
there will be multiple replication controllers, with one scaling up and another scaling down. This means that an
auto-scaler must be aware of the entire set of capacity that backs a service so it does not fight with the deployer. `AutoScalerSpec.MonitorSelector`
is what provides this ability. By using a selector that spans the entire service the auto-scaler can monitor capacity

View File

@ -37,10 +37,10 @@ When you create a pod, you do not need to specify a service account. It is
automatically assigned the `default` service account of the same namespace. If
you get the raw json or yaml for a pod you have created (e.g. `kubectl get
pods/podname -o yaml`), you can see the `spec.serviceAccount` field has been
[automatically set](working-with-resources.md#resources-are-automatically-modified).
[automatically set](user-guide/working-with-resources.md#resources-are-automatically-modified).
You can access the API using a proxy or with a client library, as described in
[Accessing the Cluster](accessing-the-cluster.md#accessing-the-api-from-a-pod).
[Accessing the Cluster](user-guide/accessing-the-cluster.md#accessing-the-api-from-a-pod).
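As a sketch, the relevant fragment of `kubectl get pods/podname -o yaml` output after the server has defaulted the field (names are assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podname           # hypothetical
  namespace: default
spec:
  serviceAccount: default # automatically set server-side
  containers:
  - name: app
    image: nginx
```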
## Using Multiple Service Accounts

View File

@ -14,7 +14,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Troubleshooting
Sometimes things go wrong. This guide is aimed at making them right. It has two sections:
* [Troubleshooting your application](application-troubleshooting.md) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your application](user-guide/application-troubleshooting.md) - Useful for users who are deploying code into Kubernetes and wondering why it is not working.
* [Troubleshooting your cluster](cluster-troubleshooting.md) - Useful for cluster administrators and people whose Kubernetes cluster is unhappy.

View File

@ -42,7 +42,7 @@ kubernetes CLI, `kubectl`.
To access a cluster, you need to know the location of the cluster and have credentials
to access it. Typically, this is automatically set up when you work
through a [Getting started guide](getting-started-guides/README.md),
through a [Getting started guide](../getting-started-guides/README.md),
or someone else set up the cluster and provided you with credentials and a location.
Check the location and credentials that kubectl knows about with this command:
@ -50,8 +50,8 @@ Check the location and credentials that kubectl knows about with this command:
kubectl config view
```
Many of the [examples](../examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](user-guide/kubectl/kubectl.md).
Many of the [examples](../../examples/) provide an introduction to using
kubectl and complete documentation is found in the [kubectl manual](kubectl/kubectl.md).
### Directly accessing the REST API
Kubectl handles locating and authenticating to the apiserver.
@ -76,7 +76,7 @@ Run it like this:
```
kubectl proxy --port=8080 &
```
See [kubectl proxy](user-guide/kubectl/kubectl_proxy.md) for more details.
See [kubectl proxy](kubectl/kubectl_proxy.md) for more details.
Then you can explore the API with curl, wget, or a browser, like so:
```
@ -110,13 +110,13 @@ certificate.
On some clusters, the apiserver does not require authentication; it may serve
on localhost, or be protected by a firewall. There is no standard
for this. [Configuring Access to the API](admin/accessing-the-api.md)
for this. [Configuring Access to the API](../admin/accessing-the-api.md)
describes how a cluster admin can configure this. Such approaches may conflict
with future high-availability support.
### Programmatic access to the API
There are [client libraries](client-libraries.md) for accessing the API
There are [client libraries](../client-libraries.md) for accessing the API
from several languages. The Kubernetes project-supported
[Go](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/pkg/client)
client library can use the same [kubeconfig file](kubeconfig-file.md)
@ -134,7 +134,7 @@ the `kubernetes` DNS name, which resolves to a Service IP which in turn
will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a
[service account](service-accounts.md) credential. By default, a pod
[service account](../service-accounts.md) credential. By default, a pod
is associated with a service account, and a credential (token) for that
service account is placed into the filesystem tree of each container in that pod,
at `/var/run/secrets/kubernetes.io/serviceaccount/token`.
@ -144,7 +144,7 @@ From within a pod the recommended ways to connect to API are:
process within a container. This proxies the
Kubernetes API to the localhost interface of the pod, so that other processes
in any container of the pod can access it. See this [example of using kubectl proxy
in a pod](../examples/kubectl-container/).
in a pod](../../examples/kubectl-container/).
- use the Go client library, and create a client using the `client.NewInCluster()` factory.
This handles locating and authenticating to the apiserver.
In each case, the credentials of the pod are used to communicate securely with the apiserver.
@ -153,7 +153,7 @@ In each case, the credentials of the pod are used to communicate securely with t
## Accessing services running on the cluster
The previous section was about connecting to the Kubernetes API server. This section is about
connecting to other services running on a Kubernetes cluster. In Kubernetes, the
[nodes](admin/node.md), [pods](pods.md) and [services](services.md) all have
[nodes](../admin/node.md), [pods](pods.md) and [services](services.md) all have
their own IPs. In many cases, the node IPs, pod IPs, and some service IPs on a cluster will not be
routable, so they will not be reachable from a machine outside the cluster,
such as your desktop machine.
@ -163,7 +163,7 @@ You have several options for connecting to nodes, pods and services from outside
- Access services through public IPs.
- Use a service with type `NodePort` or `LoadBalancer` to make the service reachable outside
the cluster. See the [services](services.md) and
[kubectl expose](user-guide/kubectl/kubectl_expose.md) documentation.
[kubectl expose](kubectl/kubectl_expose.md) documentation.
- Depending on your cluster environment, this may just expose the service to your corporate network,
or it may expose it to the internet. Think about whether the service being exposed is secure.
Does it do its own authentication?
@ -179,7 +179,7 @@ You have several options for connecting to nodes, pods and services from outside
- Only works for HTTP/HTTPS.
- Described [here](#discovering-builtin-services).
- Access from a node or pod in the cluster.
- Run a pod, and then connect to a shell in it using [kubectl exec](user-guide/kubectl/kubectl_exec.md).
- Run a pod, and then connect to a shell in it using [kubectl exec](kubectl/kubectl_exec.md).
Connect to other nodes, pods, and services from that shell.
- Some clusters may allow you to ssh to a node in the cluster. From there you may be able to
access cluster services. This is a non-standard method, and will work on some clusters but
@ -279,5 +279,5 @@ will typically ensure that the latter types are setup correctly.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/accessing-the-cluster.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/accessing-the-cluster.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -40,5 +40,5 @@ Yes, this information could be stored in an external database or directory, but
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/annotations.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/annotations.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -16,7 +16,7 @@ certainly want the docs that go with that version.</h1>
This guide is to help users debug applications that are deployed into Kubernetes and not behaving correctly.
This is *not* a guide for people who want to debug their cluster. For that you should check out
[this guide](cluster-troubleshooting.md)
[this guide](../cluster-troubleshooting.md)
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
@ -167,5 +167,5 @@ check:
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/application-troubleshooting.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/application-troubleshooting.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -121,7 +121,7 @@ To determine if a container cannot be scheduled or is being killed due to resour
The resource usage of a pod is reported as part of the Pod status.
If [optional monitoring](../cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
If [optional monitoring](../../cluster/addons/cluster-monitoring/README.md) is configured for your cluster,
then pod resource usage can be retrieved from the monitoring system.
## Troubleshooting
@ -147,7 +147,7 @@ Here are some example command lines that extract just the necessary information:
- `kubectl get nodes -o yaml | grep '\sname\|cpu\|memory'`
- `kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, cap: .status.capacity}'`
The [resource quota](admin/resource-quota.md) feature can be configured
The [resource quota](../admin/resource-quota.md) feature can be configured
to limit the total amount of resources that can be consumed. If used in conjunction
with namespaces, it can prevent one team from hogging all the resources.
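A sketch of a quota object of the kind described (the name, namespace, and limits are assumed values):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota    # hypothetical
  namespace: team-a   # hypothetical namespace
spec:
  hard:
    cpu: "20"         # total CPU the namespace may consume
    memory: 100Gi
    pods: "10"
```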
@ -209,7 +209,7 @@ such as [EmptyDir volumes](volumes.md#emptydir).
The current system only supports container limits for CPU and Memory.
It is planned to add new resource types, including a node disk space
resource, and a framework for adding custom [resource types](design/resources.md#resource-types).
resource, and a framework for adding custom [resource types](../design/resources.md#resource-types).
The current system does not facilitate overcommitment of resources because resources reserved
with container limits are assured. It is planned to support multiple levels of [Quality of
@ -223,5 +223,5 @@ across providers and platforms.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/compute-resources.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/compute-resources.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -35,7 +35,7 @@ In the declarative style, all configuration is stored in YAML or JSON configurat
## Launching a container using a configuration file
Kubernetes executes containers in [*Pods*](../../docs/pods.md). A pod containing a simple Hello World container can be specified in YAML as follows:
Kubernetes executes containers in [*Pods*](pods.md). A pod containing a simple Hello World container can be specified in YAML as follows:
```yaml
apiVersion: v1
@ -53,7 +53,7 @@ The value of `metadata.name`, `hello-world`, will be the name of the pod resourc
`restartPolicy: Never` indicates that we just want to run the container once and then terminate the pod.
The [`command`](../../docs/containers.md#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
The [`command`](containers.md#containers-and-commands) overrides the Docker container's `Entrypoint`. Command arguments (corresponding to Docker's `Cmd`) may be specified using `args`, as follows:
```yaml
command: ["/bin/echo"]

View File

@ -78,7 +78,7 @@ You can read more about [how we achieve this](../admin/networking.md#how-to-achi
So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the replication controller will create new ones, with different IPs. This is the problem a Service solves.
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service ([why not use round robin dns?](../services.md#why-not-use-round-robin-dns)).
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service ([why not use round robin dns?](services.md#why-not-use-round-robin-dns)).
You can create a Service for your 2 nginx replicas with the following yaml:
```yaml
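# The guide's actual manifest is elided by this diff; what follows is a hedged
# reconstruction consistent with the `nginxsvc` endpoints shown below
# (the port and selector are assumed values).
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx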
@ -106,7 +106,7 @@ $ kubectl get ep
NAME ENDPOINTS
nginxsvc 10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.208.159:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](../services.md#virtual-ips-and-service-proxies).
You should now be able to curl the nginx Service on `10.0.208.159:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
## Accessing the Service from other pods in the cluster

View File

@ -12,8 +12,8 @@ certainly want the docs that go with that version.</h1>
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl port-forward
kubectl port-forward forwards connections from a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](../../docs/accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
# Connecting to applications: kubectl port-forward
kubectl port-forward forwards connections from a local port to a port on a pod. Its man page is available [here](kubectl/kubectl_port-forward.md). Compared to [kubectl proxy](accessing-the-cluster.md#using-kubectl-proxy), `kubectl port-forward` is more generic as it can forward TCP traffic while `kubectl proxy` can only forward HTTP traffic. This guide demonstrates how to use `kubectl port-forward` to connect to a Redis database, which may be useful for database debugging.
## Creating a Redis master

View File

@ -12,8 +12,8 @@ certainly want the docs that go with that version.</h1>
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Connecting to applications: kubectl proxy and apiserver proxy
You have seen the [basics](../../docs/accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](../../docs/ui.md)) running on the Kubernetes cluster from your workstation.
# Connecting to applications: kubectl proxy and apiserver proxy
You have seen the [basics](accessing-the-cluster.md) about `kubectl proxy` and `apiserver proxy`. This guide shows how to use them together to access a service ([kube-ui](ui.md)) running on the Kubernetes cluster from your workstation.
## Getting the apiserver proxy URL of kube-ui
@ -22,7 +22,7 @@ kube-ui is deployed as a cluster add-on. To find its apiserver proxy URL,
```
$ kubectl cluster-info | grep "KubeUI"
KubeUI is running at https://173.255.119.104/api/v1/proxy/namespaces/kube-system/services/kube-ui
```
If this command does not find the URL, try the steps [here](../../docs/ui.md#accessing-the-ui).
If this command does not find the URL, try the steps [here](ui.md#accessing-the-ui).
## Connecting to the kube-ui service from your local workstation
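The rest of this section is elided in the diff; a minimal sketch of the idea, assuming the proxy URL found above:

```bash
# Run a local proxy to the apiserver, then reach kube-ui through it
$ kubectl proxy --port=8001 &
$ curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kube-ui/
```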

View File

@ -119,5 +119,5 @@ Hook handlers are the way that hooks are surfaced to containers.  Containers ca
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/container-environment.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/container-environment.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -104,5 +104,5 @@ The relationship between Docker's capabilities and [Linux capabilities](http://m
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/containers.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/containers.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -504,5 +504,5 @@ Contact us on
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/debugging-services.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/debugging-services.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -18,7 +18,7 @@ You previously read about how to quickly deploy a simple replicated application
## Launching a set of replicas using a configuration file
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](../../docs/pods.md)) using [*Replication Controllers*](../../docs/replication-controller.md).
Kubernetes creates and manages sets of replicated containers (actually, replicated [Pods](pods.md)) using [*Replication Controllers*](replication-controller.md).
A replication controller simply ensures that a specified number of pod "replicas" are running at any one time. If there are too many, it will kill some. If there are too few, it will start more. It's analogous to Google Compute Engine's [Instance Group Manager](https://cloud.google.com/compute/docs/instance-groups/manager/) or AWS's [Auto-scaling Group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html) (with no scaling policies).
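The configuration file itself is elided in this hunk; a minimal sketch of such a file, reusing the `my-nginx` name and `app: nginx` label from this guide (the image tag is an assumption):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2                 # desired number of pod replicas
  template:                   # pod template stamped out for each replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```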
@ -82,7 +82,7 @@ If you try to delete the pods before deleting the replication controller, it wil
## Labels
Kubernetes uses user-defined key-value attributes called [*labels*](../../docs/labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
Kubernetes uses user-defined key-value attributes called [*labels*](labels.md) to categorize and identify sets of resources, such as pods and replication controllers. The example above specified a single label in the pod template, with key `app` and value `nginx`. All pods created carry that label, which can be viewed using `-L`:
```bash
$ kubectl get pods -L app
NAME        READY   STATUS    RESTARTS   AGE   APP
```
@ -97,7 +97,7 @@ CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS APP
```
my-nginx    nginx   nginx     app=nginx  2     nginx
```
More importantly, the pod template's labels are used to create a [`selector`](../../docs/labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get.md):
More importantly, the pod template's labels are used to create a [`selector`](labels.md#label-selectors) that will match pods carrying those labels. You can see this field by requesting it using the [Go template output format of `kubectl get`](kubectl/kubectl_get.md):
```bash
$ kubectl get rc my-nginx -o template --template="{{.spec.selector}}"
map[app:nginx]
```

View File

@ -84,10 +84,10 @@ spec:
```
Some more thorough examples:
* [environment variables](../examples/environment-guide/)
* [downward API](../examples/downward-api/)
* [environment variables](environment-guide/)
* [downward API](downward-api/)
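For a flavor of the mechanism, a minimal sketch of exposing pod fields as environment variables via the downward API (the variable names are assumptions):

```yaml
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name       # expose the pod's own name
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace  # expose the pod's namespace
```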
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/downward-api.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/downward-api.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -21,7 +21,7 @@ namespace using the [downward API](https://github.com/GoogleCloudPlatform/kubern
This example assumes you have a Kubernetes cluster installed and running, and that you have
installed the ```kubectl``` command line tool somewhere in your path. Please see the [getting
started](../../docs/getting-started-guides/) for installation instructions for your platform.
started](../../../docs/getting-started-guides/) for installation instructions for your platform.
## Step One: Create the pod
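The pod definition itself is elided in this hunk; creating it would look something like this sketch (the manifest path is an assumption):

```bash
$ kubectl create -f docs/user-guide/downward-api/dapi-pod.yaml
```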
@ -48,5 +48,5 @@ $ kubectl logs dapi-test-pod | grep POD_
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/downward-api/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/downward-api/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -21,7 +21,7 @@ environment information about itself, and a backend pod that it has
accessed through the service. The goal is to illuminate the
environment metadata available to running containers inside the
Kubernetes cluster. The documentation for the kubernetes environment
is [here](../../docs/container-environment.md).
is [here](../../../docs/user-guide/container-environment.md).
![Diagram](diagram.png)
@ -30,7 +30,7 @@ Prerequisites
This example assumes that you have a Kubernetes cluster installed and
running, and that you have installed the `kubectl` command line tool
somewhere in your path. Please see the [getting
started](../../docs/getting-started-guides/) for installation instructions
started](../../../docs/getting-started-guides/) for installation instructions
for your platform.
Optional: Build your own containers
@ -81,8 +81,8 @@ Backend Namespace: default
```
First the frontend pod's information is printed. The pod name and
[namespace](../../docs/design/namespaces.md) are retrieved from the
[Downward API](../../docs/downward-api.md). Next, `USER_VAR` is the name of
[namespace](../../../docs/design/namespaces.md) are retrieved from the
[Downward API](../../../docs/user-guide/downward-api.md). Next, `USER_VAR` is the name of
an environment variable set in the [pod
definition](show-rc.yaml). Then, the dynamic kubernetes environment
variables are scanned and printed. These are used to find the backend
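A sketch of what those dynamic variables might look like for a backend service assumed to be named `backend-srv` (the addresses are made up):

```bash
BACKEND_SRV_SERVICE_HOST=10.147.252.185
BACKEND_SRV_SERVICE_PORT=5000
```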
@ -104,5 +104,5 @@ Cleanup
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/environment-guide/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/environment-guide/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -35,5 +35,5 @@ specified `image:` with the one that you built.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/environment-guide/containers/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/environment-guide/containers/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -16,7 +16,7 @@ certainly want the docs that go with that version.</h1>
Developers can use `kubectl exec` to run commands in a container. This guide demonstrates two use cases.
## Using kubectl exec to check the environment variables of a container
Kubernetes exposes [services](../../docs/services.md#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
Kubernetes exposes [services](services.md#environment-variables) through environment variables. It is convenient to check these environment variables using `kubectl exec`.
We first create a pod and a service,
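The pod and service definitions are elided here; once they exist, the check itself might look like this sketch (the pod name is an assumption):

```bash
$ kubectl exec mypod -- env | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```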

View File

@ -18,12 +18,12 @@ All objects in the Kubernetes REST API are unambiguously identified by a Name an
For non-unique user-provided attributes, Kubernetes provides [labels](labels.md) and [annotations](annotations.md).
## Names
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower-case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](design/identifiers.md) for the precise syntax rules for names.
Names are generally client-provided. Only one object of a given kind can have a given name at a time (i.e., they are spatially unique). But if you delete an object, you can make a new object with the same name. Names are used to refer to an object in a resource URL, such as `/api/v1/pods/some-name`. By convention, the names of Kubernetes resources should be up to a maximum length of 253 characters and consist of lower-case alphanumeric characters, `-`, and `.`, but certain resources have more specific restrictions. See the [identifiers design doc](../design/identifiers.md) for the precise syntax rules for names.
## UIDs
UIDs are generated by Kubernetes. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID (i.e., they are spatially and temporally unique).
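As a sketch of how to inspect these identifiers, reusing the template output format shown elsewhere in this guide (the pod name is an assumption):

```bash
$ kubectl get pod some-name -o template --template="{{.metadata.uid}}"
```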
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/identifiers.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/identifiers.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -215,7 +215,7 @@ spec:
```
This needs to be done for each pod that is using a private registry.
However, setting of this field can be automated by setting the imagePullSecrets
in a [serviceAccount](service-accounts.md) resource.
in a [serviceAccount](../service-accounts.md) resource.
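For reference, a sketch of where this field sits in a pod spec (the image and secret names are assumptions):

```yaml
spec:
  containers:
    - name: app
      image: registry.example.com/app:v1   # image in a private registry
  imagePullSecrets:
    - name: myregistrykey                  # secret holding registry credentials
```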
Currently, all pods will potentially have read access to any images which were
pulled using imagePullSecrets. That is, imagePullSecrets does *NOT* protect your
@ -251,5 +251,5 @@ common use cases and suggested solutions.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/images.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/images.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -97,7 +97,7 @@ The rules for loading and merging the kubeconfig files are straightforward, but
## Manipulation of kubeconfig via `kubectl config <subcommand>`
In order to more easily manipulate kubeconfig files, there are a series of subcommands to `kubectl config` to help.
See [user-guide/kubectl/kubectl_config.md](user-guide/kubectl/kubectl_config.md) for help.
See [kubectl/kubectl_config.md](kubectl/kubectl_config.md) for help.
### Example
```
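# The example block itself is elided in this diff; below is a sketch consistent
# with the context shown in the hunk that follows (cluster, user, and server
# values are assumptions, not the original example):
$ kubectl config set-cluster federal-cluster --server=https://1.2.3.4
$ kubectl config set-context federal-context --cluster=federal-cluster --user=federal-user
$ kubectl config use-context federal-context
```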
@ -164,5 +164,5 @@ $kubectl config use-context federal-context
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubeconfig-file.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/kubeconfig-file.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -121,5 +121,5 @@ Concerning API: we may extend such filtering to DELETE operations in the future.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/labels.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/labels.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -43,7 +43,7 @@ For a detailed description of the Kubernetes resource model, see [Resources](htt
Step 0: Prerequisites
-----------------------------------------
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides/) for how to get started.
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../../docs/getting-started-guides/) for how to get started.
Change to the `<kubernetes>/examples/limitrange` directory if you're not already there.
@ -178,5 +178,5 @@ amount of resource a pod consumes on a node.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/limitrange/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/limitrange/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -84,5 +84,5 @@ Sat, 27 Jun 2015 13:44:44 +0200 Sat, 27 Jun 2015 13:44:44 +0200 1 {kube
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/liveness/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/liveness/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -13,7 +13,7 @@ certainly want the docs that go with that version.</h1>
<!-- END MUNGE: UNVERSIONED_WARNING -->
# Elasticsearch/Kibana Logging Demonstration
This directory contains two [pod](../../docs/pods.md) specifications which can be used as synthetic
This directory contains two [pod](../../../docs/user-guide/pods.md) specifications which can be used as synthetic
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
describes a pod that just emits a log message once every 4 seconds. The pod specification in
[synthetic_10lps.yaml](synthetic_10lps.yaml)
@ -21,12 +21,12 @@ describes a pod that just emits 10 log lines per second.
To observe the ingested log lines when using Google Cloud Logging please see the getting
started instructions
at [Cluster Level Logging to Google Cloud Logging](../../docs/getting-started-guides/logging.md).
at [Cluster Level Logging to Google Cloud Logging](../../../docs/getting-started-guides/logging.md).
To observe the ingested log lines when using Elasticsearch and Kibana please see the getting
started instructions
at [Cluster Level Logging with Elasticsearch and Kibana](../../docs/getting-started-guides/logging-elasticsearch.md).
at [Cluster Level Logging with Elasticsearch and Kibana](../../../docs/getting-started-guides/logging-elasticsearch.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/logging-demo/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/logging-demo/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -15,12 +15,12 @@ certainly want the docs that go with that version.</h1>
# Logging
## Logging by Kubernetes Components
Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [devel/logging.md](devel/logging.md).
Kubernetes components, such as kubelet and apiserver, use the [glog](https://godoc.org/github.com/golang/glog) logging library. Developer conventions for logging severity are described in [docs/devel/logging.md](../devel/logging.md).
## Examining the logs of running containers
The logs of a running container may be fetched using the command `kubectl logs`. For example, given
this pod specification, which has a container that writes out some text to standard
output every second [counter-pod.yaml](../examples/blog-logging/counter-pod.yaml):
output every second [counter-pod.yaml](../../examples/blog-logging/counter-pod.yaml):
```
apiVersion: v1
kind: Pod
...
```
@ -70,19 +70,19 @@ $ kubectl logs kube-dns-v3-7r1l9 etcd
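The surrounding example output is elided in the diff; a sketch of what fetching such logs might look like (the pod name and timestamps are assumptions):

```bash
$ kubectl logs counter
0: Tue Jun  2 21:37:31 UTC 2015
1: Tue Jun  2 21:37:32 UTC 2015
```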
## Cluster level logging to Google Cloud Logging
The getting started guide [Cluster Level Logging to Google Cloud Logging](getting-started-guides/logging.md)
The getting started guide [Cluster Level Logging to Google Cloud Logging](../getting-started-guides/logging.md)
explains how container logs are ingested into [Google Cloud Logging](https://cloud.google.com/logging/docs/)
and shows how to query the ingested logs.
## Cluster level logging with Elasticsearch and Kibana
The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](getting-started-guides/logging-elasticsearch.md)
The getting started guide [Cluster Level Logging with Elasticsearch and Kibana](../getting-started-guides/logging-elasticsearch.md)
describes how to ingest cluster level logs into Elasticsearch and view them using Kibana.
## Ingesting Application Log Files
Cluster level logging only collects the standard output and standard error output of the applications
running in containers. The guide [Collecting log files within containers with Fluentd](../contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud Logging.
running in containers. The guide [Collecting log files within containers with Fluentd](../../contrib/logging/fluentd-sidecar-gcp/README.md) explains how the log files of applications can also be ingested into Google Cloud Logging.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/logging.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/logging.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -351,11 +351,11 @@ Update succeeded. Deleting my-nginx
```
my-nginx-v4
```
You can also run the [update demo](../../examples/update-demo/) to see a visual representation of the rolling update process.
You can also run the [update demo](update-demo/) to see a visual representation of the rolling update process.
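For reference, a sketch of the command shape that produces output like the above (the replacement controller's file name is an assumption):

```bash
# Replace the my-nginx controller with the one defined in the given file
$ kubectl rolling-update my-nginx -f my-nginx-v4.yaml
```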
## In-place updates of resources
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to add an [annotation](../../docs/annotations.md) with a description of your object. That's easiest to do with `kubectl patch`:
Sometimes it's necessary to make narrow, non-disruptive updates to resources you've created. For instance, you might want to add an [annotation](annotations.md) with a description of your object. That's easiest to do with `kubectl patch`:
```bash
$ kubectl patch rc my-nginx-v4 -p '{"metadata": {"annotations": {"description": "my frontend running nginx"}}}'
my-nginx-v4
```

View File

@ -18,7 +18,7 @@ Understanding how an application behaves when deployed is crucial to scaling the
### Overview
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](../DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:
Heapster is a cluster-wide aggregator of monitoring and event data. It currently supports Kubernetes natively and works on all Kubernetes setups. Heapster runs as a pod in the cluster, similar to how any Kubernetes application would run. The Heapster pod discovers all nodes in the cluster and queries usage information from the nodes' [Kubelet](../../DESIGN.md#kubelet)s, the on-machine Kubernetes agent. The Kubelet itself fetches the data from [cAdvisor](https://github.com/google/cadvisor). Heapster groups the information by pod along with the relevant labels. This data is then pushed to a configurable backend for storage and visualization. Currently supported backends include [InfluxDB](http://influxdb.com/) (with [Grafana](http://grafana.org/) for visualization) and [Google Cloud Monitoring](https://cloud.google.com/monitoring/). The overall architecture of the service can be seen below:
![overall monitoring architecture](monitoring-architecture.png)
@ -30,7 +30,7 @@ cAdvisor is an open source container resource usage and performance analysis age
On most Kubernetes clusters, cAdvisor exposes a simple UI for on-machine containers on port 4194. Here is a snapshot of part of cAdvisor's UI that shows the overall machine usage:
![cAdvisor](cadvisor.png)
![cAdvisor](../cadvisor.png)
### Kubelet
@ -61,7 +61,7 @@ Here is a video showing how to setup and run a Google Cloud Monitoring backed He
Here is a snapshot of a Google Cloud Monitoring dashboard showing cluster-wide resource usage.
![Google Cloud Monitoring dashboard](gcm.png)
![Google Cloud Monitoring dashboard](../gcm.png)
## Try it out!
Now that you've learned a bit about Heapster, feel free to try it out on your own clusters! The [Heapster repository](https://github.com/GoogleCloudPlatform/heapster) is available on GitHub. It contains detailed instructions to setup Heapster and its storage backends. Heapster runs by default on most Kubernetes clusters, so you may already have it! Feedback is always welcome. Please let us know if you run into any issues. Heapster and Kubernetes developers hang out in the [#google-containers](http://webchat.freenode.net/?channels=google-containers) IRC channel on freenode.net. You can also reach us on the [google-containers Google Groups mailing list](https://groups.google.com/forum/#!forum/google-containers).
@ -72,5 +72,5 @@ Now that youve learned a bit about Heapster, feel free to try it out on your
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/monitoring.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/monitoring.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -18,9 +18,9 @@ Namespaces help different projects, teams, or customers to share a kubernetes cl
Use of multiple namespaces is optional. For small teams, they may not be needed.
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](design/namespaces.md).
Namespaces are still under development. For now, the best documentation is the [Namespaces Design Document](../design/namespaces.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/namespaces.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/namespaces.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -28,7 +28,7 @@ Then, to add a label to the node you've chosen, run `kubectl label nodes <node-n
If this fails with an "invalid command" error, you're likely using an older version of kubectl that doesn't have the `label` command. In that case, see the [previous version](https://github.com/GoogleCloudPlatform/kubernetes/blob/a053dbc313572ed60d89dae9821ecab8bfd676dc/examples/node-selection/README.md) of this guide for instructions on how to manually set labels on a node.
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](../../docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
Also, note that label keys must be in the form of DNS labels (as described in the [identifiers doc](../../../docs/design/identifiers.md)), meaning that they are not allowed to contain any upper-case letters.
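For example, a concrete invocation might look like this sketch (the node name and label are assumptions):

```bash
$ kubectl label nodes kubernetes-node-1 disktype=ssd
```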
You can verify that it worked by re-running `kubectl get nodes` and checking that the node now has a label.
@ -75,5 +75,5 @@ While this example only covered one node, you can attach labels to as many nodes
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/node-selection/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/node-selection/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -24,17 +24,17 @@ Users can create and manage pods themselves, but Kubernetes drastically simplifi
Frequently it is useful to refer to a set of pods, for example to limit the set of pods on which a mutating operation should be performed, or that should be queried for status. As a general mechanism, users can attach to most Kubernetes API objects arbitrary key-value pairs called [labels](labels.md), and then use a set of label selectors (key-value queries over labels) to constrain the target of API operations. Each resource also has a map of string keys and values that can be used by external tooling to store and retrieve arbitrary metadata about this object, called [annotations](annotations.md).
Kubernetes supports a unique [networking model](admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
Kubernetes supports a unique [networking model](../admin/networking.md). Kubernetes encourages a flat address space and does not dynamically allocate ports, instead allowing users to select whichever ports are convenient for them. To achieve this, it allocates an IP address for each pod.
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
Modern Internet applications are commonly built by layering micro-services, for example a set of web front-ends talking to a distributed in-memory key-value store talking to a replicated storage service. To facilitate this architecture, Kubernetes offers the [service](services.md) abstraction, which provides a stable IP address and [DNS name](../admin/dns.md) that corresponds to a dynamic set of pods such as the set of pods constituting a micro-service. The set is defined using a label selector and thus can refer to any set of pods. When a container running in a Kubernetes pod connects to this address, the connection is forwarded by a local agent (called the kube proxy) running on the source machine, to one of the corresponding back-end containers. The exact back-end is chosen using a round-robin policy to balance load. The kube proxy takes care of tracking the dynamic set of back-ends as pods are replaced by new pods on new hosts, so that the service IP address (and DNS name) never changes.
Every resource in Kubernetes, such as a pod, is identified by a URI and has a UID. Important components of the URI are the kind of object (e.g. pod), the object's name, and the object's [namespace](namespaces.md). For a certain object kind, every name is unique within its namespace. In contexts where an object name is provided without a namespace, it is assumed to be in the default namespace. A UID is unique across time and space.
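As a sketch of how these pieces compose into a resource URL (the local apiserver address and names are assumptions):

```bash
# kind = pods, namespace = default, name = my-pod
$ curl http://localhost:8080/api/v1/namespaces/default/pods/my-pod
```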
Other details:
* [API](api.md)
* [Client libraries](client-libraries.md)
* [Command-line interface](user-guide/kubectl/kubectl.md)
* [API](../api.md)
* [Client libraries](../client-libraries.md)
* [Command-line interface](kubectl/kubectl.md)
* [UI](ui.md)
* [Images and registries](images.md)
* [Container environment](container-environment.md)
@ -43,5 +43,5 @@ Other details:
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/overview.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/overview.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -47,7 +47,7 @@ A `PersistentVolume` (PV) is a piece of networked storage in the cluster that ha
A `PersistentVolumeClaim` (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific sizes and access modes (e.g., mounted once read/write or many times read-only).
Please see the [detailed walkthrough with working examples](../examples/persistent-volumes/).
Please see the [detailed walkthrough with working examples](persistent-volumes/).
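A minimal sketch of what such a claim might look like (the name and size are assumptions):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce      # request a volume mountable read/write by one node
  resources:
    requests:
      storage: 3Gi       # request at least 3 gibibytes of storage
```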
## Lifecycle of a volume and claim
@ -116,7 +116,7 @@ Each PV contains a spec and status, which is the specification and status of the
### Capacity
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](design/resources.md) to understand the units expected by `capacity`.
Generally, a PV will have a specific storage capacity. This is set using the PV's `capacity` attribute. See the Kubernetes [Resource Model](../design/resources.md) to understand the units expected by `capacity`.
Currently, storage size is the only resource that can be set or requested. Future attributes may include IOPS, throughput, etc.
@ -184,7 +184,7 @@ Claims use the same conventions as volumes when requesting storage with specific
### Resources
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](design/resources.md) applies to both volumes and claims.
Claims, like pods, can request specific quantities of a resource. In this case, the request is for storage. The same [resource model](../design/resources.md) applies to both volumes and claims.
## <a name="claims-as-volumes"></a> Claims As Volumes
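The example spec is elided in this hunk; a sketch of a pod consuming a claim as a volume (the names are assumptions):

```yaml
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim   # binds this volume to the claim named above
```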
@ -212,5 +212,5 @@ spec:
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/persistent-volumes.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/persistent-volumes.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View File

@ -114,5 +114,5 @@ Enjoy!
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/examples/persistent-volumes/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/user-guide/persistent-volumes/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

Some files were not shown because too many files have changed in this diff.