Merge pull request #12906 from errordeveloper/master

Make typography a bit more consistent
Saad Ali 2015-08-19 11:06:30 -07:00
commit 0f2268560c
8 changed files with 63 additions and 63 deletions


@@ -80,8 +80,8 @@ uses host-private networking. It creates a virtual bridge, called `docker0` by
default, and allocates a subnet from one of the private address blocks defined
in [RFC1918](https://tools.ietf.org/html/rfc1918) for that bridge. For each
container that Docker creates, it allocates a virtual ethernet device (called
-`veth`) which is attached to the bridge. The veth is mapped to appear as eth0
-in the container, using Linux namespaces. The in-container eth0 interface is
+`veth`) which is attached to the bridge. The veth is mapped to appear as `eth0`
+in the container, using Linux namespaces. The in-container `eth0` interface is
given an IP address from the bridge's address range.
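A quick way to see this wiring on a Docker host is to inspect the bridge and its attached veth devices; a minimal sketch (interface names and addresses are the usual defaults, not mandated values):

```console
$ ip addr show docker0                         # the bridge, with a subnet from an RFC1918 block
$ ip link | grep veth                          # one veth endpoint per running container
$ docker exec <container> ip addr show eth0    # the in-container end of the veth pair
```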
The result is that Docker containers can talk to other containers only if they
@@ -147,7 +147,7 @@ here.
For the Google Compute Engine cluster configuration scripts, we use [advanced
routing](https://developers.google.com/compute/docs/networking#routing) to
-assign each VM a subnet (default is /24 - 254 IPs). Any traffic bound for that
+assign each VM a subnet (default is `/24` - 254 IPs). Any traffic bound for that
subnet will be routed directly to the VM by the GCE network fabric. This is in
addition to the "main" IP address assigned to the VM, which is NAT'ed for
outbound internet access. A linux bridge (called `cbr0`) is configured to exist


@@ -33,7 +33,7 @@ Documentation for other releases can be found at
# Kubernetes architecture
-A running Kubernetes cluster contains node agents (kubelet) and master components (APIs, scheduler, etc), on top of a distributed storage solution. This diagram shows our desired eventual state, though we're still working on a few things, like making kubelet itself (all our components, really) run within containers, and making the scheduler 100% pluggable.
+A running Kubernetes cluster contains node agents (`kubelet`) and master components (APIs, scheduler, etc), on top of a distributed storage solution. This diagram shows our desired eventual state, though we're still working on a few things, like making `kubelet` itself (all our components, really) run within containers, and making the scheduler 100% pluggable.
![Architecture Diagram](architecture.png?raw=true "Architecture overview")
@@ -45,21 +45,21 @@ The Kubernetes node has the services necessary to run application containers and
Each node runs Docker, of course. Docker takes care of the details of downloading images and running containers.
-### Kubelet
+### `kubelet`
-The **Kubelet** manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
+The `kubelet` manages [pods](../user-guide/pods.md) and their containers, their images, their volumes, etc.
-### Kube-Proxy
+### `kube-proxy`
Each node also runs a simple network proxy and load balancer (see the [services FAQ](https://github.com/GoogleCloudPlatform/kubernetes/wiki/Services-FAQ) for more details). This reflects `services` (see [the services doc](../user-guide/services.md) for more details) as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
-Service endpoints are currently found via [DNS](../admin/dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported). These variables resolve to ports managed by the service proxy.
+Service endpoints are currently found via [DNS](../admin/dns.md) or through environment variables (both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and Kubernetes `{FOO}_SERVICE_HOST` and `{FOO}_SERVICE_PORT` variables are supported). These variables resolve to ports managed by the service proxy.
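As a sketch of the resulting variables for a hypothetical service named `foo` exposing port 80 (the address is illustrative):

```console
# Environment seen inside a container:
FOO_SERVICE_HOST=10.0.0.11
FOO_SERVICE_PORT=80
```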
## The Kubernetes Control Plane
The Kubernetes control plane is split into a set of components. Currently they all run on a single _master_ node, but that is expected to change soon in order to support high-availability clusters. These components work together to provide a unified view of the cluster.
-### etcd
+### `etcd`
All persistent master state is stored in an instance of `etcd`. This provides a great way to store configuration data reliably. With `watch` support, coordinating components can be notified very quickly of changes.
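For example, a coordinating component (or a curious administrator) can subscribe to changes with the etcd v2 CLI; a sketch, assuming the conventional `/registry` storage prefix:

```console
# Blocks and prints each change under the Kubernetes storage root as it happens.
$ etcdctl watch --recursive /registry
```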


@@ -137,8 +137,8 @@ GKE | | | GCE | [docs](https://clou
Vagrant | Saltstack | Fedora | OVS | [docs](vagrant.md) | [✓][2] | Project
GCE | Saltstack | Debian | GCE | [docs](gce.md) | [✓][1] | Project
Azure | CoreOS | CoreOS | Weave | [docs](coreos/azure/README.md) | | Community ([@errordeveloper](https://github.com/errordeveloper), [@squillace](https://github.com/squillace), [@chanezon](https://github.com/chanezon), [@crossorigin](https://github.com/crossorigin))
-Docker Single Node | custom | N/A | local | [docs](docker.md) | | Project (@brendandburns)
-Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | | Project (@brendandburns)
+Docker Single Node | custom | N/A | local | [docs](docker.md) | | Project ([@brendandburns](https://github.com/brendandburns))
+Docker Multi Node | Flannel | N/A | local | [docs](docker-multinode.md) | | Project ([@brendandburns](https://github.com/brendandburns))
Bare-metal | Ansible | Fedora | flannel | [docs](fedora/fedora_ansible_config.md) | | Project
Bare-metal | custom | Fedora | _none_ | [docs](fedora/fedora_manual_config.md) | | Project
Bare-metal | custom | Fedora | flannel | [docs](fedora/flannel_multi_node_cluster.md) | | Community ([@aveshagarwal](https://github.com/aveshagarwal))
@@ -147,26 +147,26 @@ KVM | custom | Fedora | flannel | [docs](fedora/flann
Mesos/Docker | custom | Ubuntu | Docker | [docs](mesos-docker.md) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
Mesos/GCE | | | | [docs](mesos.md) | | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
AWS | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community
-GCE | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community [@pires](https://github.com/pires)
-Vagrant | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ( [@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles) )
-Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](coreos/bare_metal_offline.md) | | Community([@jeffbean](https://github.com/jeffbean))
-Bare-metal | CoreOS | CoreOS | Calico | [docs](coreos/bare_metal_calico.md) | | Community([@caseydavenport](https://github.com/caseydavenport))
-CloudStack | Ansible | CoreOS | flannel | [docs](cloudstack.md) | | Community (@runseb)
-Vmware | | Debian | OVS | [docs](vsphere.md) | | Community (@pietern)
-Bare-metal | custom | CentOS | _none_ | [docs](centos/centos_manual_config.md) | | Community(@coolsvap)
+GCE | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ([@pires](https://github.com/pires))
+Vagrant | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ([@pires](https://github.com/pires), [@AntonioMeireles](https://github.com/AntonioMeireles))
+Bare-metal (Offline) | CoreOS | CoreOS | flannel | [docs](coreos/bare_metal_offline.md) | | Community ([@jeffbean](https://github.com/jeffbean))
+Bare-metal | CoreOS | CoreOS | Calico | [docs](coreos/bare_metal_calico.md) | | Community ([@caseydavenport](https://github.com/caseydavenport))
+CloudStack | Ansible | CoreOS | flannel | [docs](cloudstack.md) | | Community ([@runseb](https://github.com/runseb))
+Vmware | | Debian | OVS | [docs](vsphere.md) | | Community ([@pietern](https://github.com/pietern))
+Bare-metal | custom | CentOS | _none_ | [docs](centos/centos_manual_config.md) | | Community ([@coolsvap](https://github.com/coolsvap))
AWS | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
OpenStack/HPCloud | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
Joyent | Juju | Ubuntu | flannel | [docs](juju.md) | | [Community](https://github.com/whitmo/bundle-kubernetes) ( [@whit](https://github.com/whitmo), [@matt](https://github.com/mbruzek), [@chuck](https://github.com/chuckbutler) )
-AWS | Saltstack | Ubuntu | OVS | [docs](aws.md) | | Community (@justinsb)
-Vmware | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community (@kelseyhightower)
+AWS | Saltstack | Ubuntu | OVS | [docs](aws.md) | | Community ([@justinsb](https://github.com/justinsb))
+Vmware | CoreOS | CoreOS | flannel | [docs](coreos.md) | | Community ([@kelseyhightower](https://github.com/kelseyhightower))
Azure | Saltstack | Ubuntu | OpenVPN | [docs](azure.md) | | Community
-Bare-metal | custom | Ubuntu | Calico | [docs](ubuntu-calico.md) | | Community (@djosborne)
-Bare-metal | custom | Ubuntu | flannel | [docs](ubuntu.md) | | Community (@resouer @WIZARD-CXY)
-Local | | | _none_ | [docs](locally.md) | | Community (@preillyme)
-libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](libvirt-coreos.md) | | Community (@lhuard1A)
-oVirt | | | | [docs](ovirt.md) | | Community (@simon3z)
-Rackspace | CoreOS | CoreOS | flannel | [docs](rackspace.md) | | Community (@doublerr)
-any | any | any | any | [docs](scratch.md) | | Community (@erictune)
+Bare-metal | custom | Ubuntu | Calico | [docs](ubuntu-calico.md) | | Community ([@djosborne](https://github.com/djosborne))
+Bare-metal | custom | Ubuntu | flannel | [docs](ubuntu.md) | | Community ([@resouer](https://github.com/resouer), [@WIZARD-CXY](https://github.com/WIZARD-CXY))
+Local | | | _none_ | [docs](locally.md) | | Community ([@preillyme](https://github.com/preillyme))
+libvirt/KVM | CoreOS | CoreOS | libvirt/KVM | [docs](libvirt-coreos.md) | | Community ([@lhuard1A](https://github.com/lhuard1A))
+oVirt | | | | [docs](ovirt.md) | | Community ([@simon3z](https://github.com/simon3z))
+Rackspace | CoreOS | CoreOS | flannel | [docs](rackspace.md) | | Community ([@doublerr](https://github.com/doublerr))
+any | any | any | any | [docs](scratch.md) | | Community ([@erictune](https://github.com/erictune))
*Note*: The above table is ordered by the version tested/used in the notes, followed by support level.


@@ -45,7 +45,7 @@ need the features they provide.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.
-Namespaces are a way to divide cluster resources between multiple uses (via [resource quota](../../docs/admin/resource-quota.md).
+Namespaces are a way to divide cluster resources between multiple uses (via [resource quota](../../docs/admin/resource-quota.md)).
In future versions of Kubernetes, objects in the same namespace will have the same
access control policies by default.
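As a minimal sketch, a namespace is itself just an API object (the name `my-namespace` is hypothetical):

```json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "my-namespace"
  }
}
```

Once created (e.g. with `kubectl create -f`), resources can be placed in it via the `--namespace` flag or a `metadata.namespace` field.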


@@ -105,7 +105,7 @@ data:
```
The data field is a map. Its keys must match
-[DNS_SUBDOMAIN](../design/identifiers.md), except that leading dots are also
+[`DNS_SUBDOMAIN`](../design/identifiers.md), except that leading dots are also
allowed. The values are arbitrary data, encoded using base64. The values of
username and password in the example above, before base64 encoding,
are `value-1` and `value-2`, respectively, with carriage return and newline characters at the end.
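You can reproduce such values with the standard `base64` tool; a sketch (note the explicit `\r\n` mentioned above):

```console
$ printf 'value-1\r\n' | base64
dmFsdWUtMQ0K
```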
@@ -167,8 +167,8 @@ Use of imagePullSecrets is described in the [images documentation](images.md#spe
*This feature is planned but not implemented. See [issue
9902](http://issue.k8s.io/9902).*
-You can reference manually created secrets from a [service account](service-accounts.md).
-Then, pods which use that service account will have
+You can reference manually created secrets from a [Service Account](service-accounts.md).
+Then, pods which use that Service Account will have
`volumeMounts` and/or `imagePullSecrets` added to them.
The secrets will be mounted at **TBD**.
@@ -441,7 +441,7 @@ Both containers will have the following files present on their filesystems:
Note how the specs for the two pods differ only in one field; this facilitates
creating pods with different capabilities from a common pod config template.
-You could further simplify the base pod specification by using two service accounts:
+You could further simplify the base pod specification by using two Service Accounts:
one called, say, `prod-user` with the `prod-db-secret`, and one called, say,
`test-user` with the `test-db-secret`. Then, the pod spec can be shortened to, for example:
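A hedged sketch of what such a shortened spec might look like, assuming the v1-era `spec.serviceAccount` field (the pod and container names here are hypothetical):

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "prod-db-client-pod"
  },
  "spec": {
    "serviceAccount": "prod-user",
    "containers": [
      {
        "name": "db-client",
        "image": "example/db-client"
      }
    ]
  }
}
```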


@@ -43,12 +43,12 @@ as recommended by the Kubernetes project. Your cluster administrator may have
customized the behavior in your cluster, in which case this documentation may
not apply.*
-When you (a human) access the cluster (e.g. using kubectl), you are
+When you (a human) access the cluster (e.g. using `kubectl`), you are
authenticated by the apiserver as a particular User Account (currently this is
usually "admin", unless your cluster administrator has customized your
usually `admin`, unless your cluster administrator has customized your
cluster). Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account (e.g.
"default").
`default`).
## Using the Default Service Account to access the API server.
@@ -63,7 +63,7 @@ You can access the API using a proxy or with a client library, as described in
## Using Multiple Service Accounts.
-Every namespace has a default service account resource called "default".
+Every namespace has a default service account resource called `default`.
You can list this and any other serviceAccount resources in the namespace with this command:
```console
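# A sketch of the listing command (the original snippet is cut off at this point):
$ kubectl get serviceAccounts
```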


@@ -96,7 +96,7 @@ which redirects to the backend `Pods`.
A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the
REST objects, a `Service` definition can be POSTed to the apiserver to create a
new instance. For example, suppose you have a set of `Pods` that each expose
-port 9376 and carry a label "app=MyApp".
+port 9376 and carry a label `"app=MyApp"`.
```json
{
@@ -121,7 +121,7 @@ port 9376 and carry a label "app=MyApp".
```
This specification will create a new `Service` object named "my-service" which
-targets TCP port 9376 on any `Pod` with the "app=MyApp" label. This `Service`
+targets TCP port 9376 on any `Pod` with the `"app=MyApp"` label. This `Service`
will also be assigned an IP address (sometimes called the "cluster IP"), which
is used by the service proxies (see below). The `Service`'s selector will be
evaluated continuously and the results will be POSTed to an `Endpoints` object
@@ -266,7 +266,7 @@ request. To do this, set the `spec.clusterIP` field. For example, if you
already have an existing DNS entry that you wish to replace, or legacy systems
that are configured for a specific IP address and difficult to re-configure.
The IP address that a user chooses must be a valid IP address and within the
-service-cluster-ip-range CIDR range that is specified by flag to the API
+`service-cluster-ip-range` CIDR range that is specified by flag to the API
server. If the IP address value is invalid, the apiserver returns a 422 HTTP
status code to indicate that the value is invalid.
@@ -299,7 +299,7 @@ compatible](https://docs.docker.com/userguide/dockerlinks/) variables (see
and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
For example, the Service "redis-master" which exposes TCP port 6379 and has been
For example, the Service `"redis-master"` which exposes TCP port 6379 and has been
allocated cluster IP address 10.0.0.11 produces the following environment
variables:
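Following the stated rule (`redis-master` becomes `REDIS_MASTER`), a sketch of the core pair:

```console
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
# Docker-links-compatible variables (REDIS_MASTER_PORT, etc.) are also set,
# but are omitted from this sketch.
```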
@@ -325,17 +325,17 @@ DNS server watches the Kubernetes API for new `Services` and creates a set of
DNS records for each. If DNS has been enabled throughout the cluster then all
`Pods` should be able to do name resolution of `Services` automatically.
-For example, if you have a `Service` called "my-service" in Kubernetes
-`Namespace` "my-ns" a DNS record for "my-service.my-ns" is created. `Pods`
-which exist in the "my-ns" `Namespace` should be able to find it by simply doing
-a name lookup for "my-service". `Pods` which exist in other `Namespaces` must
-qualify the name as "my-service.my-ns". The result of these name lookups is the
+For example, if you have a `Service` called `"my-service"` in Kubernetes
+`Namespace` `"my-ns"` a DNS record for `"my-service.my-ns"` is created. `Pods`
+which exist in the `"my-ns"` `Namespace` should be able to find it by simply doing
+a name lookup for `"my-service"`. `Pods` which exist in other `Namespaces` must
+qualify the name as `"my-service.my-ns"`. The result of these name lookups is the
cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the
"my-service.my-ns" `Service` has a port named "http" with protocol `TCP`, you
can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port
number for "http".
`"my-service.my-ns"` `Service` has a port named `"http"` with protocol `TCP`, you
can do a DNS SRV query for `"_http._tcp.my-service.my-ns"` to discover the port
number for `"http"`.
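A sketch of such a lookup from inside a cluster `Pod` using `dig` (the SRV answer carries priority, weight, port, and target):

```console
$ dig +short _http._tcp.my-service.my-ns SRV
```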
## Headless services


@@ -91,9 +91,9 @@ medium that backs it, and the contents of it are determined by the particular
volume type used.
To use a volume, a pod specifies what volumes to provide for the pod (the
-[spec.volumes](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
+[`spec.volumes`](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
field) and where to mount those into containers (the
-[spec.containers.volumeMounts](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
+[`spec.containers.volumeMounts`](http://kubernetes.io/third_party/swagger-ui/#!/v1/createPod)
field).
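A minimal sketch tying the two fields together (the names and image are hypothetical, and `emptyDir` is used purely for illustration):

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "test-pod"
  },
  "spec": {
    "containers": [
      {
        "name": "web",
        "image": "nginx",
        "volumeMounts": [
          {
            "name": "scratch",
            "mountPath": "/cache"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "scratch",
        "emptyDir": {}
      }
    ]
  }
}
```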
A process in a container sees a filesystem view composed from their Docker
@@ -107,17 +107,17 @@ mount each volume.
## Types of Volumes
Kubernetes supports several types of Volumes:
-* emptyDir
-* hostPath
-* gcePersistentDisk
-* awsElasticBlockStore
-* nfs
-* iscsi
-* glusterfs
-* rbd
-* gitRepo
-* secret
-* persistentVolumeClaim
+* `emptyDir`
+* `hostPath`
+* `gcePersistentDisk`
+* `awsElasticBlockStore`
+* `nfs`
+* `iscsi`
+* `glusterfs`
+* `rbd`
+* `gitRepo`
+* `secret`
+* `persistentVolumeClaim`
We welcome additional contributions.
@@ -291,8 +291,8 @@ See the [NFS example](../../examples/nfs/) for more details.
For example, [this file](../../examples/nfs/nfs-web-pod.yaml) demonstrates how to
specify the usage of an NFS volume within a pod.
-In this example one can see that a `volumeMount` called "nfs" is being mounted
-onto `/var/www/html` in the container "web". The volume "nfs" is defined as
+In this example one can see that a `volumeMount` called `nfs` is being mounted
+onto `/var/www/html` in the container `web`. The volume "nfs" is defined as
type `nfs`, with the NFS server serving from `nfs-server.default.kube.local`
and exporting directory `/` as the share. The mount being created in this
example is writeable.
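A sketch of the relevant pieces as described above (the pod name is hypothetical; the server, path, and mount point come from the text):

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "nfs-web"
  },
  "spec": {
    "containers": [
      {
        "name": "web",
        "image": "nginx",
        "volumeMounts": [
          {
            "name": "nfs",
            "mountPath": "/var/www/html"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "nfs",
        "nfs": {
          "server": "nfs-server.default.kube.local",
          "path": "/"
        }
      }
    ]
  }
}
```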