Fixed some typos:

Changed "an unified" to "a unified"
Changed "a extra" to "an extra"
Changed "for each pod have" to "for each pod to have"
pull/6/head
CJ Cullen 2014-08-20 09:49:50 -07:00
parent 0db7989809
commit 4a2c3c8c87
3 changed files with 3 additions and 3 deletions


@@ -102,7 +102,7 @@ Service endpoints are currently found through [Docker-links-compatible](https://
## The Kubernetes Control Plane
-The Kubernetes control plane is split into a set of components, but they all run on a single _master_ node. These work together to provide an unified view of the cluster.
+The Kubernetes control plane is split into a set of components, but they all run on a single _master_ node. These work together to provide a unified view of the cluster.
### etcd


@@ -20,7 +20,7 @@ An alternative we considered was an additional layer of addressing: pod-centric
## Current implementation
-For the Google Compute Engine cluster configuration scripts, [advanced routing](https://developers.google.com/compute/docs/networking#routing) is set up so that each VM has a extra 256 IP addresses that get routed to it. This is in addition to the 'main' IP address assigned to the VM that is NAT-ed for Internet access. The networking bridge (called `cbr0` to differentiate it from `docker0`) is set up outside of Docker proper and only does NAT for egress network traffic that isn't aimed at the virtual network.
+For the Google Compute Engine cluster configuration scripts, [advanced routing](https://developers.google.com/compute/docs/networking#routing) is set up so that each VM has an extra 256 IP addresses that get routed to it. This is in addition to the 'main' IP address assigned to the VM that is NAT-ed for Internet access. The networking bridge (called `cbr0` to differentiate it from `docker0`) is set up outside of Docker proper and only does NAT for egress network traffic that isn't aimed at the virtual network.
Ports mapped in from the 'main IP' (and hence the internet if the right firewall rules are set up) are proxied in user mode by Docker. In the future, this should be done with `iptables` by either the Kubelet or Docker: [Issue #15](https://github.com/GoogleCloudPlatform/kubernetes/issues/15).
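
To make the "extra 256 IP addresses" in the hunk above concrete: that is one /24 per VM carved out of a cluster-wide pod range. Here is a minimal Python sketch (not part of this commit; the `10.244.0.0/16` value is an assumed example, not taken from the docs) showing the address math:

```python
import ipaddress

# Hypothetical cluster-wide pod range; 10.244.0.0/16 is an assumption
# for illustration, not a value specified in the document.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")

# Carving the range into /24s gives each VM a block of 256 addresses,
# matching the "extra 256 IP addresses" routed to each node above.
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

for i, subnet in enumerate(node_subnets[:3]):
    print(f"node-{i}: {subnet} ({subnet.num_addresses} addresses)")
```

Each node's /24 is then routed to its `cbr0` bridge, while egress traffic leaving the virtual network is NAT-ed as the hunk describes.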


@@ -6,7 +6,7 @@ Why doesn't Kubernetes just support an affinity mechanism for co-scheduling cont
In addition to defining the containers that run in the pod, the pod specifies a set of shared storage volumes. Pods facilitate data sharing and IPC among their constituents. In the future, they may share CPU and/or memory ([LPC2013](http://www.linuxplumbersconf.org/2013/ocw//system/presentations/1239/original/lmctfy%20(1).pdf)).
-The containers in the pod also all use the same network namespace/IP (and port space). The goal is for each pod have an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. [More details on networking](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md).
+The containers in the pod also all use the same network namespace/IP (and port space). The goal is for each pod to have an IP address in a flat shared networking namespace that has full communication with other physical computers and containers across the network. [More details on networking](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/networking.md).
While pods can be used to host vertically integrated application stacks, their primary motivation is to support co-located, co-managed helper programs, such as:
- content management systems, file and data loaders, local cache managers, etc.
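
The hunk above notes that all containers in a pod share one network namespace, IP, and port space. A small Python sketch (illustrative only, not from this commit; the address and port are arbitrary) shows the practical consequence: two listeners in the same namespace cannot bind the same port, which is why containers in a pod must coordinate port usage.

```python
import socket

# Open a listening socket on a port, then try to bind a second socket
# to the same address: within one network namespace the second bind
# fails, just as two containers in one pod cannot both claim a port.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 8080))
first.listen()

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", 8080))
except OSError as err:
    print("second bind failed:", err)  # e.g. "Address already in use"
finally:
    first.close()
    second.close()
```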