Merge pull request #9907 from RichieEscarez/9404_controller

Changed "controller" to "replication controller"
Satnam Singh 2015-06-18 14:33:49 -07:00
commit 424d09fb97
12 changed files with 53 additions and 52 deletions

@@ -28,9 +28,9 @@ Kinds are grouped into three categories:
Creating an API object is a record of intent - once created, the system will work to ensure that resource exists. All API objects have common metadata.
-An object may have multiple resources that clients can use to perform specific actions than create, update, delete, or get.
+An object may have multiple resources that clients can use to perform specific actions that create, update, delete, or get.
-Examples: Pods, ReplicationControllers, Services, Namespaces, Nodes
+Examples: `Pods`, `ReplicationControllers`, `Services`, `Namespaces`, `Nodes`
2. **Lists** are collections of **resources** of one (usually) or more (occasionally) kinds.
@@ -301,13 +301,14 @@ Late Initialization
Late initialization is when resource fields are set by a system controller
after an object is created/updated.
-For example, the scheduler sets the pod.spec.nodeName field after the pod is created.
+For example, the scheduler sets the `pod.spec.nodeName` field after the pod is created.
Late-initializers should only make the following types of modifications:
-- Setting previously unset fields
-- Adding keys to maps
-- Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in
-  go definition of type).
+- Setting previously unset fields
+- Adding keys to maps
+- Adding values to arrays which have mergeable semantics (`patchStrategy:"merge"` attribute in
+  the type definition).
These conventions:
1. allow a user (with sufficient privilege) to override any system-default behaviors by setting
the fields that would otherwise have been defaulted.
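To make the first of these conventions concrete, here is a minimal sketch, with invented object and node names, of a pod before and after the scheduler late-initializes `pod.spec.nodeName`:

```yaml
# As submitted by the user: spec.nodeName is left unset.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
# After scheduling, the same object carries the late-initialized field:
#
#   spec:
#     nodeName: node-1   # set by the scheduler; a sufficiently privileged
#                        # user could have set it up front to override this
```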
@@ -318,7 +319,7 @@ These conventions:
Although the apiserver Admission Control stage acts prior to object creation,
Admission Control plugins should follow the Late Initialization conventions
-too, to allow their implementation to be later moved to a controller, or to client libraries.
+too, to allow their implementation to be later moved to a 'controller', or to client libraries.
Concurrency Control and Consistency
-----------------------------------

@@ -193,7 +193,7 @@ K8s authorization should:
- Allow for a range of maturity levels, from single-user for those test driving the system, to integration with existing enterprise authorization systems.
- Allow for centralized management of users and policies. In some organizations, this will mean that the definition of users and access policies needs to reside on a system other than k8s and encompass other web services (such as a storage service).
- Allow processes running in K8s Pods to take on identity, and to allow narrow scoping of permissions for those identities in order to limit damage from software faults.
-- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Controllers, Services, and the identities and policies for those Pods and Controllers.
+- Have Authorization Policies exposed as API objects so that a single config file can create or delete Pods, Replication Controllers, Services, and the identities and policies for those Pods and Replication Controllers.
- Be separate as much as practical from Authentication, to allow Authentication methods to change over time and space, without impacting Authorization policies.
K8s will implement a relatively simple

@@ -5,7 +5,7 @@
Processes in Pods may need to call the Kubernetes API. For example:
- scheduler
- replication controller
-- minion controller
+- node controller
- a map-reduce type framework which has a controller that creates a dynamically determined number of workers and watches them
- continuous build and push system
- monitoring system

@@ -8,20 +8,20 @@ Assume that we have a current replication controller named ```foo``` and it is r
```kubectl rolling-update rc foo [foo-v2] --image=myimage:v2```
-If the user doesn't specify a name for the 'next' controller, then the 'next' controller is renamed to
-the name of the original controller.
+If the user doesn't specify a name for the 'next' replication controller, then the 'next' replication controller is renamed to
+the name of the original replication controller.
Obviously there is a race here, where if you kill the client between deleting foo and creating the new version of 'foo' you might be surprised about what is there, but I think that's ok.
See [Recovery](#recovery) below
-If the user does specify a name for the 'next' controller, then the 'next' controller is retained with its existing name,
-and the old 'foo' controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` controllers.
-The value of that label is the hash of the complete JSON representation of the```foo-next``` or```foo``` controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
+If the user does specify a name for the 'next' replication controller, then the 'next' replication controller is retained with its existing name,
+and the old 'foo' replication controller is deleted. For the purposes of the rollout, we add a unique-ifying label ```kubernetes.io/deployment``` to both the ```foo``` and ```foo-next``` replication controllers.
+The value of that label is the hash of the complete JSON representation of the ```foo-next``` or ```foo``` replication controller. The name of this label can be overridden by the user with the ```--deployment-label-key``` flag.
#### Recovery
If a rollout fails or is terminated in the middle, it is important that the user be able to resume the roll out.
-To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replicaController in the ```kubernetes.io/``` annotation namespace:
-* ```desired-replicas``` The desired number of replicas for this controller (either N or zero)
+To facilitate recovery in the case of a crash of the updating process itself, we add the following annotations to each replication controller in the ```kubernetes.io/``` annotation namespace:
+* ```desired-replicas``` The desired number of replicas for this replication controller (either N or zero)
* ```update-partner``` A pointer to the replication controller resource that is the other half of this update (syntax ```<name>```; the namespace is assumed to be identical to the namespace of this replication controller.)
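Putting the label and annotations together, the old 'foo' replication controller mid-rollout might carry metadata roughly like this (a sketch; the hash and replica values are invented for illustration):

```yaml
# Hypothetical metadata on 'foo' while it is being scaled down.
metadata:
  name: foo
  labels:
    kubernetes.io/deployment: "3e9f24b7"      # hash of foo's JSON representation (invented value)
  annotations:
    kubernetes.io/desired-replicas: "0"       # foo is headed for zero replicas
    kubernetes.io/update-partner: "foo-next"  # the other half of this update
```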
Recovery is achieved by issuing the same command again:

@@ -7,9 +7,9 @@ Perform a rolling update of the given ReplicationController.
Perform a rolling update of the given ReplicationController.
-Replaces the specified controller with new controller, updating one pod at a time to use the
+Replaces the specified replication controller with a new replication controller by updating one pod at a time to use the
new PodTemplate. The new-controller.json must specify the same namespace as the
-existing controller and overwrite at least one (common) label in its replicaSelector.
+existing replication controller and overwrite at least one (common) label in its replicaSelector.
```
kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CONTAINER_IMAGE | -f NEW_CONTROLLER_SPEC)
@@ -18,7 +18,7 @@ kubectl rolling-update OLD_CONTROLLER_NAME ([NEW_CONTROLLER_NAME] --image=NEW_CO
### Examples
```
-// Update pods of frontend-v1 using new controller data in frontend-v2.json.
+// Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json
// Update pods of frontend-v1 using JSON data passed into stdin.
@@ -38,16 +38,16 @@ $ kubectl rolling-update frontend --image=image:v2
```
--deployment-label-key="deployment": The key to use to differentiate between two different controllers, default 'deployment'. Only relevant when --image is specified, ignored otherwise
--dry-run=false: If true, print out the changes that would be made, but don't actually make them.
--f, --filename="": Filename or URL to file to use to create the new controller.
+-f, --filename="": Filename or URL to file to use to create the new replication controller.
-h, --help=false: help for rolling-update
---image="": Image to upgrade the controller to. Can not be used with --filename/-f
+--image="": Image to use for upgrading the replication controller. Can not be used with --filename/-f
--no-headers=false: When using the default output, don't print headers.
-o, --output="": Output format. One of: json|yaml|template|templatefile.
--output-version="": Output the formatted object with the given version (default api-version).
---poll-interval="3s": Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+--poll-interval="3s": Time delay between polling for replication controller status after the update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--rollback=false: If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout
-t, --template="": Template string or path to template file to use when -o=template or -o=templatefile. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]
---timeout="5m0s": Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+--timeout="5m0s": Max time to wait for a replication controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
--update-period="1m0s": Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
```
@@ -83,6 +83,6 @@ $ kubectl rolling-update frontend --image=image:v2
### SEE ALSO
* [kubectl](kubectl.md) - kubectl controls the Kubernetes cluster manager
-###### Auto generated by spf13/cobra at 2015-05-21 10:33:11.184123104 +0000 UTC
+###### Auto generated by spf13/cobra at 2015-06-17 14:57:27.791796674 +0000 UTC
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/kubectl_rolling-update.md?pixel)]()

@@ -16,9 +16,9 @@ kubectl rolling\-update \- Perform a rolling update of the given ReplicationCont
Perform a rolling update of the given ReplicationController.
.PP
-Replaces the specified controller with new controller, updating one pod at a time to use the
+Replaces the specified replication controller with a new replication controller by updating one pod at a time to use the
new PodTemplate. The new\-controller.json must specify the same namespace as the
-existing controller and overwrite at least one (common) label in its replicaSelector.
+existing replication controller and overwrite at least one (common) label in its replicaSelector.
.SH OPTIONS
@@ -32,7 +32,7 @@ existing controller and overwrite at least one (common) label in its replicaSele
.PP
\fB\-f\fP, \fB\-\-filename\fP=""
-Filename or URL to file to use to create the new controller.
+Filename or URL to file to use to create the new replication controller.
.PP
\fB\-h\fP, \fB\-\-help\fP=false
@@ -40,7 +40,7 @@ existing controller and overwrite at least one (common) label in its replicaSele
.PP
\fB\-\-image\fP=""
-Image to upgrade the controller to. Can not be used with \-\-filename/\-f
+Image to use for upgrading the replication controller. Can not be used with \-\-filename/\-f
.PP
\fB\-\-no\-headers\fP=false
@@ -56,7 +56,7 @@ existing controller and overwrite at least one (common) label in its replicaSele
.PP
\fB\-\-poll\-interval\fP="3s"
-Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+Time delay between polling for replication controller status after the update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
.PP
\fB\-\-rollback\fP=false
@@ -69,7 +69,7 @@ existing controller and overwrite at least one (common) label in its replicaSele
.PP
\fB\-\-timeout\fP="5m0s"
-Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+Max time to wait for a replication controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
.PP
\fB\-\-update\-period\fP="1m0s"
@@ -179,7 +179,7 @@ existing controller and overwrite at least one (common) label in its replicaSele
.RS
.nf
-// Update pods of frontend\-v1 using new controller data in frontend\-v2.json.
+// Update pods of frontend\-v1 using new replication controller data in frontend\-v2.json.
$ kubectl rolling\-update frontend\-v1 \-f frontend\-v2.json
// Update pods of frontend\-v1 using JSON data passed into stdin.

@@ -97,7 +97,7 @@ Sometimes more complex policies may be desired, such as:
limit to prevent accidental resource exhaustion.
Such policies could be implemented using ResourceQuota as a building-block, by
-writing a controller which watches the quota usage and adjusts the quota
+writing a 'controller' which watches the quota usage and adjusts the quota
hard limits of each namespace.
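As a sketch of the building block involved (namespace and limits are invented for the example, assuming the v1 `ResourceQuota` kind), such a controller would watch usage and rewrite the `hard` section of each namespace's quota:

```yaml
# A hypothetical per-namespace quota; the controller adjusts spec.hard over time.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
  namespace: team-a              # invented namespace
spec:
  hard:
    pods: "20"                   # ceilings the controller could raise or lower
    replicationcontrollers: "10"
```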

@@ -103,17 +103,17 @@ start until all the pod's volumes are mounted.
Once the kubelet has started a pod's containers, its secret volumes will not
change, even if the secret resource is modified. To change the secret used,
the original pod must be deleted, and a new pod (perhaps with an identical
-PodSpec) must be created. Therefore, updating a secret follows the same
+`PodSpec`) must be created. Therefore, updating a secret follows the same
workflow as deploying a new container image. The `kubectl rolling-update`
command can be used ([man page](kubectl_rolling-update.md)).
-The resourceVersion of the secret is not specified when it is referenced.
+The `resourceVersion` of the secret is not specified when it is referenced.
Therefore, if a secret is updated at about the same time as pods are starting,
then it is not defined which version of the secret will be used for the pod. It
is not possible currently to check what resource version of a secret object was
used when a pod was created. It is planned that pods will report this
-information, so that a controller could restart ones using a old
-resourceVersion. In the interim, if this is a concern, it is recommended to not
+information, so that a replication controller can restart ones using an old
+`resourceVersion`. In the interim, if this is a concern, it is recommended to not
update the data of existing secrets, but to create new ones with distinct names.
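A minimal sketch of that recommendation (names and payload invented for the example): rather than updating `mysecret` in place, create a successor with a distinct name and point new pods at it:

```yaml
# Hypothetical successor secret; the original mysecret stays untouched.
apiVersion: v1
kind: Secret
metadata:
  name: mysecret-v2              # distinct name rather than an in-place update
data:
  password: djI=                 # base64 for "v2"; invented payload
```

New pods (or an updated pod template rolled out with `kubectl rolling-update`) then reference `mysecret-v2` in their secret volume, so old and new data never mix.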
## Use cases

@@ -128,7 +128,7 @@ Of course, a single node cluster isn't particularly interesting. The real power
In Kubernetes a _[Replication Controller](../../docs/replication-controller.md)_ is responsible for replicating sets of identical pods. Like a _Service_ it has a selector query which identifies the members of its set. Unlike a _Service_ it also has a desired number of replicas, and it will create or delete _Pods_ to ensure that the number of _Pods_ matches up with its desired state.
-Replication Controllers will "adopt" existing pods that match their selector query, so let's create a Replication Controller with a single replica to adopt our existing Cassandra Pod.
+Replication controllers will "adopt" existing pods that match their selector query, so let's create a replication controller with a single replica to adopt our existing Cassandra pod.
```yaml
apiVersion: v1
@@ -172,7 +172,7 @@ spec:
emptyDir: {}
```
-The bulk of the replication controller config is actually identical to the Cassandra pod declaration above, it simply gives the controller a recipe to use when creating new pods. The other parts are the ```replicaSelector``` which contains the controller's selector query, and the ```replicas``` parameter which specifies the desired number of replicas, in this case 1.
+Most of this replication controller definition is identical to the Cassandra pod definition above; it simply gives the replication controller a recipe to use when it creates new Cassandra pods. The other differentiating parts are the ```selector``` attribute, which contains the controller's selector query, and the ```replicas``` attribute, which specifies the desired number of replicas, in this case 1.
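Abbreviated, those differentiating attributes are just a few lines of the definition (a sketch; the selector label is assumed from the surrounding example):

```yaml
spec:
  replicas: 1          # desired number of Cassandra pods
  selector:
    name: cassandra    # the controller's selector query
```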
Create this controller:

@@ -38,7 +38,7 @@ I0218 15:18:31.623279 67480 proxy.go:36] Starting to serve on localhost:8001
Now visit the [demo website](http://localhost:8001/static). You won't see anything much quite yet.
-### Step Two: Run the controller
+### Step Two: Run the replication controller
Now we will turn up two replicas of an image. They all serve on internal port 80.
```bash
@@ -47,7 +47,7 @@ $ ./kubectl create -f examples/update-demo/nautilus-rc.yaml
After pulling the image from the Docker Hub to your worker nodes (which may take a minute or so) you'll see a couple of squares in the UI detailing the pods that are running along with the image that they are serving up. A cute little nautilus.
-### Step Three: Try scaling the controller
+### Step Three: Try scaling the replication controller
Now we will increase the number of replicas from two to four:
@@ -76,7 +76,7 @@ Watch the [demo website](http://localhost:8001/static/index.html), it will updat
$ ./kubectl stop rc update-demo-kitten
```
-This will first 'stop' the replication controller by turning the target number of replicas to 0. It'll then delete that controller.
+This first stops the replication controller by turning the target number of replicas to 0 and then deletes the controller.
### Step Six: Cleanup

@@ -4,11 +4,11 @@ metadata:
name: nginx-controller
spec:
replicas: 2
-# selector identifies the set of Pods that this
-# replicaController is responsible for managing
+# selector identifies the set of pods that this
+# replication controller is responsible for managing
selector:
name: nginx
-# podTemplate defines the 'cookie cutter' used for creating
+# template defines the 'cookie cutter' used for creating
# new pods when necessary
template:
metadata:

@@ -38,10 +38,10 @@ const (
pollInterval = "3s"
rollingUpdate_long = `Perform a rolling update of the given ReplicationController.
-Replaces the specified controller with new controller, updating one pod at a time to use the
+Replaces the specified replication controller with a new replication controller by updating one pod at a time to use the
new PodTemplate. The new-controller.json must specify the same namespace as the
-existing controller and overwrite at least one (common) label in its replicaSelector.`
-rollingUpdate_example = `// Update pods of frontend-v1 using new controller data in frontend-v2.json.
+existing replication controller and overwrite at least one (common) label in its replicaSelector.`
+rollingUpdate_example = `// Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json
// Update pods of frontend-v1 using JSON data passed into stdin.
@@ -70,10 +70,10 @@ func NewCmdRollingUpdate(f *cmdutil.Factory, out io.Writer) *cobra.Command {
},
}
cmd.Flags().String("update-period", updatePeriod, `Time to wait between updating pods. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
cmd.Flags().String("poll-interval", pollInterval, `Time delay between polling controller status after update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
cmd.Flags().String("timeout", timeout, `Max time to wait for a controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
cmd.Flags().StringP("filename", "f", "", "Filename or URL to file to use to create the new controller.")
cmd.Flags().String("image", "", "Image to upgrade the controller to. Can not be used with --filename/-f")
cmd.Flags().String("poll-interval", pollInterval, `Time delay between polling for replication controller status after the update. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
cmd.Flags().String("timeout", timeout, `Max time to wait for a replication controller to update before giving up. Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".`)
cmd.Flags().StringP("filename", "f", "", "Filename or URL to file to use to create the new replication controller.")
cmd.Flags().String("image", "", "Image to use for upgrading the replication controller. Can not be used with --filename/-f")
cmd.Flags().String("deployment-label-key", "deployment", "The key to use to differentiate between two different controllers, default 'deployment'. Only relevant when --image is specified, ignored otherwise")
cmd.Flags().Bool("dry-run", false, "If true, print out the changes that would be made, but don't actually make them.")
cmd.Flags().Bool("rollback", false, "If true, this is a request to abort an existing rollout that is partially rolled out. It effectively reverses current and next and runs a rollout")