Address review comments from @bgrant0607.

David Oppenheimer 2015-04-19 14:07:22 -07:00
parent bae12d6369
commit 377f7e9836
1 changed file with 4 additions and 3 deletions


@@ -47,10 +47,11 @@ If you want more control over the upgrading process, you may use the following workflow:
 1. Get the pods off the machine, via any of the following strategies:
    1. wait for finite-duration pods to complete
    1. delete pods with `kubectl delete pods $PODNAME`
-   l. for pods with a replication controller, the pod will eventually be rescheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
-   l. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
+   1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+   1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
 1. Work on the node
 1. Make the node schedulable again:
 `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'`.
 If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
-be created automatically when you create a new VM instance. See [Node](node.md).
+be created automatically when you create a new VM instance (if you're using a cloud provider that supports
+node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
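
For reference, the workflow these lines describe can be driven end to end with the two kubectl commands the document already uses. A minimal shell sketch, against the v1beta1 API current for this commit; it assumes the workflow begins by marking the node unschedulable with the same `--patch` command shown above (with `true` instead of `false`), and `$NODENAME` and `$PODNAME` are placeholders for real resource names:

```
# Mark the node unschedulable so no new pods are placed on it
# (assumed inverse of the final step in the workflow above).
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": true}'

# Get the pods off the machine. A pod managed by a replication
# controller is eventually replaced by a new pod scheduled to
# another node; clients of a service are redirected automatically.
kubectl delete pods $PODNAME

# ... work on the node (upgrade, reboot, etc.) ...

# Make the node schedulable again.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'
```

As the diff notes, a pod with no replication controller will not come back on its own: a new copy has to be started manually, and if it is not behind a service, clients must be redirected to it.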