diff --git a/docs/cluster_management.md b/docs/cluster_management.md
index b9076877fc..3b14fb6385 100644
--- a/docs/cluster_management.md
+++ b/docs/cluster_management.md
@@ -46,11 +46,11 @@ If you want more control over the upgrading process, you may use the following workflow:
    This keeps new pods from landing on the node while you are trying to get them off.
 1. Get the pods off the machine, via any of the following strategies:
    1. wait for finite-duration pods to complete
-   1. for pods with a replication controller, delete the pod with `kubectl delete pods $PODNAME`
-   1. for pods which are not replicated, bring up a new copy of the pod, and redirect clients to it.
+   1. delete pods with `kubectl delete pods $PODNAME`
+      1. for pods with a replication controller, the pod will eventually be rescheduled to a new node. Additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+      1. for pods with no replication controller, you need to bring up a new copy of the pod and, assuming it is not part of a service, redirect clients to it.
 1. Work on the node
 1. Make the node schedulable again: `kubectl update nodes $NODENAME --patch='{"apiVersion": "v1beta1", "unschedulable": false}'`.
-   Or, if you deleted the VM instance and created a new one, and are using `--sync_nodes=true` on the apiserver
-   (the default), then a new schedulable node resource will be created automatically when you create a new
-   VM instance. See [Node](node.md).
+   If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
+   be created automatically. See [Node](node.md).
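
For reference, a minimal sketch of the full workflow the patched text describes, assuming the v1beta1 API used in the doc and hypothetical names `node-1` and `my-pod`; the `"unschedulable": true` patch mirrors the `--patch` form shown above and is an assumption about the earlier step not visible in this hunk:

```sh
# Mark the node unschedulable so no new pods land on it
# (same --patch form as in the doc, with "unschedulable" set to true).
kubectl update nodes node-1 --patch='{"apiVersion": "v1beta1", "unschedulable": true}'

# Delete a pod managed by a replication controller; the controller
# brings up a replacement, which is scheduled onto another node.
kubectl delete pods my-pod

# ... perform maintenance on the node ...

# Make the node schedulable again.
kubectl update nodes node-1 --patch='{"apiVersion": "v1beta1", "unschedulable": false}'
```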