From 377f7e983621b1a0343e70ab46dc39485e7f2e5e Mon Sep 17 00:00:00 2001
From: David Oppenheimer
Date: Sun, 19 Apr 2015 14:07:22 -0700
Subject: [PATCH] Address review comments from @bgrant0607.

---
 docs/cluster_management.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/cluster_management.md b/docs/cluster_management.md
index 3b14fb6385..265f00f362 100644
--- a/docs/cluster_management.md
+++ b/docs/cluster_management.md
@@ -47,10 +47,11 @@ If you want more control over the upgrading process, you may use the following w
 1. Get the pods off the machine, via any of the following strategies:
    1. wait for finite-duration pods to complete
    1. delete pods with `kubectl delete pods $PODNAME`
-   l. for pods with a replication controller, the pod will eventually be rescheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
-   l. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
+   1. for pods with a replication controller, the pod will eventually be replaced by a new pod which will be scheduled to a new node. additionally, if the pod is part of a service, then clients will automatically be redirected to the new pod.
+   1. for pods with no replication controller, you need to bring up a new copy of the pod, and assuming it is not part of a service, redirect clients to it.
 1. Work on the node
 1. Make the node schedulable again: `kubectl update nodes $NODENAME
    --patch='{"apiVersion": "v1beta1", "unschedulable": false}'`. If you deleted the
    node's VM instance and created a new one, then a new schedulable node resource will
-   be created automatically when you create a new VM instance. See [Node](node.md).
+   be created automatically when you create a new VM instance (if you're using a cloud provider that supports
+   node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
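For reference, the maintenance workflow this patch documents can be sketched as a shell script. `node-1` and `my-pod` are placeholder names, `kubectl` is stubbed with `echo` so the sketch runs without a real cluster, and the initial cordon step (setting `unschedulable: true`) is assumed from the surrounding document rather than shown in this hunk:

```shell
# Sketch of the cordon/drain/uncordon workflow from docs/cluster_management.md.
# NODENAME and PODNAME are placeholders; kubectl is stubbed with echo so this
# runs standalone, without a cluster.
kubectl() { echo "kubectl $*"; }

NODENAME=node-1
PODNAME=my-pod

# Make the node unschedulable so no new pods land on it (assumed earlier step).
kubectl update nodes "$NODENAME" --patch='{"apiVersion": "v1beta1", "unschedulable": true}'

# Get the pods off the machine; a replication controller will replace the pod
# with a new one scheduled onto another node.
kubectl delete pods "$PODNAME"

# ... work on the node ...

# Make the node schedulable again.
kubectl update nodes "$NODENAME" --patch='{"apiVersion": "v1beta1", "unschedulable": false}'
```

With a real `kubectl` (v1beta1-era, as in the doc), dropping the stub function runs the same sequence against the cluster.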