diff --git a/cluster/libvirt-coreos/user_data_master.yml b/cluster/libvirt-coreos/user_data_master.yml
index a629c2165b..b23ea561c1 100644
--- a/cluster/libvirt-coreos/user_data_master.yml
+++ b/cluster/libvirt-coreos/user_data_master.yml
@@ -76,7 +76,7 @@ coreos:
         ExecStartPre=/bin/bash -c 'while [[ \"\$(curl -s http://127.0.0.1:8080/healthz)\" != \"ok\" ]]; do sleep 1; done'
         ExecStartPre=/bin/sleep 10
         ExecStart=/opt/kubernetes/bin/kubectl create -f /opt/kubernetes/addons
-        ExecStop=/opt/kubernetes/bin/kubectl stop -f /opt/kubernetes/addons
+        ExecStop=/opt/kubernetes/bin/kubectl delete -f /opt/kubernetes/addons
         RemainAfterExit=yes
 
         [Install]
diff --git a/docs/devel/flaky-tests.md b/docs/devel/flaky-tests.md
index 3a7af51e4f..2470a8154a 100644
--- a/docs/devel/flaky-tests.md
+++ b/docs/devel/flaky-tests.md
@@ -87,10 +87,10 @@ done
 grep "Exited ([^0])" output.txt
 ```
 
-Eventually you will have sufficient runs for your purposes. At that point you can stop and delete the replication controller by running:
+Eventually you will have sufficient runs for your purposes. At that point you can delete the replication controller by running:
 
 ```sh
-kubectl stop replicationcontroller flakecontroller
+kubectl delete replicationcontroller flakecontroller
 ```
 
 If you do a final check for flakes with `docker ps -a`, ignore tasks that exited -1, since that's what happens when you stop the replication controller.
diff --git a/docs/getting-started-guides/coreos/bare_metal_offline.md b/docs/getting-started-guides/coreos/bare_metal_offline.md
index c32a517214..900753688a 100644
--- a/docs/getting-started-guides/coreos/bare_metal_offline.md
+++ b/docs/getting-started-guides/coreos/bare_metal_offline.md
@@ -699,7 +699,7 @@ List Kubernetes
 
 Kill all pods:
 
-    for i in `kubectl get pods | awk '{print $1}'`; do kubectl stop pod $i; done
+    for i in `kubectl get pods | awk '{print $1}'`; do kubectl delete pod $i; done
 
 
 
diff --git a/docs/getting-started-guides/logging.md b/docs/getting-started-guides/logging.md
index e738fd6c32..203d7e04ad 100644
--- a/docs/getting-started-guides/logging.md
+++ b/docs/getting-started-guides/logging.md
@@ -123,10 +123,10 @@ root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1
 root 480 0.0 0.0 15572 2212 ? R 00:05 0:00 ps aux
 ```
 
-What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container’s execution and only see the log lines for the new container? Let’s find out. First let’s stop the currently running counter.
+What happens if for any reason the image in this pod is killed off and then restarted by Kubernetes? Will we still see the log lines from the previous invocation of the container followed by the log lines for the started container? Or will we lose the log lines from the original container’s execution and only see the log lines for the new container? Let’s find out. First let’s delete the currently running counter.
 
 ```console
-$ kubectl stop pod counter
+$ kubectl delete pod counter
 pods/counter
 ```
 
diff --git a/docs/user-guide/simple-nginx.md b/docs/user-guide/simple-nginx.md
index 680c12e424..c3931e9900 100644
--- a/docs/user-guide/simple-nginx.md
+++ b/docs/user-guide/simple-nginx.md
@@ -59,10 +59,10 @@ You can also see the replication controller that was created:
 kubectl get rc
 ```
 
-To stop the two replicated containers, stop the replication controller:
+To stop the two replicated containers, delete the replication controller:
 
 ```bash
-kubectl stop rc my-nginx
+kubectl delete rc my-nginx
 ```
 
 ### Exposing your pods to the internet.
diff --git a/examples/guestbook/README.md b/examples/guestbook/README.md
index 2cfb1c34d3..cbbc5b9b5b 100644
--- a/examples/guestbook/README.md
+++ b/examples/guestbook/README.md
@@ -622,10 +622,10 @@ For Google Compute Engine details about limiting traffic to specific sources, se
 
 ### Step Seven: Cleanup
 
-If you are in a live kubernetes cluster, you can just kill the pods by stopping the replication controllers and deleting the services. Using labels to select the resources to stop or delete is an easy way to do this in one command.
+If you are in a live kubernetes cluster, you can just kill the pods by deleting the replication controllers and the services. Using labels to select the resources to stop or delete is an easy way to do this in one command.
 
 ```console
-kubectl stop rc -l "name in (redis-master, redis-slave, frontend)"
+kubectl delete rc -l "name in (redis-master, redis-slave, frontend)"
 kubectl delete service -l "name in (redis-master, redis-slave, frontend)"
 ```
 
diff --git a/examples/phabricator/teardown.sh b/examples/phabricator/teardown.sh
index 898c09101f..40c21357db 100755
--- a/examples/phabricator/teardown.sh
+++ b/examples/phabricator/teardown.sh
@@ -15,6 +15,6 @@
 # limitations under the License.
 
 echo "Deleting Phabricator service" && kubectl delete -f phabricator-service.json
-echo "Deleting Phabricator replication controller" && kubectl stop rc phabricator-controller
+echo "Deleting Phabricator replication controller" && kubectl delete rc phabricator-controller
 echo "Delete firewall rule" && gcloud compute firewall-rules delete -q phabricator-node-80
diff --git a/examples/vitess/etcd-down.sh b/examples/vitess/etcd-down.sh
index b301a469c7..4286533315 100755
--- a/examples/vitess/etcd-down.sh
+++ b/examples/vitess/etcd-down.sh
@@ -27,8 +27,8 @@ cells=`echo $CELLS | tr ',' ' '`
 
 # Delete replication controllers
 for cell in 'global' $cells; do
-  echo "Stopping etcd replicationcontroller for $cell cell..."
-  $KUBECTL stop replicationcontroller etcd-$cell
+  echo "Deleting etcd replicationcontroller for $cell cell..."
+  $KUBECTL delete replicationcontroller etcd-$cell
 
   echo "Deleting etcd service for $cell cell..."
   $KUBECTL delete service etcd-$cell
diff --git a/examples/vitess/guestbook-down.sh b/examples/vitess/guestbook-down.sh
index 57a6978c8a..99d4d656f7 100755
--- a/examples/vitess/guestbook-down.sh
+++ b/examples/vitess/guestbook-down.sh
@@ -21,8 +21,8 @@ set -e
 script_root=`dirname "${BASH_SOURCE}"`
 source $script_root/env.sh
 
-echo "Stopping guestbook replicationcontroller..."
-$KUBECTL stop replicationcontroller guestbook
+echo "Deleting guestbook replicationcontroller..."
+$KUBECTL delete replicationcontroller guestbook
 
 echo "Deleting guestbook service..."
 $KUBECTL delete service guestbook
diff --git a/examples/vitess/vtctld-down.sh b/examples/vitess/vtctld-down.sh
index 0ff1a66c38..72e05b8be7 100755
--- a/examples/vitess/vtctld-down.sh
+++ b/examples/vitess/vtctld-down.sh
@@ -21,8 +21,8 @@ set -e
 script_root=`dirname "${BASH_SOURCE}"`
 source $script_root/env.sh
 
-echo "Stopping vtctld replicationcontroller..."
-$KUBECTL stop replicationcontroller vtctld
+echo "Deleting vtctld replicationcontroller..."
+$KUBECTL delete replicationcontroller vtctld
 
 echo "Deleting vtctld service..."
 $KUBECTL delete service vtctld
diff --git a/examples/vitess/vtgate-down.sh b/examples/vitess/vtgate-down.sh
index 0ac15d6ca6..404b258f35 100755
--- a/examples/vitess/vtgate-down.sh
+++ b/examples/vitess/vtgate-down.sh
@@ -21,8 +21,8 @@ set -e
 script_root=`dirname "${BASH_SOURCE}"`
 source $script_root/env.sh
 
-echo "Stopping vtgate replicationcontroller..."
-$KUBECTL stop replicationcontroller vtgate
+echo "Deleting vtgate replicationcontroller..."
+$KUBECTL delete replicationcontroller vtgate
 
 echo "Deleting vtgate service..."
 $KUBECTL delete service vtgate
diff --git a/hack/test-cmd.sh b/hack/test-cmd.sh
index e1d0f3fcbc..3777095773 100755
--- a/hack/test-cmd.sh
+++ b/hack/test-cmd.sh
@@ -385,7 +385,7 @@ runTests() {
   # Pre-condition: valid-pod and redis-proxy PODs are running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" 'redis-proxy:valid-pod:'
   # Command
-  kubectl stop pods valid-pod redis-proxy "${kube_flags[@]}" --grace-period=0 # stop multiple pods at once
+  kubectl delete pods valid-pod redis-proxy "${kube_flags[@]}" --grace-period=0 # delete multiple pods at once
   # Post-condition: no POD is running
   kube::test::get_object_assert pods "{{range.items}}{{$id_field}}:{{end}}" ''
 
@@ -720,7 +720,7 @@ __EOF__
   kube::test::get_object_assert rc "{{range.items}}{{$id_field}}:{{end}}" ''
   # Command
   kubectl create -f examples/guestbook/frontend-controller.yaml "${kube_flags[@]}"
-  kubectl stop rc frontend "${kube_flags[@]}"
+  kubectl delete rc frontend "${kube_flags[@]}"
   # Post-condition: no pods from frontend controller
   kube::test::get_object_assert 'pods -l "name=frontend"' "{{range.items}}{{$id_field}}:{{end}}" ''
 
@@ -841,7 +841,7 @@ __EOF__
   # Pre-condition: frontend replication controller is running
   kube::test::get_object_assert rc "{{range.items}}{{$id_field}}:{{end}}" 'frontend:'
   # Command
-  kubectl stop rc frontend "${kube_flags[@]}"
+  kubectl delete rc frontend "${kube_flags[@]}"
   # Post-condition: no replication controller is running
   kube::test::get_object_assert rc "{{range.items}}{{$id_field}}:{{end}}" ''
 
@@ -858,7 +858,7 @@ __EOF__
   # Pre-condition: frontend and redis-slave
   kube::test::get_object_assert rc "{{range.items}}{{$id_field}}:{{end}}" 'frontend:redis-slave:'
   # Command
-  kubectl stop rc frontend redis-slave "${kube_flags[@]}" # delete multiple controllers at once
+  kubectl delete rc frontend redis-slave "${kube_flags[@]}" # delete multiple controllers at once
   # Post-condition: no replication controller is running
   kube::test::get_object_assert rc "{{range.items}}{{$id_field}}:{{end}}" ''
diff --git a/test/scalability/counter/Makefile b/test/scalability/counter/Makefile
index 91fab0eb30..c056db8c00 100644
--- a/test/scalability/counter/Makefile
+++ b/test/scalability/counter/Makefile
@@ -24,4 +24,4 @@ counter5000:
 	kubectl scale rc counter --replicas=5000
 
 stop:
-	kubectl stop rc counter
+	kubectl delete rc counter
diff --git a/test/soak/cauldron/Makefile b/test/soak/cauldron/Makefile
index 8819e4711a..626fb29879 100644
--- a/test/soak/cauldron/Makefile
+++ b/test/soak/cauldron/Makefile
@@ -15,7 +15,7 @@ rc:
 	kubectl create --validate -f cauldron-rc.yaml
 
 stop:
-	kubectl stop rc cauldron
+	kubectl delete rc cauldron
 
 get:
 	kubectl get rc,pods -l app=cauldron