Automatic merge from submit-queue (batch tested with PRs 53474, 54258, 54356). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[cluster/centos] fix https
**What this PR does / why we need it**:
[cluster/centos] fix https
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53474, 54258, 54356). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Directly using std{in,out} for test helper subproc
**What this PR does / why we need it**
This fixes one TODO in the code of a test helper and is an extremely minor improvement
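For illustration, a minimal Go sketch of the idea; the helper below is hypothetical, not the actual test helper:

```go
package main

import (
	"os"
	"os/exec"
)

// runHelper wires the subprocess directly to the parent's std streams
// instead of copying through pipes. Hypothetical helper, for illustration.
func runHelper(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdin = os.Stdin   // child reads directly from our stdin
	cmd.Stdout = os.Stdout // child writes directly to our stdout
	cmd.Stderr = os.Stderr // and to our stderr
	return cmd.Run()
}

func main() {
	if err := runHelper("echo", "hello"); err != nil {
		os.Exit(1)
	}
}
```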
**Which issue this PR fixes**
Fixes issue #54258
**Special notes for your reviewer**
I'm using this to familiarize myself with the Kubernetes contribution process while also being helpful.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
client-go/examples: Update CRUD Deployment sample
**What this PR does / why we need it**:
PR motivated by [#128](https://github.com/kubernetes/client-go/issues/128), namely updating the CRUD example with the following:
- Add a new step which demonstrates rolling back deployments
- Clean up the retry loop for `Update()` steps
- Make `-kubeconfig` flag optional when running example (same as out-of-cluster example)
- Update `README.md` to reflect changes
**Special notes for your reviewer**:
My first Kubernetes contribution; feedback very welcome!
**Release note**:
```release-note
NONE
```
/cc @ahmetb @caesarxuchao
Automatic merge from submit-queue (batch tested with PRs 54438, 54508). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix hyperkube kubelet --experimental-dockershim
**What this PR does / why we need it**:
This makes hyperkube kubelet command support `--experimental-dockershim` flag.
**Which issue this PR fixes**: fixes #54424
**Release note**:
```release-note
Fix hyperkube kubelet --experimental-dockershim
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixing usage of clustered datastore name to be absolute datastore name
**What this PR does / why we need it**:
This PR adds a step to convert a datastore name given in folder/cluster form into an absolute datastore name.
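For illustration, a hedged sketch of resolving a folder/cluster-style datastore name to an absolute path with govmomi's finder; the helper below is illustrative, not the actual vSphere cloud provider code:

```go
package example

import (
	"context"

	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/object"
	"github.com/vmware/govmomi/vim25"
)

// absoluteDatastorePath resolves a datastore given in "cluster/name" form
// (e.g. "dscl1/sharedVmfs-0") to its absolute inventory path.
func absoluteDatastorePath(ctx context.Context, c *vim25.Client, dc *object.Datacenter, dsName string) (string, error) {
	f := find.NewFinder(c, true)
	f.SetDatacenter(dc)
	ds, err := f.Datastore(ctx, dsName) // accepts paths relative to the datastore folder
	if err != nil {
		return "", err
	}
	return ds.InventoryPath, nil
}
```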
**Which issue this PR fixes**: fixes vmware#312
**Special notes for your reviewer**:
Testing: all the clustered datastore tests pass now.
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export CLUSTER_DATASTORE=dscl1/sharedVmfs-0
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export VSPHERE_SPBM_POLICY_DS_CLUSTER=gold_cluster
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build641208048/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 17:29:39 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 17:29:39 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 17:29:39 e2e.go:57: The separator is required to use --get or --old flags
2017/10/18 17:29:39 e2e.go:58: The -- flag separator also suppresses this message
2017/10/18 17:29:39 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore...
2017/10/18 17:29:39 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 17:29:39 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 286.302987ms
2017/10/18 17:29:39 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty", GitCommit:"2bfcf4a7d1d6d4f257fe7ee96dcb47484b8e3a5f", GitTreeState:"dirty", BuildDate:"2017-10-18T23:37:58Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17663+ba93892625a3e8", GitCommit:"ba93892625a3e8a246f3bc660d56a2635e7e7f9f", GitTreeState:"clean", BuildDate:"2017-10-18T20:57:49Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 17:29:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 296.318449ms
2017/10/18 17:29:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore
Conformance test: not doing test setup.
Oct 18 17:29:41.634: INFO: Overriding default scale value of zero to 1
Oct 18 17:29:41.634: INFO: Overriding default milliseconds value of zero to 5000
I1018 17:29:41.829967 32645 e2e.go:383] Starting e2e run "98fbc41f-b464-11e7-8e40-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508372981 - Will randomize all specs
Will run 3 of 714 specs
Oct 18 17:29:41.889: INFO: >>> kubeConfig: /tmp/kube171.json
Oct 18 17:29:41.894: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 17:29:41.928: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 17:29:42.057: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 17:29:42.057: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 17:29:42.063: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 17:29:42.063: INFO: Dumping network health container logs from all nodes...
Oct 18 17:29:42.076: INFO: Client version: v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty
Oct 18 17:29:42.079: INFO: Server version: v1.6.0-alpha.0.17663+ba93892625a3e8
SSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:29:42.079: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
STEP: creating a test vsphere volume
W1018 17:29:43.087373 32645 datacenter.go:156] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] kubevols/kube-dummyDisk.vmdk was not found
STEP: Creating pod
STEP: Waiting for pod to be ready
STEP: Verifying volume is attached
STEP: Deleting pod
Oct 18 17:30:07.268: INFO: Deleting pod "vsphere-e2e-vtzbg" in namespace "e2e-tests-volume-provision-bchh4"
Oct 18 17:30:07.301: INFO: Wait up to 5m0s for pod "vsphere-e2e-vtzbg" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:30:49.429: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/e2e-vmdk-e2e-tests-volume-provision-bchh4.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting the vsphere volume
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:30:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-bchh4" for this suite.
Oct 18 17:30:58.203: INFO: namespace: e2e-tests-volume-provision-bchh4, resource: bindings, ignored listing per whitelist
Oct 18 17:30:58.225: INFO: namespace e2e-tests-volume-provision-bchh4 deletion completed in 8.444766463s
• [SLOW TEST:76.146 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:30:58.231: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:30:58.402: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-6dwrm to have phase Bound
Oct 18 17:30:58.410: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:00.416: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:02.430: INFO: PersistentVolumeClaim pvc-6dwrm found and phase=Bound (4.027818895s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:31:26.666: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-hgcj6 --namespace=e2e-tests-volume-provision-sfbcf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:31:27.171: INFO: stderr: ""
Oct 18 17:31:27.171: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:31:27.171: INFO: Deleting pod "pvc-tester-hgcj6" in namespace "e2e-tests-volume-provision-sfbcf"
Oct 18 17:31:27.202: INFO: Wait up to 5m0s for pod "pvc-tester-hgcj6" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:32:11.324: INFO: Volume "[sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-c6d8e174-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node2".
Oct 18 17:32:11.324: INFO: Deleting PersistentVolumeClaim "pvc-6dwrm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:32:11.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-sfbcf" for this suite.
Oct 18 17:32:19.867: INFO: namespace: e2e-tests-volume-provision-sfbcf, resource: bindings, ignored listing per whitelist
Oct 18 17:32:19.876: INFO: namespace e2e-tests-volume-provision-sfbcf deletion completed in 8.460287539s
• [SLOW TEST:81.645 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:32:19.878: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:32:20.024: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zrcmm to have phase Bound
Oct 18 17:32:20.034: INFO: PersistentVolumeClaim pvc-zrcmm found but phase is Pending instead of Bound.
Oct 18 17:32:22.040: INFO: PersistentVolumeClaim pvc-zrcmm found and phase=Bound (2.016093024s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:32:30.266: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-2hns4 --namespace=e2e-tests-volume-provision-llvbq -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:32:30.774: INFO: stderr: ""
Oct 18 17:32:30.774: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:32:30.774: INFO: Deleting pod "pvc-tester-2hns4" in namespace "e2e-tests-volume-provision-llvbq"
Oct 18 17:32:30.806: INFO: Wait up to 5m0s for pod "pvc-tester-2hns4" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:33:20.972: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-f77fba36-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node4".
Oct 18 17:33:20.972: INFO: Deleting PersistentVolumeClaim "pvc-zrcmm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:33:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-llvbq" for this suite.
Oct 18 17:33:29.490: INFO: namespace: e2e-tests-volume-provision-llvbq, resource: bindings, ignored listing per whitelist
Oct 18 17:33:29.554: INFO: namespace e2e-tests-volume-provision-llvbq deletion completed in 8.444585266s
• [SLOW TEST:69.676 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 17:33:29.559: INFO: Running AfterSuite actions on all node
Oct 18 17:33:29.559: INFO: Running AfterSuite actions on node 1
Ran 3 of 714 Specs in 227.670 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 711 Skipped PASS
Ginkgo ran 1 suite in 3m48.395491704s
Test Suite Passed
```
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 54366, 54500). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Service controller / gce loadbalancer logging cleanup
**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/52495, while looking into the potential scalability issue the service controller may have, I found it really hard to debug: the log sometimes doesn't reference which LB it is for, and sometimes it doesn't log at all. It gets even harder because in the correctness CIs we reduce the verbosity level to v1:
67bc30718f/jobs/env/ci-kubernetes-e2e-gce-scale-correctness.env (L16).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE
**Special notes for your reviewer**:
/assign @nicksardo @bowei
cc @shyamjvs
**Release note**:
```release-note
NONE
```
This fixes some scope issues that were introduced by shadowing vars inside anonymous functions, as well as using a naked return.
Fixed by using unique err names and explicitly returning errors.
Additional improvement is using the HomeDir() util function provided by client-go instead of including a helper function at the bottom of this example.
Signed-off-by: John Kelly <jekohk@gmail.com>
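For illustration, a hedged Go sketch of the shadowing pitfall and the fix described in the commit above; the function names are made up, not the example's actual code:

```go
package main

// doUpdate stands in for an Update() call; purely illustrative.
func doUpdate() (string, error) { return "", nil }

// badUpdate shows the pitfall: ":=" inside the closure declares a new err,
// so the named return stays nil and the naked return hides the failure.
func badUpdate() (err error) {
	update := func() {
		_, err := doUpdate() // shadows the outer named return "err"
		_ = err
	}
	update()
	return // always returns nil
}

// goodUpdate gives the inner error a unique name and returns it explicitly.
func goodUpdate() error {
	var updateErr error
	update := func() {
		_, updateErr = doUpdate()
	}
	update()
	return updateErr
}

func main() {
	_ = badUpdate()
	_ = goodUpdate()
}
```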
This updates the create-update-delete-deployment example with the following:
Make use of client-go retry util in Update() steps instead of simple for loops.
Using RetryOnConflict is generally better practice as it won't become stuck in a retry loop and uses exponential backoff to prevent exhausting the apiserver.
Instead of changing annotations to demonstrate Updates/Rollbacks, change the container image as it is less confusing for readers and a better real-world example.
Improve comments and README to reflect above changes.
Signed-off-by: John Kelly <jekohk@gmail.com>
This updates the create-update-delete-deployment example with the following:
Add rollback step to demonstrate rolling back deployments with client-go.
Modify the for-loops used in both Update steps to Get() the latest version
of the Deployment from the server before attempting Update().
This is necessary because the object returned by Create() does
not have the new resourceVersion, causing the initial Update() to always fail
due to conflicting resource versions. Putting the Get() at the top of the
loop seems to fix this bug.
Make -kubeconfig flag optional if config is in default location, using the
same method found in the out-of-cluster example.
Patch is motivated by effort to improve client-go examples.
Signed-off-by: John Kelly <jekohk@gmail.com>
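A minimal sketch of the resulting Get-then-RetryOnConflict-Update pattern described above, based on the client-go API of that era; the deployment name and image below are placeholders, not the example's actual values:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	appsv1beta1 "k8s.io/client-go/kubernetes/typed/apps/v1beta1"
	"k8s.io/client-go/util/retry"
)

// updateDeploymentImage re-reads the Deployment before every Update attempt
// so the request carries the latest resourceVersion, and retries on conflict
// with exponential backoff.
func updateDeploymentImage(deployments appsv1beta1.DeploymentInterface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Get the latest version from the server at the top of each attempt.
		result, getErr := deployments.Get("demo-deployment", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		result.Spec.Template.Spec.Containers[0].Image = "nginx:1.13" // the change to roll out
		_, updateErr := deployments.Update(result)
		return updateErr
	})
}
```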
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add e2e tests for NEG
This PR includes tests:
- ingress conformance test
- scaling up and down backends
- switching backend between IG and NEG
- rolling update backend should not cause service disruption
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
exit with correct exit code on plugin failure
**Release note**:
```release-note
NONE
```
Exits with the correct exit code after a plugin command execution failure.
Related downstream bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1501756
**Before**
```
$ kubectl plugin will-fail-plugin
ls: cannot access 'does-not-exist': No such file or directory
error: exit status 2
$ echo $?
1
```
**After**
```
$ kubectl plugin will-fail-plugin
ls: cannot access 'does-not-exist': No such file or directory
error: exit status 2
$ echo $?
2
```
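For illustration, a hedged Go sketch of propagating a plugin subprocess's exit status; this is not the actual kubectl implementation, and it is Unix-specific via syscall.WaitStatus:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

// runPlugin runs a plugin and, on failure, exits with the plugin's own exit
// code instead of a flat 1. Illustrative only.
func runPlugin(path string, args ...string) {
	cmd := exec.Command(path, args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	err := cmd.Run()
	if err == nil {
		return
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
			os.Exit(status.ExitStatus()) // propagate e.g. 2 instead of 1
		}
	}
	os.Exit(1)
}

func main() {
	runPlugin("will-fail-plugin")
}
```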
cc @fabianofranz
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix detach metric flake by not using exact equals
Also poll for detach value increase.
Fixes https://github.com/kubernetes/kubernetes/issues/52871
I have run these tests for more than 3 hours in a tight loop and did not see it flake. The changes here include dropping the exact equality test and making sure we poll for an increase in the detach metric count.
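A hedged sketch of the polling approach, assuming a hypothetical getter for the detach metric count:

```go
package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForDetachIncrease succeeds once the detach count has increased past
// its previous value, instead of asserting an exact count once.
// getDetachCount is a hypothetical helper, not the test's actual code.
func waitForDetachIncrease(oldCount int, getDetachCount func() (int, error)) error {
	return wait.Poll(10*time.Second, 2*time.Minute, func() (bool, error) {
		current, err := getDetachCount()
		if err != nil {
			return false, err
		}
		return current > oldCount, nil
	})
}
```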
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix retry logic in service controller
**What this PR does / why we need it**: Make the service controller not retry when a service update fails with a doNotRetry error.
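For illustration, a hedged sketch of the intended requeue behavior using client-go's workqueue; `isDoNotRetry` is a hypothetical classifier, not the controller's actual code:

```go
package example

import "k8s.io/client-go/util/workqueue"

// requeueAfterSync re-queues the service key only when the sync error is
// retryable; a doNotRetry failure is dropped instead of being retried.
func requeueAfterSync(queue workqueue.RateLimitingInterface, key string, syncErr error, isDoNotRetry func(error) bool) {
	if syncErr == nil || isDoNotRetry(syncErr) {
		queue.Forget(key) // do not retry
		return
	}
	queue.AddRateLimited(key) // retry with backoff
}
```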
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #54183
**Special notes for your reviewer**:
/assign @nicksardo @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Set "--kubelet-preferred-address-types" if ssh tunnel is not to be used.
In addition, don't advertise the external address, and allow the internal address to be advertised.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
While moving device_plugin_handler_test.go from pkg/kubelet/cm/ to
pkg/kubelet/cm/deviceplugin/, we can no longer use cm in its tests
because that would cause a dependency cycle. To solve this problem,
I moved the main cm GetResources functionality, as well as part of the
current device plugin handler Allocate functionality, into a new device
plugin handler function, GetDeviceRunContainerOptions(). This
refactoring is also needed by another PR, 51895, which moves device
allocation into the admission phase. Now the device plugin handler Allocate()
first checks whether there is cached device runtime state and only
issues the Allocate gRPC call if there is no cached state available.
The new GetDeviceRunContainerOptions() function simply returns the device
runtime config from the cached state. To support this change, I extended the
podDevices struct and the checkpoint data structure with device runtime state.
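For illustration, a hedged sketch of the caching behavior described above, using simplified stand-in types rather than the actual kubelet deviceplugin types:

```go
package example

// runtimeConfig stands in for the cached device runtime state
// (what GetDeviceRunContainerOptions would hand back as run-container options).
type runtimeConfig struct {
	Envs map[string]string
}

// allocateFn stands in for the device plugin's Allocate gRPC call.
type allocateFn func(devices []string) (runtimeConfig, error)

// cachedAllocate issues the Allocate call only when there is no cached
// runtime state for this pod/container/resource key; otherwise it reuses
// the cached, checkpointable config.
func cachedAllocate(cache map[string]runtimeConfig, key string, devices []string, allocate allocateFn) (runtimeConfig, error) {
	if cfg, ok := cache[key]; ok {
		return cfg, nil // reuse cached device runtime state
	}
	cfg, err := allocate(devices) // gRPC Allocate call in the real handler
	if err != nil {
		return runtimeConfig{}, err
	}
	cache[key] = cfg // record new state for later checkpointing
	return cfg, nil
}
```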
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[cluster/log-dump] bump daemonset version
**What this PR does / why we need it**:
bump daemonset version
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54270, 54479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete maxNumPVs and maxNumPVCs const of persistent_volume.go
**What this PR does / why we need it**:
The two consts maxNumPVs and maxNumPVCs are unused; delete them.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54270, 54479). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix kubectl set command
**What this PR does / why we need it**:
1. Fix some errors in the 'kubectl set' series of commands.
2. Remove an unused function in the kubectl set env command.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
As part of the effort to move in-tree cloud providers out of the kubernetes
main repository, we have identified that kube-apiserver should stop
using the --cloud-provider and --cloud-config parameters. One of the main
users of the parameters above is the SSH tunneling functionality, which
is used only in GCE scenarios. We need to deprecate these flags
now and remove them in a year, per discussion on the mailing list.
With this change, `ssh-user` and `ssh-keyfile` are now considered deprecated
and we can remove them in the future. This means that the SSH tunnel functionality
used in Google Container Engine for the Master -> Cluster communication
will no longer be available in the future.
Automatic merge from submit-queue (batch tested with PRs 53820, 53971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
reopen#50354 about install_test.go of error message amending
**What this PR does / why we need it**:
Because of an unknown repository issue, I could not update it in the previous branch, so I am reopening PR #50354.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
cc @irfanurrehman
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53820, 53971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support for RBAC support to Kubernetes via Juju
**What this PR does / why we need it**: This PR adds RBAC to the Juju deployment of Kubernetes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
Canonical Distribution of Kubernetes offers configurable RBAC
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
RBD Plugin: rbdStatus only check output of successful `rbd status` run
**What this PR does / why we need it**:
The current RBDUtil.rbdStatus implementation never returns an error in any case, even if the command is not found or `rbd status` never succeeds. This PR changes it to only check the output of a successful `rbd status` run, and to return an error in other cases.
There may be network problems or an unresponsive Ceph cluster that cause the `rbd status` command to fail. In those cases we cannot assume there are no watchers on the given image; it's better to return an error and let the caller decide what to do.
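For illustration, a hedged sketch of that behavior; this is not the actual rbd volume plugin code, and the `watcher=` output marker is an assumption about the command's output format:

```go
package example

import (
	"fmt"
	"os/exec"
	"strings"
)

// rbdImageHasWatchers only inspects output when `rbd status` itself
// succeeded; otherwise it propagates the error so the caller can decide.
func rbdImageHasWatchers(pool, image string) (bool, error) {
	out, err := exec.Command("rbd", "status", image, "--pool", pool).CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("rbd status failed: %v, output: %s", err, string(out))
	}
	return strings.Contains(string(out), "watcher="), nil
}
```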
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor RBD volume
Refactor RBD Volume Persistent Volume Spec so RBD PV's SecretRef
allows referencing a secret from a persistent volume in any namespace.
This allows locating credentials for persistent volumes in namespaces
other than the one containing the PVC.
Closes #54432
```release-note
RBD Persistent Volume Sources can now reference User's Secret in namespaces other than the namespace of the bound Persistent Volume Claim
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Delete redundant err check
**What this PR does / why we need it**:
This process is redundant.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Speed up volume tests by reducing pod grace period
busybox-based pods don't react nicely to docker stop. By reducing the pod grace period we can save ~29 seconds per volume test.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Volunteer to be reviewer of NodeController.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # N/A
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 53579, 47347). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
actually check for a live discovery endpoint before aggregating
Aggregation doesn't work without being able to hit a discovery endpoint. This adds a simple status check for whether the discovery endpoint is reachable. The actual status code doesn't matter, we just need to be able to make the connection.
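For illustration, a hedged sketch of the reachability idea (not the actual aggregator code):

```go
package example

import "net/http"

// discoveryReachable treats any HTTP response as "available"; only a failed
// connection marks the discovery endpoint unavailable.
func discoveryReachable(client *http.Client, discoveryURL string) bool {
	resp, err := client.Get(discoveryURL)
	if err != nil {
		return false // could not reach the endpoint
	}
	resp.Body.Close()
	return true // the status code itself does not matter
}
```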
Automatic merge from submit-queue (batch tested with PRs 52479, 53956). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Register sync proxy rules latency metrics in app level
**What this PR does / why we need it**:
IMO, we should register proxy metrics at the app level instead of in a specific proxy mode, e.g. iptables, ipvs, winkernel...
By registering the sync proxy rules latency metrics at the app level, we can reuse code among different proxiers.
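For illustration, a hedged sketch of registering the latency histogram once at the app level; the metric name and buckets are illustrative, not the exact kube-proxy definitions:

```go
package example

import (
	"sync"

	"github.com/prometheus/client_golang/prometheus"
)

// One histogram shared by all proxier implementations (iptables, ipvs,
// winkernel, ...), registered once from the kube-proxy app setup path.
var syncProxyRulesLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
	Subsystem: "kubeproxy",
	Name:      "sync_proxy_rules_latency_microseconds",
	Help:      "SyncProxyRules latency in microseconds",
	Buckets:   prometheus.ExponentialBuckets(1000, 2, 15),
})

var registerOnce sync.Once

// RegisterProxyMetrics registers the shared metrics exactly once.
func RegisterProxyMetrics() {
	registerOnce.Do(func() {
		prometheus.MustRegister(syncProxyRulesLatency)
	})
}
```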
**Which issue this PR fixes**:
closes #53957
**Special notes for your reviewer**:
@wojtek-t What do you think about it?
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52479, 53956). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update scheduler to use schedulerName selector
**What this PR does / why we need it**:
Update the scheduler to use a schedulerName selector when selecting pods in the podInformer.
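For illustration, a hedged sketch of filtering informer events by schedulerName; this is not the actual scheduler factory code, and `enqueue` is a hypothetical callback:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	coreinformers "k8s.io/client-go/informers/core/v1"
	"k8s.io/client-go/tools/cache"
)

// addPodHandler only forwards pods whose spec.schedulerName matches this
// scheduler's name.
func addPodHandler(podInformer coreinformers.PodInformer, schedulerName string, enqueue func(*v1.Pod)) {
	podInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
		FilterFunc: func(obj interface{}) bool {
			pod, ok := obj.(*v1.Pod)
			return ok && pod.Spec.SchedulerName == schedulerName
		},
		Handler: cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) { enqueue(obj.(*v1.Pod)) },
		},
	})
}
```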
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52254
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```