Automatic merge from submit-queue (batch tested with PRs 55217, 54260). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Unit tests for Azure service session affinity
**What this PR does / why we need it**: We added session affinity support to the Azure load balancer in commit 8b50b83067. This PR adds unit tests for this behaviour.
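For context, session affinity is requested on the Service itself; a minimal example is below (the Azure provider honors `ClientIP` affinity by using source-IP based load distribution on the LB rule):
```
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
  - port: 80
```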
**Which issue this PR fixes**: None
**Special notes for your reviewer**: None
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55233, 55927, 55903, 54867, 55940). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
fix azure disk storage account init issue
**What this PR does / why we need it**:
There are two issues in the original azure disk storage account initialization code:
1) incorrect controller-master detection; see issues #54570, #55776
2) two storage accounts are initialized up front even when they are not needed; see issue #50883
This PR fixes both issues:
For 1: remove the controller-master process binding.
For 2: remove the up-front storage account initialization and create storage accounts on demand.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #54570, fixes #55776, fixes #50883
**Special notes for your reviewer**:
@rootfs @karataliu
**Release note**:
```release-note
fix azure disk storage account init issue
```
/sig azure
Automatic merge from submit-queue (batch tested with PRs 50457, 55558, 53483, 55731, 52842). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Ensure new tags are created on existing ELBs
**What this PR does / why we need it**:
When editing an existing service of type LoadBalancer in an AWS environment and adding the `service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags` annotation, you would expect the new tags to be set on the load balancer, however this doesn't happen currently. The annotation only takes effect if specified when the service is created.
This PR adds an AddTags method to the ELB interface and uses this to ensure tags set in the annotation are present on the ELB. If the tag key is already present, the value will be updated.
This PR does not remove tags that have been removed from the annotation; it only adds or updates tags.
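For example, a Service carrying the annotation looks like the following; the annotation value is a comma-separated list of key=value pairs, and the keys/values here are illustrative:
```
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "environment=prod,team=infra"
spec:
  type: LoadBalancer
  ports:
  - port: 80
```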
**Which issue(s) this PR fixes**:
Fixes #54642
**Special notes for your reviewer**:
The change requires that the IAM policy of the master instance(s) has the `elasticloadbalancing:AddTags` permission.
**Release note**:
```release-note
Ensure additional resource tags are set/updated on AWS load balancers
```
Automatic merge from submit-queue (batch tested with PRs 50457, 55558, 53483, 55731, 52842). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Apply taint when a volume is stuck in attaching state
When a volume is stuck in the attaching state for too long on a node, it is best to make the node unschedulable so that no other pods get scheduled on it.
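For illustration, the taint would appear on the Node object roughly as below; the key name `NodeWithImpairedVolumes` is an assumption for this sketch, not confirmed by the PR description:
```
spec:
  taints:
  - key: NodeWithImpairedVolumes   # assumed key name, for illustration only
    value: "true"
    effect: NoSchedule
```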
Fixes https://github.com/kubernetes/kubernetes/issues/55502
```release-note
AWS: Apply taint to a node if volumes being attached to it are stuck in attaching state
```
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
OpenStack: fetch volume path from metadata service
**What this PR does / why we need it**:
Updates the OpenStack cloud provider to use the Nova metadata service as a fallback when retrieving mounted PV disk paths. Note that the Nova instance device metadata will contain the disk address and bus, which allows finding its path.
This is needed as the *standard* mechanism of retrieving disk paths is not available when running k8s under OpenStack Hyper-V hosts.
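For reference, an illustrative excerpt of the device metadata the Nova metadata service exposes in meta_data.json (structure per the Nova device metadata format; values are made up). The `serial` field carries the Cinder volume ID, and `bus`/`address` allow deriving the device path:
```
{
  "devices": [
    {
      "type": "disk",
      "bus": "scsi",
      "address": "0:0:0:1",
      "serial": "6df1888b-f373-41cf-b960-522f53d39e28",
      "tags": []
    }
  ]
}
```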
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55312
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54134, 54507). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Added service annotation for AWS ELB SSL policy
**What this PR does / why we need it**:
This work adds a new supported service annotation for AWS clusters, `service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy`, which lets users specify which [predefined AWS SSL policy](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) they would like to use.
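For example (ELBSecurityPolicy-TLS-1-2-2017-01 is one of the predefined AWS policies from the table linked above; the Service shape is illustrative):
```
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
spec:
  type: LoadBalancer
  ports:
  - port: 443
```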
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #43744
**Special notes for your reviewer**:
While this PR doesn't allow users to define their own cipher policy in an annotation, a user could (out of band) create their own policy on an ELB with the naming convention `k8s-SSLNegotiationPolicy-<my-policy-name>` and specify it with the above annotation.
This is my second k8s PR, and I don't have experience with e2e tests; would one be required for this change? I ran this in a kubeadm cluster and it worked like a charm: I was able to choose different predefined policies, and revert to the default policy when I removed the annotation.
**Release note**:
```release-note
Added service annotation for AWS ELB SSL policy
```
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix dangling attach errors
Detach volumes from shutdown nodes and ensure that
dangling volumes are handled correctly in AWS
Fixes https://github.com/kubernetes/kubernetes/issues/52573
```release-note
Implement correction mechanism for dangling volumes attached for deleted pods
```
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Added service annotation to set Azure DNS label for public IP
**What this PR does / why we need it**: Added a feature to set the DNS label for public IPs in the Azure cloud.
For example:
```
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/label-name: myservice
...
```
This will resolve myservice.westus.cloudapp.azure.com to the service's IP.
**Which issue this PR fixes**: fixes #44775
**Special notes for your reviewer**: Note that this is defining a new annotation, so feel free to point out if there is a preferred convention or anything else that needs to be done.
**Release note**:
```release-note
New service annotation "service.beta.kubernetes.io/azure-dns-label-name" to set Azure DNS label name for public IP
```
The OpenStack cloud provider retrieves mounted Cinder volume paths
by looking in /dev/disk/by-id, expecting the disk serial IDs (e.g.
SCSI ID) to include the volume ID.
The issue is that not all hypervisors are able to expose this.
For example, Hyper-V will just preserve the original Cinder volume
LUN SCSI ID (without setting the volume ID). For this reason,
disk path lookups will fail.
In order to be able to leverage Hyper-V based OpenStack providers,
as a fallback, we're querying the metadata service, searching for
disk device metadata. Note that starting with Nova Queens, the Hyper-V
driver always provides disk address information through the instance
metadata.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Restrict Azure NSG rules to allow external access only to load balancer IP
**What this PR does / why we need it**: On Azure, we create NSG (Network Security Group) rules on the vnet to allow external clients to access services exposed as type LoadBalancer. At the moment, these rules have a destination of `Any`, which means that they will permit requests on the opened port to any IP within the vnet. This PR restricts the security rules so that they admit external access only to the load balancer IP.
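For illustration, the shape of a resulting inbound rule after this change (ARM security-rule field names; the rule name, priority, and IP address are made up for this sketch):
```
name: allow-myservice-TCP-80
properties:
  protocol: Tcp
  direction: Inbound
  access: Allow
  sourceAddressPrefix: Internet
  sourcePortRange: "*"
  destinationAddressPrefix: 40.112.0.10   # the LB frontend IP; previously "*" (Any)
  destinationPortRange: "80"
  priority: 500
```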
**Which issue this PR fixes**: None in upstream - reported as https://github.com/Azure/acs-engine/issues/1619
**Special notes for your reviewer**: None
**Release note**:
```release-note
Azure NSG rules for services exposed via external load balancer
now limit the destination IP address to the relevant front end load
balancer IP.
```
Automatic merge from submit-queue (batch tested with PRs 55093, 54966, 55047, 54971, 54786). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Upgrade Azure SDK to v11.1.1
**What this PR does / why we need it**: This fixes various Azure SDK bugs per the Azure SDK for Go changelogs:
* Fixed bug in which blob types were unmarshaled incorrectly
* Fixed various package names
* Miscellaneous unspecified storage bug fixes
This is also a prerequisite for a bug fix for running out of firewall rules when exposing large numbers of services from an Azure cluster.
**Which issue(s) this PR fixes**: None
**Special notes for your reviewer**:
1. I inadvertently committed a compatibility fix along with the dependency upgrade (which the guidelines say should have been two separate commits). The offending file is `pkg/cloudprovider/providers/azure.go`.
2. We require an urgent bug fix for the firewall rules limit so it would be great if we could get this agreed quickly. I have struggled with the dependency upgrade process a bit so if it looks wrong, please let me know as soon as you can! Thanks!
**Release note**:
```release-note
Upgraded Azure SDK to v11.1.1.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove Google Cloud KMS's in-tree integration
Removes the following introduced by #48574 and others:
* `kms.go` which contained the cloudkms-specific code for Google Cloud KMS service.
* Registering the Google Cloud KMS in the KMS plugin registry.
* Google's `cloudkms` API package from `vendor` folder.
The following changes are upcoming:
* Removal of KMSPluginRegistry. This is no longer needed, since KMS providers will be out-of-tree from now on (there is no need to register them; an address for the process is enough).
* A service which provides encrypt/decrypt functionality (satisfying the `envelope.Service` interface) when initialized with the IP/port of an out-of-tree process serving KMS requests. It will tentatively use gRPC to talk to this external service.
Reference: https://github.com/kubernetes/kubernetes/pull/54439#issuecomment-340062801 and https://github.com/kubernetes/kubernetes/issues/51965#issuecomment-339333937.
```release-note
Google KMS integration was removed from in-tree in favor of an out-of-process extension point that will be used for all KMS providers.
```
We should check that a volume is available before performing an
attach or delete of an EBS volume. This makes sure that
we do not blow through the API quota for mutable operations in AWS and stay a
good citizen.
Automatic merge from submit-queue (batch tested with PRs 51409, 54616). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Implement InstanceExistsByProviderID() for cloud providers
Fix #51406
If cloud providers (like AWS, GCE, etc.) implement ExternalID()
and support getting an instance by ProviderID, they should also implement
InstanceExistsByProviderID().
/assign wlan0
/assign @luxas
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52717, 54568, 54452, 53997, 54237). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
[OpenStack] Remove the LbaasV1 of OpenStack cloud provider
The Neutron LbaasV1 has been declared obsolete; LbaasV2 is a
better choice.
So let's remove the LbaasV1 code and support only LbaasV2.
xref: #52609
Reference OpenStack doc:
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html
**Special notes for your reviewer**:
/assign @dims
/assign @anguslees
**Release note**:
```release-note
Removed LbaasV1 from the OpenStack cloud provider; only LbaasV2 is now supported.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fixing usage of clustered datastore name to be absolute datastore name
**What this PR does / why we need it**:
This PR adds a step to convert a datastore name in folder/cluster form to an absolute datastore name.
**Which issue this PR fixes**: fixes vmware#312
**Special notes for your reviewer**:
Testing: all the clustered datastore tests pass now.
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export CLUSTER_DATASTORE=dscl1/sharedVmfs-0
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# export VSPHERE_SPBM_POLICY_DS_CLUSTER=gold_cluster
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build641208048/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/18 17:29:39 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/18 17:29:39 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/18 17:29:39 e2e.go:57: The separator is required to use --get or --old flags
2017/10/18 17:29:39 e2e.go:58: The -- flag separator also suppresses this message
2017/10/18 17:29:39 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore...
2017/10/18 17:29:39 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/18 17:29:39 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 286.302987ms
2017/10/18 17:29:39 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty", GitCommit:"2bfcf4a7d1d6d4f257fe7ee96dcb47484b8e3a5f", GitTreeState:"dirty", BuildDate:"2017-10-18T23:37:58Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17663+ba93892625a3e8", GitCommit:"ba93892625a3e8a246f3bc660d56a2635e7e7f9f", GitTreeState:"clean", BuildDate:"2017-10-18T20:57:49Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/18 17:29:40 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 296.318449ms
2017/10/18 17:29:40 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sProvisioning\sOn\sClustered\sDatastore
Conformance test: not doing test setup.
Oct 18 17:29:41.634: INFO: Overriding default scale value of zero to 1
Oct 18 17:29:41.634: INFO: Overriding default milliseconds value of zero to 5000
I1018 17:29:41.829967 32645 e2e.go:383] Starting e2e run "98fbc41f-b464-11e7-8e40-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508372981 - Will randomize all specs
Will run 3 of 714 specs
Oct 18 17:29:41.889: INFO: >>> kubeConfig: /tmp/kube171.json
Oct 18 17:29:41.894: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 18 17:29:41.928: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 18 17:29:42.057: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 18 17:29:42.057: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 18 17:29:42.063: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 18 17:29:42.063: INFO: Dumping network health container logs from all nodes...
Oct 18 17:29:42.076: INFO: Client version: v1.6.0-alpha.0.17675+2bfcf4a7d1d6d4-dirty
Oct 18 17:29:42.079: INFO: Server version: v1.6.0-alpha.0.17663+ba93892625a3e8
SSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:29:42.079: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
STEP: creating a test vsphere volume
W1018 17:29:43.087373 32645 datacenter.go:156] QueryVirtualDiskUuid failed for diskPath: "[sharedVmfs-0] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [sharedVmfs-0] kubevols/kube-dummyDisk.vmdk was not found
STEP: Creating pod
STEP: Waiting for pod to be ready
STEP: Verifying volume is attached
STEP: Deleting pod
Oct 18 17:30:07.268: INFO: Deleting pod "vsphere-e2e-vtzbg" in namespace "e2e-tests-volume-provision-bchh4"
Oct 18 17:30:07.301: INFO: Wait up to 5m0s for pod "vsphere-e2e-vtzbg" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:30:49.429: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/e2e-vmdk-e2e-tests-volume-provision-bchh4.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting the vsphere volume
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:30:49.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-bchh4" for this suite.
Oct 18 17:30:58.203: INFO: namespace: e2e-tests-volume-provision-bchh4, resource: bindings, ignored listing per whitelist
Oct 18 17:30:58.225: INFO: namespace e2e-tests-volume-provision-bchh4 deletion completed in 8.444766463s
• [SLOW TEST:76.146 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify static provisioning on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:70
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:30:58.231: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:30:58.402: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-6dwrm to have phase Bound
Oct 18 17:30:58.410: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:00.416: INFO: PersistentVolumeClaim pvc-6dwrm found but phase is Pending instead of Bound.
Oct 18 17:31:02.430: INFO: PersistentVolumeClaim pvc-6dwrm found and phase=Bound (4.027818895s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:31:26.666: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-hgcj6 --namespace=e2e-tests-volume-provision-sfbcf -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:31:27.171: INFO: stderr: ""
Oct 18 17:31:27.171: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:31:27.171: INFO: Deleting pod "pvc-tester-hgcj6" in namespace "e2e-tests-volume-provision-sfbcf"
Oct 18 17:31:27.202: INFO: Wait up to 5m0s for pod "pvc-tester-hgcj6" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:32:11.324: INFO: Volume "[sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-c6d8e174-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node2".
Oct 18 17:32:11.324: INFO: Deleting PersistentVolumeClaim "pvc-6dwrm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:32:11.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-sfbcf" for this suite.
Oct 18 17:32:19.867: INFO: namespace: e2e-tests-volume-provision-sfbcf, resource: bindings, ignored listing per whitelist
Oct 18 17:32:19.876: INFO: namespace e2e-tests-volume-provision-sfbcf deletion completed in 8.460287539s
• [SLOW TEST:81.645 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify dynamic provision with spbm policy on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:131
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 18 17:32:19.878: INFO: >>> kubeConfig: /tmp/kube171.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:50
[It] verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
STEP: Creating Storage Class With storage policy params
STEP: Creating PVC using the Storage Class
STEP: Waiting for claim to be in bound phase
Oct 18 17:32:20.024: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zrcmm to have phase Bound
Oct 18 17:32:20.034: INFO: PersistentVolumeClaim pvc-zrcmm found but phase is Pending instead of Bound.
Oct 18 17:32:22.040: INFO: PersistentVolumeClaim pvc-zrcmm found and phase=Bound (2.016093024s)
STEP: Creating pod to attach PV to the node
STEP: Verify the volume is accessible and available in the pod
Oct 18 17:32:30.266: INFO: Running '/root/shahzeb/k8s/kubernetes/_output/dockerized/bin/linux/amd64/kubectl --server=https://10.160.141.5 --kubeconfig=/tmp/kube171.json exec pvc-tester-2hns4 --namespace=e2e-tests-volume-provision-llvbq -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 18 17:32:30.774: INFO: stderr: ""
Oct 18 17:32:30.774: INFO: stdout: ""
STEP: Deleting pod
Oct 18 17:32:30.774: INFO: Deleting pod "pvc-tester-2hns4" in namespace "e2e-tests-volume-provision-llvbq"
Oct 18 17:32:30.806: INFO: Wait up to 5m0s for pod "pvc-tester-2hns4" to be fully deleted
STEP: Waiting for volumes to be detached from the node
Oct 18 17:33:20.972: INFO: Volume "[dscl1/sharedVmfs-0] kubevols/kubernetes-dynamic-pvc-f77fba36-b464-11e7-8b86-005056b7d4e9.vmdk" appears to have successfully detached from "kubernetes-node4".
Oct 18 17:33:20.972: INFO: Deleting PersistentVolumeClaim "pvc-zrcmm"
[AfterEach] [sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 18 17:33:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-volume-provision-llvbq" for this suite.
Oct 18 17:33:29.490: INFO: namespace: e2e-tests-volume-provision-llvbq, resource: bindings, ignored listing per whitelist
Oct 18 17:33:29.554: INFO: namespace e2e-tests-volume-provision-llvbq deletion completed in 8.444585266s
• [SLOW TEST:69.676 seconds]
[sig-storage] Volume Provisioning On Clustered Datastore [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify dynamic provision with default parameter on clustered datastore
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_cluster_ds.go:121
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 18 17:33:29.559: INFO: Running AfterSuite actions on all node
Oct 18 17:33:29.559: INFO: Running AfterSuite actions on node 1
Ran 3 of 714 Specs in 227.670 seconds
SUCCESS! -- 3 Passed | 0 Failed | 0 Pending | 711 Skipped PASS
Ginkgo ran 1 suite in 3m48.395491704s
Test Suite Passed
```
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 54366, 54500). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Service controller / gce loadbalancer logging cleanup
**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/52495: while looking into the potential scalability issue the service controller may have, I found it really hard to debug, as the log sometimes doesn't reference which LB it is for, and sometimes it just doesn't log at all. It gets even harder given that in correctness CIs we reduce the verbosity level to v1:
67bc30718f/jobs/env/ci-kubernetes-e2e-gce-scale-correctness.env (L16).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE
**Special notes for your reviewer**:
/assign @nicksardo @bowei
cc @shyamjvs
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Make OpenStack LBaaS v2 Provider configurable
Add option 'lb-provider' to the Loadbalancer section of the OpenStack
cloudprovider configuration to allow using a different LBaaS v2
provider than the default.
**What this PR does / why we need it**:
This PR allows using a different OpenStack LBaaS v2 provider than the default provider of the OpenStack cloud.
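A minimal sketch of the new option in the cloud provider config (INI format as read by the OpenStack provider; `octavia` is an illustrative provider name):
```
[LoadBalancer]
lb-provider = octavia
```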
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Added option lb-provider to OpenStack cloud provider config
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Update the Google API clients used by the GCE provider to identify Kubernetes as the origin of GCP API calls.
**What this PR does / why we need it**:
This modifies the Google API clients used for the GCE provider to identify GCP API calls as originating from Kubernetes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39391
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Let the caller log error message
Talked to @anguslees offline and in the resize PR: we should let the caller log the error message, and those functions should just return err.
Also, error strings should not be capitalized or end with punctuation.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @anguslees
Automatic merge from submit-queue (batch tested with PRs 53694, 53919). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
fix controller manager crash issue on a manually created k8s cluster
**What this PR does / why we need it**:
Fix a controller manager crash on a manually created k8s cluster; it's due to a nil availability set in the Azure load balancer code.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
In testing a manually created k8s cluster, I found that the controller manager on the master would crash in the following scenario:
1. Use acs-engine to set up a k8s 1.7.7 cluster (which has an availability set)
2. Manually add a node to the k8s cluster (without an availability set on this VM)
3. Set up a service and schedule the pod onto this newly added node
4. The controller manager would crash on the master because although this k8s cluster has an availability set, the newly added node's `machine.AvailabilitySet` is nil, which causes the crash
**Special notes for your reviewer**:
@brendanburns @karataliu @JiangtianLi
**Release note**:
```release-note
fix controller manager crash issue on a manually created k8s cluster
```
/sig azure