Automatic merge from submit-queue
Support persistent volume usage for kubernetes running on Photon Controller platform
**What this PR does / why we need it:**
Enable persistent volume usage for Kubernetes running on the Photon platform.
Photon Controller: https://vmware.github.io/photon-controller/
_Only the first commit includes the real code change.
The following commits are for third-party vendor dependencies and auto-generated code/docs updates._
Two components are added:
- `pkg/cloudprovider/providers/photon`: supports Photon Controller as a cloud provider
- `pkg/volume/photon_pd`: supports Photon persistent disk as a volume source for persistent volumes
Usage introduction:
a. Photon Controller is supported as a cloud provider.
When choosing Photon Controller as the cloud provider, `--cloud-provider=photon --cloud-config=[path_to_config_file]` is required for kubelet/kube-controller-manager/kube-apiserver. The Photon Controller config file should use the following format:
```
[Global]
target = http://[photon_controller_endpoint_IP]
ignoreCertificate = true
tenant = [tenant_name]
project = [project_name]
overrideIP = true
```
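For illustration, a minimal sketch of how those flags might be passed on each component's command line (the config path below is a placeholder):

```shell
# Placeholder path; substitute your actual Photon Controller config file.
kubelet --cloud-provider=photon --cloud-config=/etc/kubernetes/photon.conf ...
kube-controller-manager --cloud-provider=photon --cloud-config=/etc/kubernetes/photon.conf ...
kube-apiserver --cloud-provider=photon --cloud-config=/etc/kubernetes/photon.conf ...
```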
b. Photon persistent disk is supported as a volume source / persistent volume source.
YAML usage:
```yaml
volumes:
  - name: photon-storage-1
    photonPersistentDisk:
      pdID: "643ed4e2-3fcc-482b-96d0-12ff6cab2a69"
```
`pdID` is the ID of a persistent disk created in Photon Controller.
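As a fuller illustration, here is a hedged sketch of a pod that mounts such a volume; the pod name, image, and mount path are made up for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-photon-pd         # example name
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: photon-storage-1
          mountPath: /data     # example mount path
  volumes:
    - name: photon-storage-1
      photonPersistentDisk:
        pdID: "643ed4e2-3fcc-482b-96d0-12ff6cab2a69"
```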
c. Photon Controller can also be used as a volume provisioner.
YAML usage:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gold-sc
provisioner: kubernetes.io/photon-pd
parameters:
  flavor: persistent-disk-gold
```
The flavor "persistent-disk-gold" needs to be created by Photon platform admin before hand.
Automatic merge from submit-queue
cloudprovider/cloudstack: Fix a bug where we assume IP addresses instead of hostnames
Because of how our test environment was set up, we didn't notice that we assumed the load balancer hosts list would always contain IP addresses, while the entries actually are hostnames.
So without this PR, the load balancer code will not work as expected as it will not be able to find the nodes that need to be load balanced.
Also updated some comments and added a check to prevent trying to release a public IP if we don’t have one.
Automatic merge from submit-queue
azure: loadbalancer rules use DSR
**What this PR does / why we need it**:
Enables "direct server return" on the load balancer in Azure, which causes the DIP to be preserved when traffic goes through the load balancer. This enables service traffic to go to the Service Port rather than having to go through the NodePort.
**Special notes for your reviewer**:
N/A.
**Tested with...**:
```shell
kubectl run nginx --image=nginx
kubectl run nginx2 --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl expose deployment nginx2 --port=80 --type=LoadBalancer
```
Ensuring that both services got external IPs and that the resources created looked correct.
**Release note**:
```release-note
azure: load balancer preserves destination ip address
```
CC: @brendandburns
Automatic merge from submit-queue
[RFC] Prepare for deprecating NodeLegacyHostIP
Ref https://github.com/kubernetes/kubernetes/issues/9267#issuecomment-257994766
*What this PR does*
- Add comments saying "LegacyHostIP" will be deprecated in 1.7;
- Add v1.NodeLegacyHostIP to be consistent with the internal API (useful for client-go migration #35159)
- Let cloud providers that used to set only LegacyHostIP set that IP as both InternalIP and ExternalIP (see the sketch after this list)
- Master used to ssh tunnel to a node's ExternalIP or LegacyHostIP to do the [healthz check](https://github.com/kubernetes/kubernetes/blame/master/pkg/master/master.go#L328-L332). OTOH, if on-prem, kubelet only [sets](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet_node_status.go#L430-L431) LegacyHostIP or InternalIP. In order to deprecate LegacyHostIP in 1.7, I let the healthz check use InternalIP if ExternalIP is not available. (The healthz check is the only consumer of LegacyHostIP in k8s.)
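A minimal sketch of that address change, assuming the 1.5-era `v1` import path; the helper function itself is hypothetical:

```go
package example

import "k8s.io/kubernetes/pkg/api/v1"

// Hypothetical helper: report the single known IP as both InternalIP and
// ExternalIP instead of only the soon-to-be-deprecated LegacyHostIP.
func nodeAddresses(ip string) []v1.NodeAddress {
	return []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: ip},
		{Type: v1.NodeExternalIP, Address: ip},
	}
}
```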
@liggitt @justinsb @bgrant0607
```release-note
LegacyHostIP will be deprecated in 1.7.
```
Automatic merge from submit-queue
Fix LBaaS version detection in openstack cloudprovider
`lbversion` is the local variable used for version detection when `os.lbOpts.LBVersion` is not specified.
xref https://bugzilla.redhat.com/show_bug.cgi?id=1391837
@ncdc @derekwaynecarr @anguslees
Mark NodeLegacyHostIP as deprecated in 1.7;
Let cloud providers that used to only set NodeLegacyHostIP set the IP as both InternalIP and ExternalIP, to allow deprecation in 1.7
Automatic merge from submit-queue
Don't rely on device name provided by Cinder
See issue #33128
We can't rely on the device name provided by Cinder, and thus must perform
detection based on the drive serial number (a.k.a. its Cinder ID) on the
kubelet itself.
This patch re-works the Cinder volume attacher to ignore the supplied
deviceName, and instead defer to the pre-existing GetDevicePath method to
discover the device path based on its serial number and the /dev/disk/by-id
mapping.
This new behavior is controlled by a config option, because always falling back
to the Cinder-supplied value when we can't discover a device risks the device
never showing up, falling back to Cinder's guess, and detecting the wrong disk
as attached.
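For illustration, on a Nova+KVM guest the attached Cinder volume typically appears under /dev/disk/by-id with a virtio serial derived from the volume ID, which is the mapping GetDevicePath relies on (the IDs and device below are made up):

```shell
$ ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 9 Nov 10 12:00 virtio-19ef26f7-5a1c-4ee0-9 -> ../../vdb
# virtio exposes only the first 20 characters of the Cinder volume ID as the serial
```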
Automatic merge from submit-queue
Correct filtering of OpenStack LBaaS resources to delete
Neutron's API ignores unknown parameters. When listing pools etc., K8s
attempts to filter on "LoadBalancerID", which is not a valid filter.
As such, it is ignored by Neutron, and a list of all pools is
returned. K8s then proceeds to delete each of the pools.
Instead, we now double check the resources really belong to the LB
we're trying to delete.
Fixes issue #33759
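A rough, self-contained sketch of the double-check described above; the types and the back-reference field are simplified stand-ins for the real Neutron objects:

```go
package example

// Pool is a simplified stand-in for a Neutron LBaaS pool.
type Pool struct {
	ID             string
	LoadBalancerID string // back-reference we verify explicitly
}

// deleteLBPools deletes only the pools that really belong to the given load
// balancer, since Neutron silently ignores the unsupported LoadBalancerID
// filter and may return every pool in the tenant.
func deleteLBPools(pools []Pool, lbID string, deletePool func(id string) error) error {
	for _, p := range pools {
		if p.LoadBalancerID != lbID {
			continue // not ours; the server-side filter was ignored
		}
		if err := deletePool(p.ID); err != nil {
			return err
		}
	}
	return nil
}
```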
We are more liberal in what we accept as a volume id in k8s, and indeed
we ourselves generate names that look like `aws://<zone>/<id>` for
dynamic volumes.
This volume id (hereafter a KubernetesVolumeID) cannot directly be
compared to an AWS volume ID (hereafter an awsVolumeID).
We introduce types for each, to prevent accidental comparison or
confusion.
Issue #35746
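A minimal sketch of the distinction, with the type names from the description; the conversion helper is illustrative rather than the exact upstream code:

```go
package example

import (
	"fmt"
	"strings"
)

// KubernetesVolumeID is the ID as it appears in Kubernetes objects,
// e.g. "aws://us-east-1a/vol-0123456789abcdef0" for dynamic volumes.
type KubernetesVolumeID string

// awsVolumeID is the bare EC2 volume ID, e.g. "vol-0123456789abcdef0".
type awsVolumeID string

// mapToAWSVolumeID converts explicitly; distinct types make an accidental
// string comparison between the two a compile error.
func mapToAWSVolumeID(name KubernetesVolumeID) (awsVolumeID, error) {
	s := string(name)
	if !strings.HasPrefix(s, "aws://") {
		return awsVolumeID(s), nil // already a bare ID
	}
	parts := strings.Split(strings.TrimPrefix(s, "aws://"), "/")
	if len(parts) != 2 || parts[1] == "" {
		return "", fmt.Errorf("invalid volume id %q", name)
	}
	return awsVolumeID(parts[1]), nil
}
```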
GetDevicePath was previously coded to support only Nova+KVM style device
paths; update it so we also support Nova+ESXi, and leave the code such that
new pattern additions are easy.
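A hedged sketch of that probing approach; the candidate patterns below are illustrative, not necessarily the exact ones used upstream:

```go
package example

import (
	"os"
	"strings"
)

// getDevicePath probes a few /dev/disk/by-id naming conventions, so adding a
// new hypervisor pattern (KVM/virtio, ESXi, ...) only means extending one list.
func getDevicePath(volumeID string) string {
	serial := volumeID
	if len(serial) > 20 {
		serial = serial[:20] // virtio exposes at most 20 characters of the serial
	}
	candidates := []string{
		"/dev/disk/by-id/virtio-" + serial,                                // Nova+KVM
		"/dev/disk/by-id/wwn-0x" + strings.Replace(volumeID, "-", "", -1), // Nova+ESXi (assumed pattern)
	}
	for _, path := range candidates {
		if _, err := os.Stat(path); err == nil {
			return path
		}
	}
	return ""
}
```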
Neutron's API ignores unknown parameters. When listing pools etc., K8s
attempts to filter on "LoadBalancerID", which is not a valid filter.
As such, it is ignored by Neutron, and a list of all pools is
returned. K8s then proceeds to update each of the pools.
Instead, we now double check the resources really belong to the LB
we're trying to update.
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (its volumes are
detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.
This PR adds the logic to periodically sync up the truth for attached volumes kept in the actual state cache. If the volume is no longer attached to the node, the actual state will be updated to reflect the truth. In turn, reconciler will take actions if needed.
To avoid issuing many concurrent operations against the cloud provider, this PR
adds a batch operation to check whether a list of volumes is attached to a
node, instead of issuing one request per volume.
More details are explained in PR #33760
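A hedged sketch of the batch check; the function type and names here are illustrative, not the exact attach/detach interfaces:

```go
package example

// VolumesAreAttached is an illustrative batch check: one call per node that
// reports attachment state for a whole list of volumes, instead of one cloud
// provider request per volume.
type VolumesAreAttached func(volumeIDs []string, nodeName string) (map[string]bool, error)

// syncNode refreshes the cached "actual state of world" for one node: any
// volume the cloud provider no longer reports as attached is marked detached,
// so the reconciler can issue a fresh attach when the node comes back.
func syncNode(check VolumesAreAttached, nodeName string, cachedVolumes []string,
	markDetached func(volumeID, nodeName string)) error {
	attached, err := check(cachedVolumes, nodeName)
	if err != nil {
		return err
	}
	for _, v := range cachedVolumes {
		if !attached[v] {
			markDetached(v, nodeName)
		}
	}
	return nil
}
```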
Automatic merge from submit-queue
vSphere cloud provider: re-use session for vCenter logins
This change allows for the re-use of a vCenter client session. Addresses #34491
In order to be able to use the new mounter library, this PR adds the
mounterPath flag to the kubelet, which passes the flag down to the mount
interface. If the flag is empty, mount uses the default mount path.
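A sketch of the intended usage, assuming the option surfaces on the kubelet command line as --experimental-mounter-path; treat both the flag name and the path below as placeholders:

```shell
# Assumption: the mounterPath option is exposed as --experimental-mounter-path.
kubelet --experimental-mounter-path=/home/kubernetes/containerized_mounter/mounter ...
```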
Automatic merge from submit-queue
Loadbalanced client src ip preservation enters beta
Sounds like we're going to try out the proposal (https://github.com/kubernetes/kubernetes/issues/30819#issuecomment-249877334) for annotations -> fields on just one feature in 1.5 (scheduler). Or do we want to just convert to fields right now?
'names' is an array of FQDNs. 'instances' is a map indexed by canonicalized
name. Clearly these two won't always match, so when building the final
instance array to return, make sure to look up map entries by their canonicalized
name.
In the below example, "ocp-master-5pob" is clearly found as a GCE instance
but when building the final instance array it cannot be matched as the code
is looking for "ocp-master-5pob.c.ose-refarch.internal" instead. The node
is then deleted from the cluster as it cannot be found by the cloud provider.
```
gce.go:2519] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): initial node prefix ocp-
gce.go:2530] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): looking for instances map[ocp-master-5pob:<nil>]
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-c' (remaining 1)
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-b' (remaining 1)
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2576] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): found instance 'ocp-master-5pob' remaining 0
gce.go:2563] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): instance name <omitted> not requested
gce.go:2533] ### getInstancesByNames([ocp-master-5pob.c.ose-refarch.internal]): getting zone 'europe-west1-d' (remaining 0)
gce.go:2588] Failed to retrieve instance: "ocp-master-5pob.c.ose-refarch.internal"
gce.go:2624] ### getInstanceByName(ocp-master-5pob.c.ose-refarch.internal): got []: instance not found
gce.go:2626] getInstanceByName/multiple-zones: failed to get instance ocp-master-5pob.c.ose-refarch.internal; err: instance not found
nodecontroller.go:587] Deleting node (no longer present in cloud provider): ocp-master-5pob.c.ose-refarch.internal
nodecontroller.go:664] Recording Deleting Node ocp-master-5pob.c.ose-refarch.internal because it's not present according to cloud provider event message for node ocp-master-5pob.c.ose-refarch.internal
```
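A minimal sketch of the canonicalized lookup described above; the map value type is simplified and the helper name is illustrative:

```go
package example

import "strings"

// canonicalizeInstanceName strips any domain suffix, so a node registered as
// "ocp-master-5pob.c.ose-refarch.internal" matches the GCE instance
// "ocp-master-5pob" in the instances map.
func canonicalizeInstanceName(name string) string {
	if ix := strings.Index(name, "."); ix != -1 {
		return name[:ix]
	}
	return name
}

// lookupInstance looks up map entries by canonicalized name rather than by
// the raw FQDN that was requested.
func lookupInstance(instances map[string]string, fqdn string) (string, bool) {
	inst, ok := instances[canonicalizeInstanceName(fqdn)]
	return inst, ok
}
```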
Automatic merge from submit-queue
azure: lower log priority for skipped nic update message
**What this PR does / why we need it**: Very minor, just wanted to remove some log noise I introduced in #34526.
I chose `V(3)` since it aligns with the other nicupdate message printed out here, and will be hidden for the usual default of `--v=2`.
**Release note**:
```release-note
NONE
```