Automatic merge from submit-queue
Prune reviewers from pkg/cloudprovider
**What this PR does / why we need it**
Per discussion in https://github.com/kubernetes/kubernetes/pull/36530 the `OWNERS` file for `pkg/cloudprovider` should not contain additional reviewers at this time.
**Special notes for your reviewer**:
Sorry for the extra work in review
**Release note**:
`NONE`
Automatic merge from submit-queue
Curating Owners: pkg/cloudprovider
cc @runseb @justinsb @kerneltime @mikedanese @svanharmelen @anguslees @brendandburns @abrarshivani @imkin @luomiao @colemickens @ngtuna @dagnello @abithap
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who has contributed code to
the project and in what directories. Unfortunately, that doesn't work
well on its own: people who have made mechanical code changes (e.g., changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request is editable; please edit the `OWNERS` file and remove,
from the **reviewers** section, the names of people who shouldn't be
reviewing code in the future. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use an ALIAS if you want to reuse the same list of people over and
over again (don't hesitate to ask me for help, or use the pull request
above as an example); see the sketch after this list.
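For reference, here is a minimal sketch of what a pruned `OWNERS` file and a shared alias might look like. The names and the alias are made up for illustration; use the PR linked above as the real example.

```yaml
# OWNERS (hypothetical example)
reviewers:
  - alice   # active reviewer in this directory
  - bob
approvers:
  - carol   # experienced approver; usually left unchanged
```

```yaml
# OWNERS_ALIASES (hypothetical example, for reusing the same list of people)
aliases:
  cloudprovider-reviewers:
    - alice
    - bob
```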
Automatic merge from submit-queue (batch tested with PRs 39625, 39842)
AWS: Remove duplicate calls to DescribeInstance during volume operations
This change removes all duplicate calls to describeInstance
from the AWS volume code path.
**What this PR does / why we need it**:
This PR removes the duplicate calls present in the disk check code paths in AWS. I can confirm that `getAWSInstance` already returns all instance information, so there is no need to make a separate `describeInstance` call.
Related to - https://github.com/kubernetes/kubernetes/issues/39526
cc @justinsb @jsafrane
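To illustrate the idea (this is a sketch, not the actual Kubernetes code), a disk check can read the block device mappings straight from the `*ec2.Instance` that was already fetched, instead of issuing a second `DescribeInstances` call; the `findDiskAttachment` helper below is hypothetical:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// findDiskAttachment scans the block device mappings that DescribeInstances
// already returned, so no extra API round trip is needed for disk checks.
func findDiskAttachment(instance *ec2.Instance, volumeID string) (device string, attached bool) {
	for _, mapping := range instance.BlockDeviceMappings {
		if mapping.Ebs != nil && aws.StringValue(mapping.Ebs.VolumeId) == volumeID {
			return aws.StringValue(mapping.DeviceName), true
		}
	}
	return "", false
}

func main() {
	// Stand-in for the instance object the volume code path already holds.
	instance := &ec2.Instance{
		BlockDeviceMappings: []*ec2.InstanceBlockDeviceMapping{{
			DeviceName: aws.String("/dev/xvdba"),
			Ebs:        &ec2.EbsInstanceBlockDevice{VolumeId: aws.String("vol-0123")},
		}},
	}
	fmt.Println(findDiskAttachment(instance, "vol-0123"))
}
```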
Automatic merge from submit-queue
AWS: Add exponential backoff to waitForAttachmentStatus() and createTags()
We should use exponential backoff while waiting for a volume to get attached/detached to/from a node. This will lower AWS load and reduce API call throttling.
This partly fixes #33088
@justinsb, can you please take a look?
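Roughly, the pattern looks like the sketch below, using the Kubernetes `wait` helpers (the import path shown is the current apimachinery one, and `checkAttachmentStatus` is a hypothetical stand-in for a single DescribeVolumes poll):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkAttachmentStatus is a stand-in for one status poll against AWS.
func checkAttachmentStatus() (done bool, err error) {
	// ... call DescribeVolumes and inspect the attachment state here ...
	return true, nil
}

func main() {
	backoff := wait.Backoff{
		Duration: 10 * time.Second, // delay between the first polls
		Factor:   1.2,              // grow the delay 20% after each attempt
		Steps:    21,               // give up after this many attempts
	}
	err := wait.ExponentialBackoff(backoff, checkAttachmentStatus)
	fmt.Println("wait finished:", err)
}
```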
Automatic merge from submit-queue
Changed default scsi controller type in vSphere Cloud Provider
This PR changes the default SCSI controller type to `pvscsi` in the vSphere Cloud Provider. Fixes #37527
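If memory serves, the controller type can also be pinned explicitly in the vSphere cloud config; the section and key below are recalled rather than verified, so treat them as an assumption and check the provider source:

```ini
# vsphere.conf (illustrative; key names assumed, verify against the provider)
[Disk]
    scsicontrollertype = pvscsi
```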
Automatic merge from submit-queue (batch tested with PRs 38818, 38813, 38820)
AWS: Add sequential allocator for device names.
On AWS, we should avoid reusing device names for as long as possible; see https://aws.amazon.com/premiumsupport/knowledge-center/ebs-stuck-attaching/
> "If you specify a device name that is not in use by EC2, but is being used by the block device driver within the EC2 instance, the attachment of the EBS volume does not succeed and the EBS volume is stuck in the attaching state."
This patch adds a device name allocator that tries to find a name that's next to the last used device name instead of using the first available one. This way we will loop through all device names ("xvdba" .. "xvdzz") before a device name is reused.
Fixes: #31891
@wongma7, @gnufied, @childsb PTAL
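A minimal self-contained sketch of that allocation strategy (not the actual Kubernetes implementation): remember the last suffix handed out and start searching just past it, so every name in "xvdba" .. "xvdzz" is cycled through before any is reused.

```go
package main

import "fmt"

type deviceAllocator struct {
	possible []string // candidate suffixes "ba" .. "zz"
	lastIdx  int      // index of the last suffix handed out
}

func newDeviceAllocator() *deviceAllocator {
	var names []string
	for c1 := 'b'; c1 <= 'z'; c1++ {
		for c2 := 'a'; c2 <= 'z'; c2++ {
			names = append(names, string(c1)+string(c2))
		}
	}
	return &deviceAllocator{possible: names, lastIdx: len(names) - 1}
}

// GetNext returns the first free suffix after the last allocated one,
// wrapping around, so recently used names are picked last.
func (d *deviceAllocator) GetNext(inUse map[string]bool) (string, error) {
	for i := 1; i <= len(d.possible); i++ {
		idx := (d.lastIdx + i) % len(d.possible)
		if name := d.possible[idx]; !inUse[name] {
			d.lastIdx = idx
			return name, nil
		}
	}
	return "", fmt.Errorf("no device names available")
}

func main() {
	alloc := newDeviceAllocator()
	used := map[string]bool{"ba": true} // suffixes already attached on the node
	suffix, _ := alloc.GetNext(used)
	fmt.Println("next device: /dev/xvd" + suffix) // /dev/xvdbb
}
```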
Automatic merge from submit-queue (batch tested with PRs 38638, 38334)
Remove Azure Subnet RouteTable check
**What this PR does / why we need it**:
This PR removes the subnet configuration check from the Azure cloud provider. The check ensures that the subnet is associated with the route table; however, if the VNET is in a different Azure resource group, the check fails even when the subnet is valid. This is a stop-gap fix to allow Kubernetes to be deployed to custom VNETs in Azure that may reside in a different resource group from the cluster.
fixes #38134
@colemickens
Automatic merge from submit-queue
Bad conditional in vSphereLogin function
```release-note
Fixes NotAuthenticated errors that appear in the kubelet and kube-controller-manager due to never logging in to vSphere
```
With this conditional being == instead of !=, a login would never actually be attempted by this provider, and disk attachments would fail with a NotAuthenticated error from vSphere.
This method has been unused by k8s for some time, and yet is the last
piece of the cloud provider API that encourages provider names to be
human-friendly strings (this method applies a regex to instance names).
Actually removing this deprecated method is part of a long effort to
migrate from instance names to instance IDs in at least the OpenStack
provider plugin.
Automatic merge from submit-queue
openstack: Implement the `Routes` provider API
```release-note
Implement the Routes provider API for OpenStack using the Neutron extraroute extension. This removes the need for flannel etc. where supported. To use, ensure all your nodes are on the same Neutron (private) network and specify the router ID in the new `[Route]` section of the provider config:
[Route]
router-id = <router UUID>
```
Automatic merge from submit-queue (batch tested with PRs 36543, 38189, 38289, 38291, 36724)
context.Context should be the first parameter of a function in vsphere
**What this PR does / why we need it**:
Change the position of the context.Context parameter.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
golint
**Release note**:
```release-note
```
Signed-off-by: yupeng <yu.peng36@zte.com.cn>
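For illustration only (a made-up function, not the actual vSphere code), the golint convention the change applies is simply that `context.Context` comes first in the signature:

```go
package main

import (
	"context"
	"fmt"
)

// attachDisk takes ctx as its first parameter, per the golint convention;
// previously the ctx argument sat elsewhere in the parameter list.
func attachDisk(ctx context.Context, vmName, diskPath string) error {
	select {
	case <-ctx.Done():
		return ctx.Err() // respect cancellation before doing any work
	default:
		fmt.Printf("attaching %s to %s\n", diskPath, vmName)
		return nil
	}
}

func main() {
	_ = attachDisk(context.Background(), "node-1", "[datastore1] kube/pv.vmdk")
}
```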
Check for error conditions from the vSphere API and return the error if one occurs. The vSphere API does not return an error for unauthenticated users; it just returns a nil user object.
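A simplified sketch of that check, using a stand-in session interface rather than the real govmomi types: a genuine API error is returned to the caller, and a nil session (with a nil error) is what triggers a fresh login.

```go
package main

import (
	"context"
	"fmt"
)

// sessionAPI is a stand-in for the vSphere session manager.
type sessionAPI interface {
	UserSession(ctx context.Context) (*userSession, error)
	Login(ctx context.Context, user, password string) error
}

type userSession struct{}

// loginIfNeeded surfaces API errors, reuses a live session, and only logs in
// when vSphere reports "not authenticated" as a nil session with no error.
func loginIfNeeded(ctx context.Context, api sessionAPI, user, password string) error {
	session, err := api.UserSession(ctx)
	if err != nil {
		return err // genuine API failure: do not hide it
	}
	if session != nil {
		return nil // already authenticated
	}
	return api.Login(ctx, user, password)
}

// fakeAPI simulates the nil-session, nil-error behaviour described above.
type fakeAPI struct{ loggedIn bool }

func (f *fakeAPI) UserSession(ctx context.Context) (*userSession, error) {
	if f.loggedIn {
		return &userSession{}, nil
	}
	return nil, nil
}

func (f *fakeAPI) Login(ctx context.Context, user, password string) error {
	f.loggedIn = true
	return nil
}

func main() {
	api := &fakeAPI{}
	fmt.Println(loginIfNeeded(context.Background(), api, "admin", "secret"), api.loggedIn)
}
```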
Automatic merge from submit-queue (batch tested with PRs 38076, 38137, 36882, 37634, 37558)
Allow backendpools in Azure Load Balancers which are not owned by cloud provider
**What this PR does / why we need it**: It fixes #36880
**Which issue this PR fixes**: fixes #36880
**Special notes for your reviewer**:
**Release note**:
```release-note
Allow backendpools in Azure Load Balancers which are not owned by cloud provider
```
Instead of bailing out when we find another backend pool, we now ignore
other backend pools and add ours to the list of existing pools.
Fixes #36880
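A small sketch of that behaviour with simplified types (the real code works on Azure SDK backend pool objects): pools owned by others are left untouched, and the cluster's own pool is appended only when missing.

```go
package main

import "fmt"

// backendPool is a simplified stand-in for an Azure load balancer backend pool.
type backendPool struct{ Name string }

// ensureOwnPool keeps pools managed by others and adds ours if absent.
func ensureOwnPool(existing []backendPool, ownName string) []backendPool {
	for _, p := range existing {
		if p.Name == ownName {
			return existing // already present, nothing to change
		}
	}
	return append(existing, backendPool{Name: ownName})
}

func main() {
	pools := []backendPool{{Name: "some-other-service"}}
	pools = ensureOwnPool(pools, "kubernetes-backend-pool")
	fmt.Println(pools) // both pools are kept
}
```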
This change implements the Routes API using Neutron's "extraroute"
extension.
To use it, all nodes must be on the same Neutron network, and the UUID of
the Neutron router on that network must be provided.
Required cloud provider config section:
[Route]
router-id = <UUID of Neutron router>
Ensure kube-controller-manager is started with the (non-default)
`--allocate-node-cidrs=true` flag and set `--cluster-cidr` to the pod
super-subnet (a private /16 would be reasonable).
Based on an earlier version by @timbyr (#19473)
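For orientation, the cloud provider `Routes` API this implements has roughly the shape sketched below (paraphrased from the pre-context-aware interface of that era; check `pkg/cloudprovider` for the exact signatures and `Route` fields):

```go
// Package routes sketches the shape of the Routes API; it is not the real
// cloudprovider package.
package routes

// Route is a simplified view of a managed route: traffic for DestinationCIDR
// is sent to the named target node.
type Route struct {
	Name            string
	TargetNode      string
	DestinationCIDR string
}

// Routes is approximately the interface the OpenStack provider now satisfies
// by translating these calls into Neutron extraroute updates on the
// configured router.
type Routes interface {
	ListRoutes(clusterName string) ([]*Route, error)
	CreateRoute(clusterName string, nameHint string, route *Route) error
	DeleteRoute(clusterName string, route *Route) error
}
```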