Automatic merge from submit-queue (batch tested with PRs 48292, 48121)
Add Google cloudkms dependency, add cloudkms service to GCE cloud provider
Required to introduce Google KMS-based envelope encryption, which will allow encrypting secrets at rest using a KEK-DEK scheme.
The above requires the KMS API to create/delete KeyRings and CryptoKeys, and to Encrypt/Decrypt data.
Should target release 1.8
@jcbsmpsn
Update: It appears that Godep only allows dependencies which are in use. We may have to modify this PR to include some Google KMS code.
Progresses #48522
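A minimal Go sketch of the KEK-DEK envelope flow described above, assuming a locally generated AES-256 DEK; the helper name and the Cloud KMS wiring noted in the comments are illustrative, not this PR's code:
```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// encryptWithDEK encrypts a secret locally with a freshly generated data
// encryption key (DEK) using AES-GCM. In the envelope scheme, the DEK would
// then be wrapped (encrypted) by the key encryption key (KEK) held in Google
// Cloud KMS, and only the wrapped DEK is stored next to the ciphertext.
func encryptWithDEK(plaintext []byte) (ciphertext, dek []byte, err error) {
	dek = make([]byte, 32) // 256-bit DEK
	if _, err = rand.Read(dek); err != nil {
		return nil, nil, err
	}
	block, err := aes.NewCipher(dek)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// Prepend the nonce so it can be recovered at decryption time.
	return append(nonce, gcm.Seal(nil, nonce, plaintext, nil)...), dek, nil
}

func main() {
	ct, dek, err := encryptWithDEK([]byte("top-secret"))
	if err != nil {
		panic(err)
	}
	// Here the DEK would be sent to the Cloud KMS Encrypt API (against a
	// KeyRing/CryptoKey managed via the cloudkms service) and then discarded.
	fmt.Printf("ciphertext: %d bytes, DEK: %d bytes\n", len(ct), len(dek))
}
```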
Automatic merge from submit-queue
Use %q formatter for error messages from the AWS SDK. #47789
Error messages from the AWS SDK can contain newline/carriage-return characters, so use the %q formatter for those messages.
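A small illustration of why `%q` helps here: it escapes embedded newline/carriage-return characters so a multi-line AWS SDK error stays on a single log line.
```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("RequestError: send request failed\r\ncaused by: timeout")
	fmt.Printf("with %%v: %v\n", err) // spills across two lines
	fmt.Printf("with %%q: %q\n", err) // single quoted line with escaped \r\n
}
```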
Automatic merge from submit-queue (batch tested with PRs 47918, 47964, 48151, 47881, 48299)
Add ApiEndpoint support to GCE config.
**What this PR does / why we need it**:
Add the ability to change ApiEndpoint for GCE.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 47286, 47729)
Add client certificate authentication to Azure cloud provider
This adds support for client certificate authentication in the Azure cloud provider. The certificate can be provided in PKCS #12 format with password protection. Note that this authentication will be active only when no client secret is configured.
cc @brendandburns @colemickens
Automatic merge from submit-queue (batch tested with PRs 48139, 48042, 47645, 48054, 48003)
Pipe clusterID into gce_loadbalancer_external.go
**What this PR does / why we need it**: Small cleanup for GCE ELB codes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48002
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
fix comment mistake
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 47776, 46220, 46878, 47942, 47947)
update openstack metadata-service url
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Ignore ErrNotFound when delete LB resources
An IsNotFound error is fine since it means the object is
already deleted, so check for it before returning an error.
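A hedged sketch of this pattern, with a stand-in `isNotFound` helper in place of the cloud SDK's own 404 check:
```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the cloud SDK's own 404 error type.
var errNotFound = errors.New("not found")

func isNotFound(err error) bool { return errors.Is(err, errNotFound) }

// deleteLBResource runs the delete call and treats "already deleted" as success.
func deleteLBResource(name string, del func() error) error {
	if err := del(); err != nil && !isNotFound(err) {
		return fmt.Errorf("failed to delete %s: %v", name, err)
	}
	return nil // deleted now, or it was already gone
}

func main() {
	// Simulate a listener that was already removed out-of-band.
	err := deleteLBResource("listener", func() error { return errNotFound })
	fmt.Println("error:", err) // error: <nil>
}
```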
Automatic merge from submit-queue (batch tested with PRs 47915, 47856, 44086, 47575, 47475)
AWS: Fix suspicious loop comparing permissions
Because we only ever call it with a single UserId/GroupId, this would
not have been a problem in practice, but this fixes the code.
Fix #36902
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47922, 47195, 47241, 47095, 47401)
AWS: Set CredentialsChainVerboseErrors
This avoids a rather confusing error message.
Fix #39374
```release-note
NONE
```
Automatic merge from submit-queue
Add E2E tests for Azure internal loadbalancer support, fix an issue for public IP resource deletion.
**What this PR does / why we need it**:
- Add E2E tests for Azure internal loadbalancer support: https://github.com/kubernetes/kubernetes/pull/43510
- Fix an issue where the public IP resource does not get deleted when switching from an external load balancer to an internal load balancer with a static IP.
**Special notes for your reviewer**:
1. Add a new Azure resource tag to Public IP resources to indicate Kubernetes-managed resources.
Currently we determine whether the public IP resource should be deleted by looking at the LoadBalancerIP property on the spec. In the scenario 'switching from an external load balancer to an internal load balancer with static IP', that value might have been updated for the internal load balancer. So here we add an explicit tag for Kubernetes-managed resources.
2. Merge cleanupPublicIP logic into cleanupLoadBalancer
**Release note**:
NONE
CC @brendandburns @colemickens
Automatic merge from submit-queue
New annotation to add existing Security Groups to ELBs created by AWS cloudprovider
**What this PR does / why we need it**:
When a K8s cluster is deployed in an existing VPC, there might be a need to attach extra Security Groups to the ELB created by the AWS cloudprovider, for example when such Security Groups are maintained by another team.
**Special notes for your reviewer**:
For tests to pass, this depends on https://github.com/kubernetes/kubernetes/pull/45168 and therefore includes it.
**Release note**:
```release-note
New 'service.beta.kubernetes.io/aws-load-balancer-extra-security-groups' Service annotation to specify extra Security Groups to be added to ELB created by AWS cloudprovider
```
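A sketch of how the annotation value could be consumed; the annotation key comes from the release note above, while the parsing helper is illustrative:
```go
package main

import (
	"fmt"
	"strings"
)

const extraSecurityGroupsAnnotation = "service.beta.kubernetes.io/aws-load-balancer-extra-security-groups"

// extraSecurityGroups splits the comma-separated annotation value into IDs.
func extraSecurityGroups(annotations map[string]string) []string {
	var groups []string
	for _, sg := range strings.Split(annotations[extraSecurityGroupsAnnotation], ",") {
		if sg = strings.TrimSpace(sg); sg != "" {
			groups = append(groups, sg)
		}
	}
	return groups
}

func main() {
	svcAnnotations := map[string]string{
		extraSecurityGroupsAnnotation: "sg-0123456789abcdef0, sg-0fedcba9876543210",
	}
	fmt.Println(extraSecurityGroups(svcAnnotations)) // [sg-0123456789abcdef0 sg-0fedcba9876543210]
}
```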
Automatic merge from submit-queue
AWS: Remove blackhole routes in our managed range
Blackhole routes otherwise accumulate unboundedly. We are also careful
to ensure that we only do so within the managed range, which requires
enlisting the help of the routecontroller.
Fix #47524
```release-note
AWS: clean up blackhole routes when using kubenet
```
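A sketch of the cleanup rule described above: only blackhole routes whose destination falls inside the managed (cluster) CIDR are candidates for removal (the route type here is a simplification):
```go
package main

import (
	"fmt"
	"net"
)

type route struct {
	DestinationCIDR string
	Blackhole       bool
}

// routesToDelete keeps only blackhole routes whose destination lies inside
// the managed (cluster) CIDR; everything else is left untouched.
func routesToDelete(routes []route, managedCIDR string) []route {
	_, managed, err := net.ParseCIDR(managedCIDR)
	if err != nil {
		return nil
	}
	var out []route
	for _, r := range routes {
		if !r.Blackhole {
			continue
		}
		ip, _, err := net.ParseCIDR(r.DestinationCIDR)
		if err != nil || !managed.Contains(ip) {
			continue // never touch routes outside the managed range
		}
		out = append(out, r)
	}
	return out
}

func main() {
	routes := []route{
		{"10.244.1.0/24", true},  // managed + blackhole: delete
		{"192.168.0.0/24", true}, // outside managed range: keep
		{"10.244.2.0/24", false}, // healthy route: keep
	}
	fmt.Println(routesToDelete(routes, "10.244.0.0/16"))
}
```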
We maintain a cache of all instances, and we invalidate the cache
whenever we see a new instance. For ELBs that should be sufficient,
because our usage is limited to instance ids and security groups, which
should not change.
Fix #45050
Automatic merge from submit-queue (batch tested with PRs 47510, 47516, 47482, 47521, 47537)
Batch AWS getInstancesByNodeNames calls with FilterNodeLimit
We are going to limit the getInstancesByNodeNames call with a batch
size of 150.
Fixes #47271
```release-note
AWS: Batch DescribeInstance calls with nodeNames to 150 limit, to stay within AWS filter limits.
```
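A minimal sketch of the batching: split the node names into chunks of at most 150 before issuing the DescribeInstances-style call (`filterNodeLimit` and `describeBatch` are illustrative names):
```go
package main

import "fmt"

const filterNodeLimit = 150

// describeInBatches invokes describeBatch with at most filterNodeLimit names
// per call, so each request stays within the AWS filter limits.
func describeInBatches(nodeNames []string, describeBatch func([]string) error) error {
	for start := 0; start < len(nodeNames); start += filterNodeLimit {
		end := start + filterNodeLimit
		if end > len(nodeNames) {
			end = len(nodeNames)
		}
		if err := describeBatch(nodeNames[start:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	names := make([]string, 400)
	for i := range names {
		names[i] = fmt.Sprintf("node-%d", i)
	}
	_ = describeInBatches(names, func(batch []string) error {
		fmt.Println("batch size:", len(batch)) // 150, 150, 100
		return nil
	})
}
```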
Blackhole routes otherwise accumulate unboundedly. We are also careful
to ensure that we only do so within the managed range, which requires
enlisting the help of the routecontroller.
Fix #47524
Automatic merge from submit-queue
AWS: Process disk attachments even with duplicate NodeNames
Fix #47404
```release-note
AWS: Process disk attachments even with duplicate NodeNames
```
Automatic merge from submit-queue (batch tested with PRs 47302, 47389, 47402, 47468, 47459)
[GCE] Fix ILB sharing and GC
Fixes #47092
- Users must opt in to sharing backend services (alpha feature - may be removed in a future release)
- Shared backend services use a hash of their settings (so far, only session affinity) to determine similarity (again, this may be removed)
- Move resource cleanup to after the ILB setup.
/assign @bowei
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46929, 47391, 47399, 47428, 47274)
AWS: Richer log message when metadata fails
Not a resolution, but should at least help determine the issue.
Issue #41904
```release-note
NONE
```
Automatic merge from submit-queue
AWS for cloud-controller-manager
fixes #47214
This implements the NodeAddressesByProviderID and InstanceTypeByProviderID methods used by the cloud-controller-manager for the AWS provider.
NodeAddressesByProviderID uses DescribeInstances (for normal addresses) and DescribeAddresses (for Elastic IP addresses).
InstanceTypeByProviderID uses DescribeInstances.
```release-note
NONE
```
Automatic merge from submit-queue
Azure for cloud-controller-manager
**What this PR does / why we need it**:
This implements the NodeAddressesByProviderID and InstanceTypeByProviderID methods used by the cloud-controller-manager for the Azure provider.
**Release note**:
```release-note
NONE
```
Addresses #47257
Service objects can be annotated with
`service.beta.kubernetes.io/aws-load-balancer-extra-security-groups`
to specify existing security groups to be added to ELB
created by AWS cloudprovider
Automatic merge from submit-queue (batch tested with PRs 43005, 46660, 46385, 46991, 47103)
Azure cloudprovider retry using flowcontrol
An initial attempt at engaging exponential backoff for API error responses.
Addresses #47048
Uses k8s.io/client-go/util/flowcontrol; implementation inspired by GCE
cloudprovider backoff.
**What this PR does / why we need it**:
The existing azure cloudprovider implementation has no guard rails in place to adapt to unexpected underlying operational conditions (i.e., clogs in resource plumbing between k8s runtime and the cloud API). The purpose of these changes is to support exponential backoff wrapping around API calls; and to support targeted rate limiting. Both of these options are configurable via `--cloud-config`.
Inspired by GCE's use of `k8s.io/client-go/util/flowcontrol` and `k8s.io/apimachinery/pkg/util/wait`, this PR likewise uses `flowcontrol` for rate limiting and `wait` to thinly wrap backoff retry attempts to the API.
**Special notes for your reviewer**:
Pay special attention to the declaration of retry-able conditions for an unsuccessful HTTP request:
- all `4xx` and `5xx` HTTP responses
- non-nil error responses
And the declaration of retry success conditions:
- `2xx` HTTP responses
Tests updated to include additions to `Config`.
Those may be incomplete, or in other ways non-representative.
**Release note**:
```release-note
Added exponential backoff to Azure cloudprovider
```
- leveraging the Config struct (`--cloud-config`) to store backoff and rate-limit on/off switches and performance configuration
- added additional error logging
- enabled backoff for VM GET requests
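A hedged sketch of the retry policy described above, using `k8s.io/apimachinery/pkg/util/wait` as the PR does: retry on non-nil errors and on 4xx/5xx responses, succeed on 2xx; `doRequest` stands in for the Azure API call and the backoff parameters are arbitrary:
```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// shouldRetry mirrors the conditions listed above: retry on non-nil errors
// and on any 4xx/5xx response.
func shouldRetry(resp *http.Response, err error) bool {
	if err != nil || resp == nil {
		return true
	}
	return resp.StatusCode >= 400
}

// callWithBackoff keeps retrying the request with exponential backoff until
// it gets a 2xx response or runs out of steps.
func callWithBackoff(doRequest func() (*http.Response, error)) error {
	backoff := wait.Backoff{Duration: time.Second, Factor: 2.0, Jitter: 0.1, Steps: 5}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		resp, err := doRequest()
		if shouldRetry(resp, err) {
			// Returning (false, nil) suppresses the error so the caller keeps
			// backing off instead of aborting immediately.
			return false, nil
		}
		return resp.StatusCode >= 200 && resp.StatusCode < 300, nil
	})
}

func main() {
	attempts := 0
	err := callWithBackoff(func() (*http.Response, error) {
		attempts++
		if attempts < 3 {
			return &http.Response{StatusCode: http.StatusInternalServerError}, nil
		}
		return &http.Response{StatusCode: http.StatusOK}, nil
	})
	fmt.Println("attempts:", attempts, "err:", err) // attempts: 3 err: <nil>
}
```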
Automatic merge from submit-queue (batch tested with PRs 36721, 46483, 45500, 46724, 46036)
AWS: Allow configuration of a single security group for ELBs
**What this PR does / why we need it**:
AWS has a hard limit on the number of Security Groups (500). Right now, every time an ELB is created, Kubernetes creates a new Security Group. This allows for specifying a Security Group to use for all ELBs.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
For some reason the Diff tool makes this look like it was way more changes than it really was.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46239, 46627, 46346, 46388, 46524)
move labels to components which own the APIs
During the apimachinery split in 1.6, we accidentally moved several label APIs into apimachinery. They don't belong there, since the individual APIs are not general machinery concerns, but instead are the concern of particular components: most commonly the kubelet. This pull moves the labels into their owning components and out of API machinery.
@kubernetes/sig-api-machinery-misc @kubernetes/api-reviewers @kubernetes/api-approvers
@derekwaynecarr since most of these are related to the kubelet
- added info and error logs for appropriate backoff conditions/states
- rationalized log idioms across all resource requests that are backoff-enabled
- processRetryResponse, as a wait.ConditionFunc, needs to suppress errors if it wants the caller to continue backing off
Automatic merge from submit-queue (batch tested with PRs 43505, 45168, 46439, 46677, 46623)
fix AWS tagging to add missing tags only
It seems that the intention of the original code was to build a map of missing
tags and call the AWS API to add just those, but due to a typo the full
set of tags was always (re)added.
```release-note
NONE
```
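A sketch of the restored "add only missing tags" logic: compute the set difference between the desired tags and those already on the resource (the helper and types are illustrative, not the provider's own):
```go
package main

import "fmt"

// missingTags returns only the desired tags that are not yet present on the
// resource, so that just those get added.
func missingTags(desired, current map[string]string) map[string]string {
	missing := map[string]string{}
	for k, v := range desired {
		if _, ok := current[k]; !ok {
			missing[k] = v
		}
	}
	return missing
}

func main() {
	desired := map[string]string{"KubernetesCluster": "prod", "Owner": "platform"}
	current := map[string]string{"KubernetesCluster": "prod"}
	fmt.Println(missingTags(desired, current)) // map[Owner:platform]
}
```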
Automatic merge from submit-queue (batch tested with PRs 46686, 45049, 46323, 45708, 46487)
Log an EBS vol's instance when attaching fails because VolumeInUse
Messages now look something like this:
E0427 15:44:37.617134 16932 attacher.go:73] Error attaching volume "vol-00095ddceae1a96ed": Error attaching EBS volume "vol-00095ddceae1a96ed" to instance "i-245203b7": VolumeInUse: vol-00095ddceae1a96ed is already attached to an instance
status code: 400, request id: f510c439-64fe-43ea-b3ef-f496a5cd0577. The volume is currently attached to instance "i-072d9328131bcd9cd"
Weird that AWS doesn't bother to put that information in there for us (it does when you try to delete a volume that's in use).
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46394, 46650, 46436, 46673, 46212)
refactor and export openstack service clients
**What this PR does / why we need it**:
Refactor and export the OpenStack service clients.
Exporting the OpenStack clients so other projects can use them to call functions that are not implemented in the OpenStack cloud provider yet.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
An initial attempt at engaging exponential backoff for API error responses.
Uses k8s.io/client-go/util/flowcontrol; implementation inspired by GCE
cloudprovider backoff.
Automatic merge from submit-queue (batch tested with PRs 46489, 46281, 46463, 46114, 43946)
AWS: consider instances of all states in DisksAreAttached, not just "running"
Require callers of `getInstancesByNodeNames(Cached)` to specify the states they want to filter instances by, if any. DisksAreAttached cannot get only "running" instances because of the following attach/detach bug we discovered:
1. Node A stops (or reboots) and stays down for x amount of time
2. Kube reschedules all pods to different nodes; the ones using ebs volumes cannot run because their volumes are still attached to node A
3. Verify volumes are attached check happens while node A is down
4. Since aws ebs bulk verify filters by running nodes, it assumes the volumes attached to node A are detached and removes them all from ASW
5. Node A comes back; its volumes are still attached to it but the attach detach controller has removed them all from asw and so will never detach them even though they are no longer desired on this node and in fact desired elsewhere
6. Pods cannot run because their volumes are still attached to node A
So the idea here is to remove the wrong assumption that callers of `getInstancesByNodeNames(Cached)` only want "running" nodes.
I hope this isn't too confusing, open to alternative ways of fixing the bug + making the code nice.
ping @gnufied @kubernetes/sig-storage-bugs
```release-note
Fix AWS EBS volumes not getting detached from node if routine to verify volumes are attached runs while the node is down
```
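An illustrative sketch of the new contract: callers pass the instance states they want to filter by, and passing none means "all states" (the names here are simplifications of the provider code):
```go
package main

import "fmt"

type instance struct {
	Name  string
	State string
}

// instancesByNodeNames filters instances by node name and, when any states
// are given, by instance state; passing no states means "all states".
func instancesByNodeNames(all []instance, names []string, states ...string) []instance {
	wantName := map[string]bool{}
	for _, n := range names {
		wantName[n] = true
	}
	wantState := map[string]bool{}
	for _, s := range states {
		wantState[s] = true
	}
	var out []instance
	for _, inst := range all {
		if !wantName[inst.Name] {
			continue
		}
		if len(states) > 0 && !wantState[inst.State] {
			continue
		}
		out = append(out, inst)
	}
	return out
}

func main() {
	all := []instance{{"node-a", "running"}, {"node-b", "stopped"}}
	fmt.Println(len(instancesByNodeNames(all, []string{"node-a", "node-b"})))            // 2: all states
	fmt.Println(len(instancesByNodeNames(all, []string{"node-a", "node-b"}, "running"))) // 1: running only
}
```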
Automatic merge from submit-queue
AWS: support node port health check
**What this PR does / why we need it**:
If a custom health check is set via the beta annotation on a service, it
should be used for the ELB health check. This patch adds support for
that.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Let me know if any tests need to be added.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46252, 45524, 46236, 46277, 46522)
Make GCE load-balancers create health checks for nodes
From #14661. Proposal on kubernetes/community#552. Fixes #46313.
Bullet points:
- Create nodes health check and firewall (for health checking) for non-OnlyLocal service.
- Create local traffic health check and firewall (for health checking) for OnlyLocal service.
- Version skew:
- Don't create nodes health check if any node has version < 1.7.0.
- Don't backfill nodes health check on existing LBs unless users explicitly trigger it.
**Release note**:
```release-note
GCE Cloud Provider: newly created LoadBalancer-type Services now have health checks for nodes by default.
An existing LoadBalancer will have a health check attached to it when:
- Change Service.Spec.Type from LoadBalancer to others and flip it back.
- Any effective change on Service.Spec.ExternalTrafficPolicy.
```
Automatic merge from submit-queue (batch tested with PRs 46383, 45645, 45923, 44884, 46294)
Created unit tests for GCE cloud provider storage interface.
- Currently covers CreateDisk, DeleteDisk, and GetAutoLabelsForPD
- Created ServiceManager interface in gce.go to facilitate mocking in tests.
**What this PR does / why we need it**:
Increasing test coverage for GCE Persistent Disk.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44573
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45518, 46127, 46146, 45932, 45003)
aws: Support for ELB tagging by users
This PR provides support for tagging AWS ELBs using information in an
annotation, provided as a comma-separated list of key-value pairs.
Closes https://github.com/kubernetes/community/pull/404
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
GCE - Retrieve subnetwork name/url from gce.conf
**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is type manual.
**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not provided through this feature. Users must specify the subnetwork name/url through the gce.conf.
Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added deprecation notice and guidance for cloud providers.
**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers.
**Which issue this PR fixes**
**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Fix provisioned GCE PD not being reused if already exists
@jsafrane PTAL
This is another attempt at https://github.com/kubernetes/kubernetes/pull/38702 . We have observed that `gce.service.Disks.Insert(gce.projectID, zone, diskToCreate).Do()` instantly gets an error response of alreadyExists, so we must check for it.
I am not sure if we still need to check for the error after `waitForZoneOp`; I think that if there is an alreadyExists error, the `Do()` above will always respond with it instantly. But because I'm not sure, and to be safe, I will leave it.
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Only retrieve relevant volumes
**What this PR does / why we need it**:
Improves performance for Cinder volume attach/detach calls.
Currently when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud to iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent when the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results.
**Which issue this PR fixes**:
https://github.com/kubernetes/kubernetes/issues/26404
**Special notes for your reviewer**:
There were 2 ways of addressing this problem:
1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.
Since #1 does effectively the same as #2, I went with it because it ensures backwards compatibility.
One assumption that is made is that the `volumeName` being retrieved matches exactly the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily.
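A plain-stdlib sketch of option #1: refine the volume listing with a `?name=XXX` query parameter instead of paging through every volume (the endpoint path is illustrative):
```go
package main

import (
	"fmt"
	"net/url"
)

// volumeListURL builds a listing URL that filters server-side by volume name,
// so only matching volumes are returned instead of every page of results.
func volumeListURL(endpoint, volumeName string) string {
	q := url.Values{}
	q.Set("name", volumeName)
	return endpoint + "/volumes?" + q.Encode()
}

func main() {
	fmt.Println(volumeListURL("https://cinder.example:8776/v2/tenant", "pvc-1234"))
	// https://cinder.example:8776/v2/tenant/volumes?name=pvc-1234
}
```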
**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.
That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
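A hedged sketch of zone selection from the comma-separated zones parameter; the spreading rule shown (hash of the claim name) is an illustrative stand-in, not necessarily the provisioner's exact algorithm:
```go
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

// chooseZone parses the comma-separated zones parameter and picks one zone
// deterministically from the claim name, spreading claims across zones.
func chooseZone(zonesParam, pvcName string) string {
	var zones []string
	for _, z := range strings.Split(zonesParam, ",") {
		if z = strings.TrimSpace(z); z != "" {
			zones = append(zones, z)
		}
	}
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32()%uint32(len(zones)))]
}

func main() {
	fmt.Println(chooseZone("us-east-1a, us-east-1b, us-east-1c", "data-galera-0"))
}
```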
Automatic merge from submit-queue (batch tested with PRs 45891, 46147)
Watching ClusterId from within GCE cloud provider
**What this PR does / why we need it**:
Adds the ability for the GCE cloud provider to watch a config map for `clusterId` and `providerId`.
WIP - still needs more testing
cc @MrHohn @csbell @madhusudancs @thockin @bowei @nikhiljindal
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
vSphere storage policy support for dynamic volume provisioning
Until now, the vSphere cloud provider has provided support to configure persistent volumes with VSAN storage capabilities (kubernetes#42974), but this only works with VSAN.
Also there might be other use cases:
- The user might need a way to configure a policy on other datastores like VMFS, NFS etc.
- Use Storage IO control, VMCrypt policies for a persistent disk.
We can achieve the above 2 use cases by using existing storage policies which are already created on vCenter using the Storage Policy Based Management service. The user will specify the SPBM policy ID as part of dynamic provisioning:
- The resultant persistent volume will have the policy configured with it.
- The persistent volume will be created on a compatible datastore that satisfies the storage policy requirements.
- If there are multiple compatible datastores, the datastore with the most free space is chosen by default.
- If the user specifies a datastore along with the storage policy ID, the volume will be created on this datastore if it is compatible. If the user-specified datastore is incompatible, the reasons for incompatibility are reported to the user.
- Also, the user will be able to see the associations of persistent volume object with the policy on the vCenter once the volume is attached to the node.
For instance, in the example below, the volume will be created on a compatible datastore with the most free space that satisfies the "Gold" storage policy requirements.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
```
For instance, in the example below, the vSphere CP checks whether "VSANDatastore" is compatible with the "Gold" storage policy requirements. If yes, the volume will be provisioned on "VSANDatastore"; otherwise it will error out that "VSANDatastore" is not compatible, with the exact reason for the failure.
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: zeroedthick
  storagepolicyName: Gold
  datastore: VSANDatastore
```
As a part of this change, 4 commits have been added to this PR.
1. Vendor changes for vmware/govmomi
2. Changes to the VsphereVirtualDiskVolumeSource in the Kubernetes API. Added 2 additional fields StoragePolicyName, StoragePolicyID
3. Swagger and Open spec API changes.
4. vSphere Cloud Provider changes to implement the storage policy support.
**Release note**:
```release-note
vSphere cloud provider: vSphere Storage policy Support for dynamic volume provisioning
```
Automatic merge from submit-queue
Adapt loadbalancer deleting/updating when using cloudprovider openstack in openstack/liberty
**What this PR does / why we need it**:
Make an extra verification on the returned listeners and pools because the gophercloud query doesn't filter the results by loadbalancerID / listenerID respectively when using **openstack/liberty**.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
#33759
**Special notes for your reviewer**:
#33759 is supposed to have a pull request which fixes this problem, but in release 1.5 load balancers don't use that patched code.
**Release note**:
NONE
```release-note
```
This PR provides support for tagging AWS ELBs using information in an
annotation, provided as a comma-separated list of key-value pairs.
Closes https://github.com/kubernetes/community/pull/404
Automatic merge from submit-queue
Initialize cloud providers with a K8s clientBuilder
**What this PR does / why we need it**:
This PR provides each cloud provider the ability to generate kubernetes clients. Either the full access or service account client builder is passed from the controller manager. Cloud providers could need to retrieve information from the cluster that isn't provided through defined interfaces, and this seems more preferable to adding parameters.
Please leave your thoughts/comments.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Same internal and external ip for vSphere Cloud Provider
Currently, the vSphere Cloud Provider reports the internal IP as the container IP address. This PR modifies the vSphere Cloud Provider to report the same IP address, provided by the VMware infrastructure, as both internal and external.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Add approvers to vsphere cloudprovider
This PR adds approvers for vSphere Cloud provider.
cc @pdhamdhere @tusharnt @BaluDontu @divyenpatel @luomiao
Automatic merge from submit-queue
Use beta GCP API instead of alpha in CloudCIDR controller
The feature we are using has been promoted to beta.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45374, 44537, 45739, 44474, 45888)
Fix attach volume to instance repeatedly
1. When a volume's status is 'attaching', the controllermanager will attach
it again and return an error, so it is necessary to check the volume's
status before attaching/detaching the volume.
2. When a volume's status is 'attaching', its attachments will be None,
so the controllermanager can't get the device path and emits some failed events.
But this is normal, so don't return an error when attachments is None.
Fix bug: #44536
Automatic merge from submit-queue (batch tested with PRs 45623, 45241, 45460, 41162)
Promotes Source IP preservation for Virtual IPs from Beta to GA
Fixes #33625. Feature issue: kubernetes/features#27.
Bullet points:
- Declare 2 fields (ExternalTraffic and HealthCheckNodePort) that mirror the ESIPP annotations.
- ESIPP alpha annotations will be ignored.
- Existing ESIPP beta annotations will still be fully supported.
- Allow promoting beta annotations to first class fields or reversely.
- Disallow setting invalid ExternalTraffic and HealthCheckNodePort on services. Default the ExternalTraffic field for NodePort or LoadBalancer type services to "Global" if not set.
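One possible reading of the field-plus-annotation compatibility rules above, as a simplified sketch: prefer the new field and fall back to the still-supported beta annotation, defaulting to "Global" when neither is set (types simplified for illustration):
```go
package main

import "fmt"

const betaExternalTrafficAnnotation = "service.beta.kubernetes.io/external-traffic"

type serviceSpec struct {
	ExternalTrafficPolicy string
}

// effectiveExternalTrafficPolicy prefers the new field, falls back to the
// beta annotation, and defaults to "Global" when neither is set.
func effectiveExternalTrafficPolicy(spec serviceSpec, annotations map[string]string) string {
	if spec.ExternalTrafficPolicy != "" {
		return spec.ExternalTrafficPolicy
	}
	if v, ok := annotations[betaExternalTrafficAnnotation]; ok && v != "" {
		return v
	}
	return "Global"
}

func main() {
	fmt.Println(effectiveExternalTrafficPolicy(serviceSpec{}, map[string]string{
		betaExternalTrafficAnnotation: "OnlyLocal",
	})) // OnlyLocal
	fmt.Println(effectiveExternalTrafficPolicy(serviceSpec{}, nil)) // Global
}
```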
**Release note**:
```release-note
Promotes Source IP preservation for Virtual IPs to GA.
Two api fields are defined correspondingly:
- Service.Spec.ExternalTrafficPolicy <- 'service.beta.kubernetes.io/external-traffic' annotation.
- Service.Spec.HealthCheckNodePort <- 'service.beta.kubernetes.io/healthcheck-nodeport' annotation.
```
When a volume's status is 'attaching', its attachments will be None,
so the controllermanager can't get the device path and emits some failed events.
But this is normal; let's fix it.
Automatic merge from submit-queue (batch tested with PRs 45569, 45602, 45604, 45478, 45550)
Fixing VolumesAreAttached and DisksAreAttached functions in vSphere
**What this PR does / why we need it**:
In vSphere HA, when node failover happens, the node VM momentarily goes into a "not connected" state. During this time, if Kubernetes calls the VolumesAreAttached function, we were returning an incorrect map, with the status for the volume set to false - the detached state.
Volumes attached to previous nodes must be detached before they can attach to the new node. Kubernetes attempts to check volume attachment. When the node VM is not accessible, or for any reason we cannot determine whether the disk is attached, we were returning a map of volume paths with their attachment status set to false. This was misinterpreted as the disks already being detached from the node, and Kubernetes marked the volumes as detached after the orphaned pod was cleaned up. This causes volumes to remain attached to the previous node, and pod creation always remains in the "ContainerCreating" state. Since both nodes are powered on, volumes cannot be attached to the new node.
**Logs before fix**
```
{"log":"E0508 21:31:20.902501 1 vsphere.go:1053] disk uuid not found for [vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk. err: No disk UUID fou
nd\n","stream":"stderr","time":"2017-05-08T21:31:20.902792337Z"}
{"log":"E0508 21:31:20.902552 1 vsphere.go:1041] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-08T21:31:20.902842673Z"}
{"log":"I0508 21:31:20.902575 1 attacher.go:114] VolumesAreAttached: check volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b75170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (specName
: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached\n","stream":"stderr","time":"2017-05-08T21:31:20.902849717Z"}
{"log":"I0508 21:31:20.902596 1 operation_generator.go:166] VerifyVolumesAreAttached determined volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-8b7
5170e-342d-11e7-bab5-0050568aeb0a.vmdk\" (spec.Name: \"pvc-8b75170e-342d-11e7-bab5-0050568aeb0a\") is no longer attached to node \"node3\", therefore it was marked as detached.\n","stream":"s
tderr","time":"2017-05-08T21:31:20.902863097Z"}
```
In this change, we make sure the correct volume attachment map is returned, and in case any error occurs while checking a disk's status, we return a nil map.
**Logs after fix**
```
{"log":"E0509 20:25:37.982152 1 vsphere.go:1067] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982516134Z"}
{"log":"E0509 20:25:37.982190 1 attacher.go:104] Error checking if volumes ([[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk [vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk]) are attached to current node (\"node3\"). err=No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982521101Z"}
{"log":"E0509 20:25:37.982220 1 operation_generator.go:158] VolumesAreAttached failed for checking on node \"node3\" with: No disk UUID found\n","stream":"stderr","time":"2017-05-09T20:25:37.982526285Z"}
{"log":"I0509 20:25:39.157279 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157724393Z"}
{"log":"I0509 20:25:39.157329 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157787946Z"}
{"log":"I0509 20:25:39.157367 1 attacher.go:115] VolumesAreAttached: volume \"[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" (specName: \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\") is attached\n","stream":"stderr","time":"2017-05-09T20:25:39.157794586Z"}
```
```
{"log":"I0509 20:25:41.267425 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:25:41.267883567Z"}
{"log":"I0509 20:25:41.271836 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:41.272703255Z"}
{"log":"I0509 20:25:47.928021 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c26fcae8-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:25:47.928348553Z"}
{"log":"I0509 20:26:12.535962 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:12.536055214Z"}
{"log":"I0509 20:26:14.188580 1 operation_generator.go:341] DetachVolume.Detach succeeded for volume \"pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c25d08d3-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:14.188792677Z"}
{"log":"I0509 20:26:40.355656 1 reconciler.go:173] Started DetachVolume for volume \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\" from node \"node3\"\n","stream":"stderr","time":"2017-05-09T20:26:40.355922165Z"}
{"log":"I0509 20:26:40.357988 1 operation_generator.go:694] Verified volume is safe to detach for volume \"pvc-c268f141-34f2-11e7-9303-0050568a3ac1\" (UniqueName: \"kubernetes.io/vsphere-volume/[vsanDatastore] kubevols/kubernetes-dynamic-pvc-c268f141-34f2-11e7-9303-0050568a3ac1.vmdk\") on node \"node3\" \n","stream":"stderr","time":"2017-05-09T20:26:40.358177953Z"}
```
**Which issue this PR fixes**
fixes #45464, https://github.com/vmware/kubernetes/issues/116
**Special notes for your reviewer**:
Verified this change on locally built hyperkube image - v1.7.0-alpha.3.147+3c0526cb64bdf5-dirty
**Performed many failovers with large volumes (30GB) attached to the pod.**
$ kubectl describe pod
Name: wordpress-mysql-2789807967-3xcvc
Node: node3/172.1.87.0
Status: Running
Powered Off node3's host. pod failed over to node2. Verified all 3 disks detached from node3 and attached to node2.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-qx0b0
Node: node2/172.1.9.0
Status: Running
Powered Off node2's host. pod failed over to node3. Verified all 3 disks detached from node2 and attached to node3.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-7849s
Node: node3/172.1.87.0
Status: Running
Powered Off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-26lp1
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-4pdtl
Node: node3/172.1.87.0
Status: Running
Powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1.
$ kubectl describe pod
Name: wordpress-mysql-2789807967-t375f
Node: node1/172.1.98.0
Status: Running
Powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-pn6ps
Node: node3/172.1.87.0
Status: Running
powered off node3's host. pod failed over to node1. Verified all 3 disks detached from node3 and attached to node1
$ kubectl describe pods
Name: wordpress-mysql-2789807967-0wqc1
Node: node1/172.1.98.0
Status: Running
powered off node1's host. pod failed over to node3. Verified all 3 disks detached from node1 and attached to node3.
$ kubectl describe pods
Name: wordpress-mysql-2789807967-821nc
Node: node3/172.1.87.0
Status: Running
**Release note**:
```release-note
NONE
```
CC: @BaluDontu @abrarshivani @luomiao @tusharnt @pdhamdhere
Automatic merge from submit-queue (batch tested with PRs 43067, 45586, 45590, 38636, 45599)
AWS: Remove check that forces loadBalancerSourceRanges to be 0.0.0.0/0.
fixes #38633
Remove the check that forces loadBalancerSourceRanges to be 0.0.0.0/0. Also remove the check that forces the service.beta.kubernetes.io/aws-load-balancer-internal annotation to be 0.0.0.0/0. Ideally it should be a boolean, but for backward compatibility we leave it as a non-empty value.
Automatic merge from submit-queue (batch tested with PRs 45382, 45384, 44781, 45333, 45543)
azure: improve user agent string
**What this PR does / why we need it**: the UA string doesn't actually contain "kubernetes" in it
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: none
**Release note**:
```release-note
NONE
```
cc: @brendandburns
Automatic merge from submit-queue
azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity
**What this PR does / why we need it**:
1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683
**Special notes for your reviewer**: n/a
**Release note**:
```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
Automatic merge from submit-queue
Filter out IPV6 addresses from NodeAddresses() returned by vSphere
The vSphere CP returns both IPV6 and IPV4 addresses for a Node as part of NodeAddresses() implementation. However, Kubelet fails due to duplicate api.NodeAddress value when the node has an IPV6 address associated with it. This issue is tracked in #42690. The following are observed:
- when we enabled the logs and checked the addresses sent by vSphere CP to Kubelet, we don't see any duplicate addresses at all.
- Also, kubelet_node_status doesn’t receive any duplicate address from cloud provider.
However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine.
Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue when the kubelet propagates these addresses to the API server, or the API server is unable to handle IPV6 addresses.
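A minimal sketch of the filtering described: keep only IPv4 entries when building the node address list (the standalone type is a simplification of the Kubernetes API type):
```go
package main

import (
	"fmt"
	"net"
)

type nodeAddress struct {
	Type    string
	Address string
}

// filterIPv4 drops any address that does not parse as IPv4, so duplicate
// handling problems caused by IPv6 entries are avoided.
func filterIPv4(addrs []nodeAddress) []nodeAddress {
	var out []nodeAddress
	for _, a := range addrs {
		if ip := net.ParseIP(a.Address); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	addrs := []nodeAddress{
		{"InternalIP", "10.0.0.5"},
		{"InternalIP", "fe80::250:56ff:fe89:d2"},
	}
	fmt.Println(filterIPv4(addrs)) // only the 10.0.0.5 entry remains
}
```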
@divyenpatel @abrarshivani @pdhamdhere @tusharnt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones
**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the Cinder storageclass, the volume is provisioned in a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a statefulset across 3 different zones. Currently this is not possible with statefulsets and the Cinder storageclass. In this new solution, if we leave it empty, the algorithm will choose the zone for the Cinder volume in a similar style to the AWS and GCE storageclass solutions.
**Which issue this PR fixes**: fixes #44735
**Special notes for your reviewer**:
example:
```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```
If this example is deployed, it will automatically create one replica per AZ. This helps us a lot in making HA databases.
The current storageclass for Cinder is not perfect in the case of statefulsets. Let's assume that the Cinder storageclass is defined to be in a zone called nova, but because labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a Cinder drive located in zone x from zone y. However, should we have the possibility to choose between cross-zone Cinder mounts or not? In my opinion it is not a good way of doing things to mount a volume from another zone than the one where the pod is located (it means more network traffic between zones). What do you think? The current new solution does not allow that anymore (should we have the possibility to allow it? It would mean removing the labels from the PV).
There might be some things that still need to be fixed in this release, and I need help for that. Some parts of the code are not perfect.
Issues I am thinking about (I need some help with these):
1) Can everybody see in OpenStack which AZ their servers are in? Can there be an access policy that does not show that? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really needed to make a new service client using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, there might be a use case where a Cinder drive is provisioned to zone 2 but the pod cannot start, because Kubernetes does not have any nodes in zone 2. Could we have a better way to fetch the Kubernetes nodes' zones? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be), so I have added those labels manually to the nodes. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot target pods.
cc @rootfs @anguslees @jsafrane
```release-note
The default behaviour of the Cinder storageclass is changed. If availability is not specified, the zone is chosen by an algorithm. This makes it possible to spread stateful pods across many zones.
```
Fixes support for multiple instances of loadBalancerSourceRanges.
Previously, the names of the rules for each address range conflicted
causing only one to be applied. Now each gets a unique name.
Automatic merge from submit-queue (batch tested with PRs 45018, 45330)
Add exponential backoff to openstack loadbalancer functions
Using exponential backoff to lower openstack load and reduce API call throttling
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45508, 44258, 44126, 45441, 45320)
cloud initialize node in external cloud controller
@thockin This PR adds support in the `cloud-controller-manager` to initialize nodes (instead of kubelet, which did it previously)
This also adds support in the kubelet to skip node cloud initialization when `--cloud-provider=external`
Specifically,
Kubelet
1. The kubelet has a new flag called `--provider-id` which uniquely identifies a node in an external DB
2. The kubelet sets a node taint - called "ExternalCloudProvider=true:NoSchedule" if cloudprovider == "external"
Cloud-Controller-Manager
1. The cloud-controller-manager listens for "AddNode" events, and then processes nodes that have the above taint. It performs the cloud node initialization steps that were previously done by the kubelet.
2. On addition of node, it figures out the zone, region, instance-type, removes the above taint and updates the node.
3. Then periodically queries the cloudprovider for node addresses (which was previously done by the kubelet) and updates the node if there are new addresses
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 41903, 45311, 45474, 45472, 45501)
Fetch VM UUID from - /sys/class/dmi/id/product_serial
**What this PR does / why we need it**:
The current code fetches the VM UUID using the UUID reported at `/sys/devices/virtual/dmi/id/product_uuid`. This doesn't work with all distros, such as Ubuntu 16.04 and Fedora.
This updates the code to fetch the VM UUID from `/sys/class/dmi/id/product_serial`.
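A small sketch of reading the VM UUID from the path this PR switches to; the normalization shown is illustrative:
```go
package main

import (
	"fmt"
	"io/ioutil"
	"strings"
)

// vmUUID reads the serial exposed by the DMI tables; on vSphere guests this
// file embeds the VM's UUID.
func vmUUID() (string, error) {
	raw, err := ioutil.ReadFile("/sys/class/dmi/id/product_serial")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(raw)), nil
}

func main() {
	id, err := vmUUID()
	fmt.Println(id, err)
}
```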
**Which issue this PR fixes**
fixes #
**Special notes for your reviewer**:
Verified the UUID matches the VM UUID on Ubuntu 16.04, CentOS 7.3, and Photon OS.
@BaluDontu @tusharnt
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43006, 45305, 45390, 45412, 45392)
[GCE] Collect latency metric on get/list calls
**What this PR does / why we need it**:
Collects latency & count measurements on GET and LIST operations to GCE cloud.
**Release note**:
```release-note
NONE
```
It seems that intention of original code was to build map of missing
tags and call AWS API to add just them, but due to typo full
set of tags was always (re)added
Automatic merge from submit-queue
Start recording cloud provider metrics for AWS
**What this PR does / why we need it**:
This PR implements support for emitting metrics from AWS about storage operations.
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/features/issues/182
**Release note**:
```
Add support for emitting metrics from AWS cloudprovider about storage operations.
```
Automatic merge from submit-queue (batch tested with PRs 44124, 44510)
Add metrics to all major gce operations (latency, errors)
```release-note
Add metrics to all major gce operations {latency, errors}
The new metrics are:
cloudprovider_gce_api_request_duration_seconds{request, region, zone}
cloudprovider_gce_api_request_errors{request, region, zone}
`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)
Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
operation.
- Metrics label tuple would have resulted in many independent
histograms stored, one for each disk. (Did not aggregate well).
```
The new metrics are:
cloudprovider_gce_api_request_duration_seconds{request, region, zone}
cloudprovider_gce_api_request_errors{request, region, zone}
`request` is the specific function that is used.
`region` is the target region (Will be "<n/a>" if not applicable)
`zone` is the target zone (Will be "<n/a>" if not applicable)
Note: this fixes some issues with the previous implementation of
metrics for disks:
- Time duration tracked was of the initial API call, not the entire
operation.
- Metrics label tuple would have resulted in many independent
histograms stored, one for each disk. (Did not aggregate well).
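A hedged sketch of how the two metrics named above could be declared and recorded with Prometheus; timing the whole operation (not just the initial API call) matches the note about the previous implementation:
```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	apiDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "cloudprovider_gce_api_request_duration_seconds",
			Help: "Latency of a GCE API call, measured over the whole operation.",
		},
		[]string{"request", "region", "zone"},
	)
	apiErrors = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "cloudprovider_gce_api_request_errors",
			Help: "Number of errors for a GCE API call.",
		},
		[]string{"request", "region", "zone"},
	)
)

// observe times the entire operation and bumps the error counter on failure.
func observe(request, region, zone string, op func() error) error {
	start := time.Now()
	err := op()
	apiDuration.WithLabelValues(request, region, zone).Observe(time.Since(start).Seconds())
	if err != nil {
		apiErrors.WithLabelValues(request, region, zone).Inc()
	}
	return err
}

func main() {
	prometheus.MustRegister(apiDuration, apiErrors)
	_ = observe("disk_insert", "us-central1", "us-central1-a", func() error {
		time.Sleep(10 * time.Millisecond)
		return nil
	})
}
```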
Automatic merge from submit-queue
Use provided VipPortID for OpenStack LB
**What this PR does / why we need it**:
When creating an OpenStack LoadBalancer, Kubernetes will search through the tenant trying to match the LB's VIP with a port. This is problematic because multiple ports may have the same fixed IP, therefore leading to routing inconsistencies. We should use the port ID provided by the LB's response body instead.
**Which issue this PR fixes**:
https://github.com/kubernetes/kubernetes/issues/43909
**Special notes for your reviewer**:
Since this involves non-deterministic testing, it'd be best if we can run this in a staging environment for a few days before merging (say until early next week).
**Release note**:
```release-note
Fixes an issue during LB creation where ports were incorrectly assigned to a floating IP
```
Automatic merge from submit-queue
cinder: Add support for the KVM virtio-scsi driver
**What this PR does / why we need it**:
The VirtIO SCSI driver for KVM changes the way disks appear in /dev/disk/by-id.
This adds support for the new format.
Without this, volume attaching on an OpenStack cluster using this KVM driver doesn't work.
**Special notes for your reviewer**:
Does this need e2e tests? I couldn't find anywhere to add another openstack configuration used in the e2e tests.
Wiki page about this: https://wiki.openstack.org/wiki/Virtio-scsi-for-bdm
**Release note**:
```release-note
cinder: Add support for the KVM virtio-scsi driver
```
Automatic merge from submit-queue (batch tested with PRs 44594, 44651)
remove strings.compare(), use string native operation
I noticed we use strings.Compare() in some code; we can remove it and use the native comparison operators.
Automatic merge from submit-queue (batch tested with PRs 44555, 44238)
openstack: remove field flavor_to_resource
I believe there is no usage of `flavor_to_resource`, and I think there is no need to build that information either.
cc @anguslees
**Release note:**
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Fix panic when using `kubeadm init` with vsphere cloud-provider
**What this PR does / why we need it**:
Check if the reference is nil when finding machine reference by UUID.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44603
**Special notes for your reviewer**:
This is just a quick fix for the panic.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Implement LRU for AWS device allocator
On failure to attach, do not use the device from the pool.
In an AWS environment, when an attach fails on the node,
let's not reuse that device from the pool. This makes sure
that a bigger pool of devices is available.
I changed the function signature to contain protocol, port, and path.
When the service has a health check path and port set it will create an
HTTP health check that corresponds to the port and path. If those are
not set it will create a standard TCP health check on the first port
from the listeners that is not nil. As far as I know, there is no way to
tell if a Health Check should be HTTP vs HTTPS.
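A sketch of assembling a classic ELB health-check target from the protocol, port, and path mentioned above, using the `PROTOCOL:port/path` target format; the helper itself is illustrative:
```go
package main

import "fmt"

// healthCheckTarget builds a classic ELB health-check target string:
// "HTTP:<port><path>" (or HTTPS) when a path is configured, "TCP:<port>" otherwise.
func healthCheckTarget(protocol string, port int32, path string) string {
	if protocol == "HTTP" || protocol == "HTTPS" {
		return fmt.Sprintf("%s:%d%s", protocol, port, path)
	}
	return fmt.Sprintf("TCP:%d", port)
}

func main() {
	fmt.Println(healthCheckTarget("HTTP", 32478, "/healthz")) // HTTP:32478/healthz
	fmt.Println(healthCheckTarget("TCP", 31530, ""))          // TCP:31530
}
```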
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)
Add support for Azure internal load balancer
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901
**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer
Currently when exposing a service with the LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it will request a public IP address resource, and expose the service via that public IP.
In this case we're not able to apply private IP addresses (within the cluster virtual network) for the service.
**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
2. For a single Azure LoadBalancer, all frontend IPs should reference either a subnet or a publicIPAddress, which means that it can be either an Internet-facing load balancer or an internal one.
For the current provider, it would create an Azure LoadBalancer with a generated name '${loadBalancerName}' for all services of type 'LoadBalancer'.
This PR introduces name '${loadBalancerName}-internal' for a separate Azure Load Balancer resource, used by all the service that requires internal load balancers.
3. This PR introduces a new annotation for the internal load balancer type behaviour:
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming that the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rule applies depending on the 'loadBalancerIP' field on the ServiceSpec:
- If 'loadBalancerIP' is not set, it will create a load balancer rule with a dynamically assigned frontend IP under the cluster subnet;
- If 'loadBalancerIP' is set, it will create a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall into the cluster subnet range, then the creation will fail.
4. Users may change the load balancer type by applying the annotation to the service at runtime.
In this case, the load balancer rule would need to be 'switched' between the internal one and the external one.
For example, if we have a service with an internal load balancer and the user then removes the annotation, making it a public one, then before creating rules in the public Azure LoadBalancer we'll need to clean up the rules in the internal Azure LoadBalancer.
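An illustrative sketch of the annotation-driven behaviour described in points 3 and 4; the annotation key shown is an assumption for this sketch, not necessarily the key the PR defines:
```go
package main

import "fmt"

// Assumed annotation key for this sketch only.
const internalLBAnnotation = "service.beta.kubernetes.io/azure-load-balancer-internal"

// wantsInternalLB: "false" or an absent annotation falls back to a public LB.
func wantsInternalLB(annotations map[string]string) bool {
	return annotations[internalLBAnnotation] == "true"
}

// lbNameForService picks the separate "-internal" Azure Load Balancer resource
// for services that request an internal load balancer.
func lbNameForService(clusterLBName string, annotations map[string]string) string {
	if wantsInternalLB(annotations) {
		return clusterLBName + "-internal"
	}
	return clusterLBName
}

func main() {
	fmt.Println(lbNameForService("kubernetes", map[string]string{internalLBAnnotation: "true"})) // kubernetes-internal
	fmt.Println(lbNameForService("kubernetes", nil))                                             // kubernetes
}
```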
**Release note**:
Automatic merge from submit-queue
Adding load balancer src cidrs to GCE cloudprovider
**What this PR does / why we need it**:
As of January 31st, 2018, GCP will be sending health checks and L7 traffic from two CIDRs and legacy health checks from three CIDRs. This PR moves them into the cloudprovider package and provides a flag for override.
Another PR will need to address firewall rule creation for external L4 network loadbalancing #40778
**Which issue this PR fixes**
Step one of #40778
Step one of https://github.com/kubernetes/ingress/issues/197
**Release note**:
```release-note
Add flags to GCE cloud provider to override known L4/L7 proxy & health check source cidrs
```
Automatic merge from submit-queue
Add support for IP aliases for pod IPs (GCP alpha feature)
```release-note
Adds support for allocation of pod IPs via IP aliases.
# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.
## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh
# Adds CloudAllocator to the node CIDR allocator (kubernetes-controller manager).
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider and the node controller is
only responsible for reflecting the allocation into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider and the node controller is
only responsible for reflecting the allocation into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
Automatic merge from submit-queue (batch tested with PRs 43545, 44293, 44221, 43888)
Remove credentials on worker nodes for vSphere cloud provider.
**What this PR does / why we need it**:
Remove the dependency of login information on worker nodes for vsphere cloud provider:
1. VM Name is required to be set in the cloud provider configuration file.
2. Remove the requirement of login for Instance functions when querying local node information.
**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/35339
**Release note**:
Automatic merge from submit-queue
Specify subnetid and routetableid via cloud provider config
**What this PR does / why we need it**:
This is a fix for https://github.com/kubernetes/kubernetes/pull/39996 which is needed since 1.6
Changes introduced in 1.6 partially broke (for LoadBalancer) the support for running the master components in a different environment (different AWS account / on premises). This PR adds support for specifying the Subnet & RouteTable to use via the cloud provider config.
**Release note**:
```release-note
AWS cloud provider: fix support running the master with a different AWS account or even on a different cloud provider than the nodes.
```
Automatic merge from submit-queue (batch tested with PRs 43925, 42512)
AWS: add KubernetesClusterID as additional option when VPC is set
This is a small enhancement after the PRs https://github.com/kubernetes/kubernetes/pull/41695 and https://github.com/kubernetes/kubernetes/pull/39996
## Release Notes
```release-note
AWS cloud provider: allow setting KubernetesClusterID or KubernetesClusterTag in combination with VPC.
```
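For both of the AWS config changes above, here is a hedged sketch of how such cloud-config options can be declared and read with a gcfg-style INI section; the struct below is an illustrative subset and may not match the provider's exact schema:
```go
package main

import (
	"fmt"

	gcfg "gopkg.in/gcfg.v1"
)

// awsCloudConfig is an illustrative subset of the cloud-config options the PRs
// above describe; the real provider's struct has more fields.
type awsCloudConfig struct {
	Global struct {
		KubernetesClusterID  string
		KubernetesClusterTag string
		VPC                  string
		SubnetID             string
		RouteTableID         string
	}
}

// Example config with made-up IDs, as it might be passed via --cloud-config.
const exampleConfig = `
[Global]
KubernetesClusterID = my-cluster
VPC = vpc-0123456789abcdef0
SubnetID = subnet-0123456789abcdef0
RouteTableID = rtb-0123456789abcdef0
`

func main() {
	var cfg awsCloudConfig
	if err := gcfg.ReadStringInto(&cfg, exampleConfig); err != nil {
		panic(err)
	}
	fmt.Printf("cluster %q will use subnet %q and route table %q\n",
		cfg.Global.KubernetesClusterID, cfg.Global.SubnetID, cfg.Global.RouteTableID)
}
```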
Automatic merge from submit-queue
Implement API usage metrics for gce storage
**What this PR does / why we need it**:
This PR implements support for emitting metrics from GCE about storage operations.
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/features/issues/182
**Release note**:
```
Add support for emitting metrics from GCE cloudprovider about storage operations.
```
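A small sketch of the usual pattern behind such metrics: time each storage API call and record it in a Prometheus histogram labelled by operation. The metric name, labels and wrapper below are assumptions for illustration:
```go
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical metric; the real provider chooses its own name, labels and buckets.
var gceAPIDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "cloudprovider_gce_api_request_duration_seconds",
		Help: "Latency of GCE storage API calls, by operation.",
	},
	[]string{"operation"},
)

func init() {
	prometheus.MustRegister(gceAPIDuration)
}

// observeCall wraps a cloud API call and records how long it took.
func observeCall(operation string, call func() error) error {
	start := time.Now()
	err := call()
	gceAPIDuration.WithLabelValues(operation).Observe(time.Since(start).Seconds())
	return err
}

func main() {
	_ = observeCall("disk_insert", func() error {
		time.Sleep(10 * time.Millisecond) // stand-in for the real compute API call
		return nil
	})
	fmt.Println("recorded one disk_insert observation")
}
```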
Automatic merge from submit-queue (batch tested with PRs 42617, 43247, 43509, 43644, 43820)
[GCE] Support legacy-https and generic health checks
**What this PR does / why we need it**:
- Adds CRUD functions to manage `compute.HttpsHealthChecks`
The legacy HTTPS healthchecks will be used by the GLBC (GCE Load balancer Controller)
- Adds CRUD functions to manage `compute.HealthChecks`
These are required for the internal load balancer
- Removes the logic that disregards NotFound errors on DeleteHttpHealthChecks as this is useful information for callers. Here are the three known invocations within kubernetes:
[gce/gce_loadbalancer.go#L457](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L457)): Only prints warning that HC wasn't deleted -> acceptable
[gce/gce_loadbalancer.go#L465](bc6e77d42f/pkg/cloudprovider/providers/gce/gce_loadbalancer.go (L465)): Err is ignored if not nil -> acceptable
[e2e/framework/ingress_utils.go#L530](bc6e77d42f/test/e2e/framework/ingress_utils.go (L530)): Already checks if is NotFound error -> acceptable
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Step one of https://github.com/kubernetes/ingress/issues/494
Step one of #33483
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
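Because delete no longer swallows NotFound, callers make that decision themselves. A tiny hedged sketch of the caller-side pattern (the isNotFound helper and delete stub are placeholders, not the provider's real functions):
```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the 404 error the real compute client returns.
var errNotFound = errors.New("httpsHealthCheck not found")

func isNotFound(err error) bool { return err == errNotFound }

// deleteHTTPSHealthCheck is a placeholder for the provider's delete call,
// which now reports NotFound instead of silently ignoring it.
func deleteHTTPSHealthCheck(name string) error {
	return errNotFound
}

// cleanupHealthCheck shows the caller-side decision described above: a missing
// health check during teardown is acceptable, any other error is not.
func cleanupHealthCheck(name string) error {
	if err := deleteHTTPSHealthCheck(name); err != nil && !isNotFound(err) {
		return fmt.Errorf("deleting health check %q: %v", name, err)
	}
	return nil
}

func main() {
	fmt.Println("cleanup error:", cleanupHealthCheck("k8s-hc-example"))
}
```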
The cloudprovider is being refactored out of kubernetes core. This is being
done by moving all the cloud-specific calls from kube-apiserver, kubelet and
kube-controller-manager into a separately maintained binary (by vendors) called
cloud-controller-manager. The kubelet relies on the cloudprovider to detect information
about the node that it is running on. Some of the cloudproviders worked by
querying local sources to obtain this information. In the new world of things,
local information cannot be relied on, since cloud-controller-manager will not
run on every node; only one active instance of it will be run in the cluster.
Today, all calls to the cloudprovider are based on the nodename. Nodenames are
unique within the kubernetes cluster, but generally not unique within the cloud.
This model of addressing nodes by nodename will not work in the future because
local services cannot be queried to uniquely identify a node in the cloud. Therefore,
I propose that we perform all cloudprovider calls based on ProviderID. This is
a unique identifier for the node in the cloud provider's external database (such as
the instance ID in AWS).
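For context, a ProviderID conventionally looks like `<provider>://<provider-specific-path>`, e.g. `aws:///us-west-2a/i-0123456789abcdef0` or `gce://my-project/us-central1-b/my-node`. A small hedged sketch of splitting one back apart; the helper is illustrative, not part of the proposal itself:
```go
package main

import (
	"fmt"
	"strings"
)

// splitProviderID separates the cloud provider name from the provider-specific
// node identifier in a Node's ProviderID.
func splitProviderID(providerID string) (provider, id string, err error) {
	parts := strings.SplitN(providerID, "://", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("unexpected ProviderID format %q", providerID)
	}
	return parts[0], parts[1], nil
}

func main() {
	// Example AWS-style ProviderID: availability zone plus instance ID.
	provider, id, err := splitProviderID("aws:///us-west-2a/i-0123456789abcdef0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("provider=%s cloud-specific id=%s\n", provider, id)
}
```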
Automatic merge from submit-queue (batch tested with PRs 42835, 42974)
VSAN policy support for storage volume provisioning inside kubernetes
vSphere users will have the ability to specify custom Virtual SAN storage capabilities during dynamic volume provisioning. You can now define storage requirements, such as performance and availability, in the form of storage capabilities during dynamic volume provisioning. The storage capability requirements are converted into a Virtual SAN policy, which is then pushed down to the Virtual SAN layer when a storage volume (virtual disk) is being created. The virtual disk is distributed across the Virtual SAN datastore to meet the requirements.
For example, User creates a storage class with VSAN storage capabilities:
> kind: StorageClass
> apiVersion: storage.k8s.io/v1beta1
> metadata:
>   name: slow
> provisioner: kubernetes.io/vsphere-volume
> parameters:
>   hostFailuresToTolerate: "2"
>   diskStripes: "1"
>   cacheReservation: "20"
>   datastore: VSANDatastore
The vSphere Cloud provider provisions a virtual disk (VMDK) on VSAN with the policy configured to the disk.
When you know the storage requirements of the application being deployed in a container, you can specify these storage capabilities when you create a storage class inside Kubernetes.
@pdhamdhere @tthole @abrarshivani @divyenpatel
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 43681, 40423, 43562, 43008, 43381)
Openstack cinder v1/v2/auto API support
**What this PR does / why we need it**:
It adds support for the v2 cinder API plus autodetection of the available cinder API level (as is already done for load balancers).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39572
**Special notes for your reviewer**:
Based on work by @anguslees. The first two commits are just rebased from https://github.com/kubernetes/kubernetes/pull/36344 which already had a lgtm by @jbeda
**Release note**:
```
Add support for v2 cinder API for openstack cloud provider. By default it autodetects the available version.
```
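A hedged sketch of the autodetection idea: prefer the v2 block-storage endpoint when the service catalog advertises it, otherwise fall back to v1. The selection helper and its inputs are illustrative assumptions:
```go
package main

import "fmt"

// catalogHasService is a stand-in for querying the Keystone service catalog.
type catalogHasService func(serviceType string) bool

// pickCinderAPIVersion mirrors the "auto" behaviour described above when the
// block-storage API version is not pinned explicitly in the cloud config.
func pickCinderAPIVersion(configured string, has catalogHasService) (string, error) {
	switch configured {
	case "v1", "v2":
		return configured, nil
	case "", "auto":
		if has("volumev2") {
			return "v2", nil
		}
		if has("volume") {
			return "v1", nil
		}
		return "", fmt.Errorf("no block-storage endpoint found in the service catalog")
	}
	return "", fmt.Errorf("unsupported block-storage API version %q", configured)
}

func main() {
	version, err := pickCinderAPIVersion("auto", func(serviceType string) bool {
		return serviceType == "volumev2" // pretend the catalog only advertises v2
	})
	fmt.Println(version, err)
}
```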
Automatic merge from submit-queue (batch tested with PRs 43642, 43170, 41813, 42170, 41581)
Enable storage class support in Azure File volume
**What this PR does / why we need it**:
Support StorageClass in Azure file volume
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support StorageClass in Azure file volume
```
Automatic merge from submit-queue (batch tested with PRs 42237, 42297, 42279, 42436, 42551)
should replace errors.New(fmt.Sprintf(...)) with fmt.Errorf(...)
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
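The replacement is mechanical; a small before/after illustration with made-up message text:
```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	name := "example"

	// Before: formats the string first, then wraps it in an error value.
	errOld := errors.New(fmt.Sprintf("instance %q not found", name))

	// After: fmt.Errorf does the formatting and error construction in one step.
	errNew := fmt.Errorf("instance %q not found", name)

	fmt.Println(errOld, "|", errNew)
}
```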
Automatic merge from submit-queue (batch tested with PRs 41139, 41186, 38882, 37698, 42034)
Add support for bring-your-own ip address for Services on Azure
@colemickens @codablock
Automatic merge from submit-queue
Add approvers to the aws OWNERS file
Without this it was picking up reviewers from a much higher directory.
```release-note
NONE
```
Each major interface is now in its own file. Any package-private
functions that are only referenced by a particular module were also moved
to the corresponding file. All common helper functions were moved to
gce_util.go.
This change is a pure movement of code; no semantic changes were made.
Automatic merge from submit-queue
Fix adding disks to more than one scsi adapter. Fixes #42399
**What this PR does / why we need it**: Allows a single node to use more than 16 disks.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42399
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix adding disks to more than one scsi adapter.
```
Automatic merge from submit-queue (batch tested with PRs 42452, 43399)
Modify getInstanceByName to avoid calling getInstancesByNames
This PR modifies getInstanceByName to loop through all management zones
directly instead of calling getInstancesByNames. Currently
getInstancesByNames uses a node name prefix as a filter to list the
instances. If the prefix does not match, it returns all instances,
which is very wasteful since getInstanceByName queries only one instance
with a specific name.
Partially fixes issue #42445
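A hedged sketch of the described approach: query the one named instance in each managed zone directly instead of listing instances by name prefix. The types and the per-zone lookup below are stand-ins for the GCE provider's internals:
```go
package main

import (
	"errors"
	"fmt"
)

var errInstanceNotFound = errors.New("instance not found")

// instance and the per-zone getter below stand in for the provider's types.
type instance struct {
	Name, Zone string
}

// getInstanceFromProject would issue a single instances.get call for one zone
// in the real provider; here it is faked with a map.
func getInstanceFromProject(zone, name string, fake map[string]map[string]instance) (*instance, error) {
	if inst, ok := fake[zone][name]; ok {
		return &inst, nil
	}
	return nil, errInstanceNotFound
}

// getInstanceByName loops over all managed zones and returns the first match,
// avoiding the prefix-filtered listing of every instance in the project.
func getInstanceByName(name string, zones []string, fake map[string]map[string]instance) (*instance, error) {
	for _, zone := range zones {
		inst, err := getInstanceFromProject(zone, name, fake)
		if err == errInstanceNotFound {
			continue // not in this zone, try the next one
		}
		if err != nil {
			return nil, err
		}
		return inst, nil
	}
	return nil, errInstanceNotFound
}

func main() {
	fake := map[string]map[string]instance{
		"us-central1-b": {"node-1": {Name: "node-1", Zone: "us-central1-b"}},
	}
	inst, err := getInstanceByName("node-1", []string{"us-central1-a", "us-central1-b"}, fake)
	fmt.Println(inst, err)
}
```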