Individual implementations have not yet been moved.
Fixed all dependencies that call the interface.
Fixed golint exceptions to reflect the move.
Added project info as per @dims and
https://github.com/kubernetes/kubernetes-template-project.
Added dims to the security contacts.
Fixed minor issues.
Added missing template files.
Copied the ControllerClientBuilder interface to the cloud provider package.
This allows us to break the only dependency on k8s.io/kubernetes.
Added a TODO to ControllerClientBuilder.
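For context, a minimal sketch of the copied interface, with signatures paraphrased from the controller client builder of this era (treat the exact method set as an assumption):

```go
// Package cloudprovider holds the copied builder interface (sketch).
package cloudprovider

import (
	clientset "k8s.io/client-go/kubernetes"
	restclient "k8s.io/client-go/rest"
)

// ControllerClientBuilder lets a controller obtain a client for a named
// service account instead of sharing one privileged client.
type ControllerClientBuilder interface {
	Config(name string) (*restclient.Config, error)
	ConfigOrDie(name string) *restclient.Config
	Client(name string) (clientset.Interface, error)
	ClientOrDie(name string) clientset.Interface
}
```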
Fixed Godeps.
Factored in feedback from JustinSB.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Implement InstanceShutdownByProviderID for Azure
**What this PR does / why we need it**: implements #66265
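A minimal sketch of what the new method looks like, assuming a `Cloud` provider struct and a hypothetical `getPowerState` helper; the real implementation reads the VM's power state from the Azure compute API:

```go
package azure

import (
	"context"
	"strings"
)

// Cloud stands in for the Azure provider struct (illustrative).
type Cloud struct{}

// getPowerState is a hypothetical helper; the real provider reads the
// VM's instance view from the Azure compute API.
func (az *Cloud) getPowerState(ctx context.Context, providerID string) (string, error) {
	return "PowerState/deallocated", nil // placeholder
}

// InstanceShutdownByProviderID reports whether the node's VM is stopped
// or deallocated, letting the node controller apply the shutdown taint.
func (az *Cloud) InstanceShutdownByProviderID(ctx context.Context, providerID string) (bool, error) {
	state, err := az.getPowerState(ctx, providerID)
	if err != nil {
		return false, err
	}
	return strings.HasSuffix(state, "stopped") || strings.HasSuffix(state, "deallocated"), nil
}
```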
**Which issue(s) this PR fixes**: Fixes #66265
**Special notes for your reviewer**:
**Release note**:
```release-note
Support NodeShutdown taint for Azure
```
Automatic merge from submit-queue (batch tested with PRs 67986, 68210, 67817). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix panic when processing http response
**What this PR does / why we need it**:
When the Azure ARM API gets something wrong, kube-controller-manager may panic inside the Azure cloud provider:
```
/usr/local/go/src/runtime/asm_amd64.s:2361
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1d4cad9]
goroutine 1386 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x44468c0, 0x8b76a30)
/usr/local/go/src/runtime/panic.go:502 +0x229
k8s.io/kubernetes/pkg/cloudprovider/providers/azure.processHTTPRetryResponse(0x0, 0x64ffec0, 0xc4229fd1f0, 0xc422ed05b0, 0x2, 0x2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/azure/azure_backoff.go:364 +0x69
k8s.io/kubernetes/pkg/cloudprovider/providers/azure.(*Cloud).CreateOrUpdatePIPWithRetry.func1(0xc422ed0600, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/azure/azure_backoff.go:205 +0x298
```
This PR fixes that.
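The fix is a nil guard before the response is dereferenced. A minimal sketch, assuming a signature like the one in the trace above (names paraphrased, not the exact upstream code):

```go
package azure

import "net/http"

// processHTTPRetryResponse decides whether a retried ARM call is done.
// Sketch only: the guard below mirrors the kind of nil check this PR
// adds; the real upstream signature may differ.
func processHTTPRetryResponse(resp *http.Response, err error) (bool, error) {
	if resp == nil {
		// Transport-level failure: there is no response to inspect;
		// dereferencing resp here is what caused the panic above.
		return false, err
	}
	if resp.StatusCode >= http.StatusOK && resp.StatusCode < http.StatusMultipleChoices {
		return true, nil // 2xx: done, stop retrying
	}
	return false, err // non-2xx: let the backoff loop retry
}
```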
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #68209
**Special notes for your reviewer**:
Should be cherry-picked to older releases.
**Release note**:
```release-note
Fix panic when processing Azure HTTP response.
```
Automatic merge from submit-queue (batch tested with PRs 67986, 68210, 67817). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add mixed protocol support for Azure load balancer
**What this PR does / why we need it**:
If a user specifies `service.beta.kubernetes.io/azure-load-balancer-mixed-protocols: "true"`, the Azure cloud provider will create both TCP and UDP load balancer rules. For more details, refer to https://github.com/kubernetes/kubernetes/issues/66887.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #66887
**Special notes for your reviewer**:
The original `reconcileLoadBalancer` func was too big, so part of the implementation is moved to a standalone func `createLoadBalancerRule`; a sketch of the protocol expansion follows the example config below.
Example service config:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-mixed-protocols: "true"
  name: web
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  sessionAffinity: None
  type: LoadBalancer
```
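A minimal sketch of the protocol expansion the annotation triggers, using plain strings instead of the Azure SDK types the real `createLoadBalancerRule` works with:

```go
package main

import "fmt"

// Annotation key from the PR description.
const mixedProtocolsAnnotation = "service.beta.kubernetes.io/azure-load-balancer-mixed-protocols"

// lbProtocols returns the transport protocols to create LB rules for.
// Illustrative only: one rule per protocol when mixed mode is on.
func lbProtocols(annotations map[string]string, portProtocol string) []string {
	if annotations[mixedProtocolsAnnotation] == "true" {
		return []string{"Tcp", "Udp"}
	}
	return []string{portProtocol}
}

func main() {
	anns := map[string]string{mixedProtocolsAnnotation: "true"}
	fmt.Println(lbProtocols(anns, "Tcp")) // [Tcp Udp]
}
```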
**Release note**:
```release-note
Add mixed protocol support for Azure load balancer
```
/kind feature
/sig azure
/assign @feiskyer @khenidak
On-prem nodes should register themselves with the required labels, e.g.
`kubelet --node-labels=alpha.service-controller.kubernetes.io/exclude-balancer=true,kubernetes.azure.com/managed=false ...`
Automatic merge from submit-queue (batch tested with PRs 67694, 64973, 67902). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
SCTP support implementation for Kubernetes
**What this PR does / why we need it**: This PR adds SCTP support to Kubernetes, including Service, Endpoint, and NetworkPolicy.
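For illustration, a minimal sketch of a Service using the new protocol value via the client-go API types (name, selector, and port are arbitrary):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sctpService builds a Service that uses the new alpha protocol value.
func sctpService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "sctp-echo"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "sctp-echo"},
			Ports: []corev1.ServicePort{{
				Port:     7777,
				Protocol: corev1.ProtocolSCTP, // alongside ProtocolTCP and ProtocolUDP
			}},
		},
	}
}

func main() {
	fmt.Println(sctpService().Spec.Ports[0].Protocol) // SCTP
}
```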
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #44485
**Special notes for your reviewer**:
**Release note**:
```release-note
SCTP is now supported as an additional protocol (alpha) alongside TCP and UDP in Pod, Service, Endpoint, and NetworkPolicy.
```
Automatic merge from submit-queue (batch tested with PRs 67766, 67642, 67772). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Enable dynamic azure disk volume limits
**What this PR does / why we need it**:
Enable dynamic Azure disk volume limits.
This is the Azure cloud provider implementation for the feature [Dynamic Maximum volume count](https://github.com/kubernetes/features/issues/554).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #66269
**Special notes for your reviewer**:
This PR uses `az.VirtualMachineSizesClient.List` to list all VM sizes in the region, matches the current node's size, and reads its `MaxDataDiskCount`. `GetVolumeLimits` runs in kubelet and reports `attachable-volumes-azure-disk` in the node status, as in the following example; a sketch of the size lookup follows the output:
```
agentpool-22082114-0
...
allocatable:
  attachable-volumes-azure-disk: "8"
  cpu: "2"
  ephemeral-storage: "28043041951"
  hugepages-1Gi: "0"
  hugepages-2Mi: "0"
  memory: 7034772Ki
  pods: "30"
```
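A minimal sketch of the size lookup described above, using a stand-in for the Azure SDK's VirtualMachineSize type:

```go
package main

import (
	"fmt"
	"strings"
)

// vmSize mirrors the fields needed from the Azure VirtualMachineSize
// type (sketch; the real type lives in the Azure SDK compute package).
type vmSize struct {
	Name             string
	MaxDataDiskCount int32
}

// maxDataDisksFor scans the sizes listed for the region and returns the
// disk limit for the node's own size, as the PR describes.
func maxDataDisksFor(sizes []vmSize, nodeSize string) (int32, bool) {
	for _, s := range sizes {
		if strings.EqualFold(s.Name, nodeSize) {
			return s.MaxDataDiskCount, true
		}
	}
	return 0, false // unknown size: caller falls back to a default limit
}

func main() {
	sizes := []vmSize{{Name: "Standard_D2_v2", MaxDataDiskCount: 8}}
	fmt.Println(maxDataDisksFor(sizes, "standard_d2_v2")) // 8 true
}
```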
**Release note**:
```release-note
Enable dynamic Azure disk volume limits
```
/sig azure
/kind feature
The requested Service Protocol is checked against the supported protocols of GCE Internal LB. The supported protocols are TCP and UDP.
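A minimal sketch of that validation, with an illustrative function name and error text:

```go
package main

import "fmt"

// checkInternalLBProtocol mirrors the check described above.
func checkInternalLBProtocol(protocol string) error {
	switch protocol {
	case "TCP", "UDP":
		return nil // the only protocols the GCE internal LB supports
	default:
		return fmt.Errorf("protocol %q is not supported by the GCE internal load balancer, only TCP and UDP are", protocol)
	}
}

func main() {
	fmt.Println(checkInternalLBProtocol("SCTP")) // rejected
}
```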
SCTP is not supported by OpenStack LBaaS. If SCTP is requested in a Service with type=LoadBalancer, the request is rejected. Comment style is also corrected.
SCTP is not allowed for LoadBalancer Services or for HostPort. Kube-proxy can be configured not to start listening on the host port for SCTP: see the new SCTPUserSpaceNode parameter.
Changed the vendored github.com/nokia/sctp to github.com/ishidawataru/sctp, i.e. from now on we use the upstream version.
Fixed netexec.go compilation. Fixed various test cases.
SCTP-related conformance tests removed. Netexec's pod definition and Dockerfile are updated to expose the new SCTP port (8082).
SCTP-related e2e test cases are removed, as the e2e test systems do not support SCTP.
SCTP-related firewall config is removed from cluster/gce/util.sh. Variable name sctp_addr is corrected to sctpAddr in pkg/proxy/ipvs/proxier.go.
cluster/gce/util.sh is copied from master
Automatic merge from submit-queue (batch tested with PRs 66980, 67604, 67741, 67715). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add support of Azure cross resource group nodes
**What this PR does / why we need it**:
Part of the feature [Cross resource group nodes](https://github.com/kubernetes/features/issues/604).
This PR adds support for Azure cross resource group nodes that are labeled with `kubernetes.azure.com/resource-group=<rg-name>` and `alpha.service-controller.kubernetes.io/exclude-balancer=true`.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
See designs [here](https://github.com/kubernetes/community/pull/2479).
**Release note**:
```release-note
Azure cloud provider now supports cross resource group nodes that are labeled with `kubernetes.azure.com/resource-group=<rg-name>` and `alpha.service-controller.kubernetes.io/exclude-balancer=true`
```
/sig azure
/kind feature
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Reduce API calls for Azure instance metadata
**What this PR does / why we need it**:
The Azure cloud provider gets a lot of `"Too many requests"` errors when getting availability zones, instance types, and node addresses, so kubelet sometimes fails to initialize itself.
This PR reduces such calls and also switches to the JSON API, which is more stable.
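A minimal sketch of a single JSON metadata call, assuming the standard IMDS endpoint and `Metadata: true` header; treat the field set as an assumption about the schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// computeMetadata holds the few fields the provider needs; the field
// set here is an assumption about the IMDS JSON schema.
type computeMetadata struct {
	Location string `json:"location"`
	VMSize   string `json:"vmSize"`
}

// fetchInstanceMetadata makes a single JSON call to the Azure instance
// metadata service instead of one text call per field, which is what
// cuts the request volume described above.
func fetchInstanceMetadata() (*computeMetadata, error) {
	req, err := http.NewRequest("GET",
		"http://169.254.169.254/metadata/instance/compute?api-version=2017-08-01", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Metadata", "true") // required by IMDS
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var md computeMetadata
	if err := json.NewDecoder(resp.Body).Decode(&md); err != nil {
		return nil, err
	}
	return &md, nil
}

func main() {
	if md, err := fetchInstanceMetadata(); err == nil {
		fmt.Println(md.Location, md.VMSize)
	}
}
```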
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/Azure/acs-engine/issues/3681
**Special notes for your reviewer**:
**Release note**:
```release-note
Reduce API calls for Azure instance metadata.
```
cc @ritazh @khenidak @andyzhangx
Automatic merge from submit-queue (batch tested with PRs 66793, 67405, 67068, 67501, 67484). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Updated comment for DefaultLoadBalancerName to provide further context
**What this PR does / why we need it**:
Updates the comment for DefaultLoadBalancerName to provide better context and also as a reminder that it should eventually be removed.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add DynamicProvisioningScheduling and VolumeScheduling support for Azure managed disks
**What this PR does / why we need it**:
Continuation of the [Azure Availability Zone feature](https://github.com/kubernetes/features/issues/586).
This PR adds `VolumeScheduling` and `DynamicProvisioningScheduling` support to Azure managed disks.
When the feature gate `VolumeScheduling` is disabled, no NodeAffinity is set for the PV:
```yaml
kubectl describe pv
Name:            pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
Labels:          failure-domain.beta.kubernetes.io/region=southeastasia
                 failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/pvc-azuredisk
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [southeastasia-2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:          <none>
```
When the feature gate `VolumeScheduling` is enabled, NodeAffinity will be populated for the PV:
```yaml
kubectl describe pv
Name:            pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
Labels:          failure-domain.beta.kubernetes.io/region=southeastasia
                 failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/pvc-azuredisk
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [southeastasia-2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:          <none>
```
When both `VolumeScheduling` and `DynamicProvisioningScheduling` are enabled, the storage class also supports `allowedTopologies` and `volumeBindingMode: WaitForFirstConsumer` for topology-aware dynamic provisioning:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: managed-disk-dynamic
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - southeastasia-2
    - southeastasia-1
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
DynamicProvisioningScheduling and VolumeScheduling are now supported for Azure managed disks. Feature gates DynamicProvisioningScheduling and VolumeScheduling should be enabled before using this feature.
```
/kind feature
/sig azure
/cc @brendandburns @khenidak @andyzhangx
/cc @ddebroy @msau42 @justaugustus
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Get load balancer name per provider
**What this PR does / why we need it**:
GetLoadBalancerName() should be implemented per cloud provider, as opposed to having one neutral implementation.
This PR addresses this by moving `cloudprovider.GetLoadBalancerName()` to the `LoadBalancer` interface and providing an implementation for each cloud provider, while maintaining previously expected functionality.
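A minimal sketch of the resulting shape, with signatures paraphrased from the cloudprovider package of this era:

```go
package cloudprovider

import (
	"context"
	"strings"

	"k8s.io/api/core/v1"
)

// LoadBalancer gains a per-provider naming hook (other methods elided).
type LoadBalancer interface {
	GetLoadBalancerName(ctx context.Context, clusterName string, service *v1.Service) string
	// ...EnsureLoadBalancer, UpdateLoadBalancer, etc. elided...
}

// DefaultLoadBalancerName preserves the previous neutral behavior for
// providers that do not need anything special: "a" plus the service UID
// with dashes removed, truncated to 32 characters.
func DefaultLoadBalancerName(service *v1.Service) string {
	ret := "a" + string(service.UID)
	ret = strings.Replace(ret, "-", "", -1)
	if len(ret) > 32 {
		ret = ret[:32]
	}
	return ret
}
```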
**Which issue(s) this PR fixes**:
Fixes [#43173](https://github.com/kubernetes/kubernetes/issues/43173)
**Special notes for your reviewer**:
This is a work in progress. Looking for feedback as I work on this, from any interested parties.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Use func WaitForCompletionRef in place of deprecated func WaitForCompletion
**What this PR does / why we need it**:
Use func `WaitForCompletionRef` in place of the deprecated func `WaitForCompletion`:
```go
// WaitForCompletion will return when one of the following conditions is met: the long
// running operation has completed, the provided context is cancelled, or the client's
// polling duration has been exceeded. It will retry failed polling attempts based on
// the retry value defined in the client up to the maximum retry attempts.
// Deprecated: Please use WaitForCompletionRef() instead.
func (f Future) WaitForCompletion(ctx context.Context, client autorest.Client) error {
	return f.WaitForCompletionRef(ctx, client)
}
```
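A minimal sketch of the call-site change, using the go-autorest types:

```go
package azure

import (
	"context"

	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/azure"
)

// waitDone blocks until the long-running operation behind future
// finishes. Sketch of the call-site change this PR makes everywhere.
func waitDone(ctx context.Context, future *azure.Future, client autorest.Client) error {
	// Before: future.WaitForCompletion(ctx, client) // deprecated
	return future.WaitForCompletionRef(ctx, client)
}
```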
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```