Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Cloud Provider Zones doc fixups
**What this PR does / why we need it**:
A few godoc fixups for Cloud Provider Zones.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
`kubectl create {clusterrole,role}`'s `--resources` flag supports an asterisk to specify all resources
**What this PR does / why we need it**:
Currently, `kubectl create (cluster)role`'s `--resources` flag does not support an asterisk to specify all resources:
```
# kubectl create clusterrole superrole --verb=get --resource=*
the server doesn't have a resource type "*"
```
Users sometimes create a role with `--resources=*`, so this PR adds support for it.
Fixes https://github.com/kubernetes/kubernetes/issues/62989
**Special notes for your reviewer**:
- This patch does not support `--resource=*` for `SpecialVerbs` (e.g. `kubectl create role foo --verb=impersonate --resource=*`), because the current code also does not support `kubectl create role foo --verb=impersonate --resource=users,pods`. A sketch of the general handling follows below.
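For illustration, a minimal Go sketch of the shape of this special case (the types and parsing here are illustrative stand-ins, not the actual kubectl source):
```go
package main

import (
	"fmt"
	"strings"
)

// resourceEntry loosely mirrors what kubectl builds for each parsed
// --resource argument; the real type lives in the create-role command.
type resourceEntry struct {
	group    string
	resource string
}

// parseResources sketches the new behavior: a literal "*" is accepted
// as-is and skips the usual lookup against server discovery information.
func parseResources(args []string) []resourceEntry {
	var out []resourceEntry
	for _, arg := range args {
		if arg == "*" {
			// All resources: no discovery validation needed.
			out = append(out, resourceEntry{resource: "*"})
			continue
		}
		sections := strings.SplitN(arg, ".", 2)
		e := resourceEntry{resource: sections[0]}
		if len(sections) == 2 {
			e.group = sections[1]
		}
		// The real command validates e against the server's RESTMapper here.
		out = append(out, e)
	}
	return out
}

func main() {
	fmt.Printf("%+v\n", parseResources([]string{"*", "pods", "deployments.apps"}))
}
```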
**Release note**:
```release-note
`kubectl create {clusterrole,role}`'s `--resources` flag supports an asterisk to specify all resources.
```
Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Upgrade debian-base to 0.3.1 for CVEs
**What this PR does / why we need it**:
Upgrade debian-base to 0.3.1 in response to CVE fixes in debian-base
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
Bumps up the version number of related components.
**Release note**:
```release-note
Bump up version number of debian-base, debian-hyperkube-base and debian-iptables.
Also updates dependencies of users of debian-base.
debian-base version 0.3.1 is already available.
```
Automatic merge from submit-queue (batch tested with PRs 66987, 67035). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Multi-arch images for echoserver
Originally from:
https://github.com/kubernetes/ingress-nginx/tree/master/images/echoheaders
Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed for running
the e2e tests that use this container.
Change-Id: I15009268da4e7809a1c03d9af3181b585afa8139
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66987, 67035). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Multiarch manifest for volume-tester docker images
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/48376
**Special notes for your reviewer**:
@dims @luxas
Changes made:
- Removed the ceph folder, which is not used anymore and was merged into the rbd image
- Converted the following images to multi-arch:
```
volume/gluster
volume/iscsi
volume/nfs
volume/rbd
```
**Release note**:
```release-note
NONE
```
The BeforeEach step for cluster_size_autoscaling is skipped if
the provider is not gce or gke. The AfterEach step should also
be skipped, since nothing was done.
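A minimal Ginkgo-style sketch of the symmetry this change establishes (setup and teardown bodies are placeholders):
```go
package autoscaling

import (
	. "github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = Describe("Cluster size autoscaling", func() {
	BeforeEach(func() {
		framework.SkipUnlessProviderIs("gce", "gke")
		// ... provider-specific setup ...
	})

	AfterEach(func() {
		// Mirror the BeforeEach guard: if setup was skipped, there is
		// nothing to tear down, so teardown must be skipped too.
		framework.SkipUnlessProviderIs("gce", "gke")
		// ... provider-specific cleanup ...
	})
})
```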
Originally from:
https://github.com/GoogleCloudPlatform/k8s-metadata-proxy/tree/master/test
Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed so that
the metadata concealment e2e test can run on multiple architectures.
Change-Id: I15009268da4e7809a1c03d9af3181b585afa8139
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
GCE: Add OWNERS for image (gci) configuration
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Rename KubeConfigFile to Kubeconfig in ClientConnectionConfiguration
**What this PR does / why we need it**:
As discussed with @liggitt we should make the field name and JSON tag consistent, and we concluded `Kubeconfig` and `kubeconfig` is the most consistent naming we have (e.g. wrt `--kubeconfig`), so we're going with that naming for the `ClientConnectionConfiguration` struct. Also, this preserves backwards-compat wrt existing serialized configuration. This fixes the API violation.
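In sketch form (field shape inferred from the description above; other fields elided):
```go
package componentconfig

// ClientConnectionConfiguration (excerpt): the Go field name now matches
// the serialized name, and the JSON tag is unchanged on the wire.
type ClientConnectionConfiguration struct {
	// Kubeconfig was previously named KubeConfigFile; keeping the tag
	// "kubeconfig" preserves compatibility with existing serialized config.
	Kubeconfig string `json:"kubeconfig"`
	// ... other fields elided ...
}
```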
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: https://github.com/kubernetes/community/pull/2354
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @liggitt @sttts
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add DynamicProvisioningScheduling and VolumeScheduling support for Azure managed disks
**What this PR does / why we need it**:
Continue of [Azure Availability Zone feature](https://github.com/kubernetes/features/issues/586).
This PR adds `VolumeScheduling` and `DynamicProvisioningScheduling` support to Azure managed disks.
When the feature gate `VolumeScheduling` is disabled, no NodeAffinity is set on the PV:
```yaml
$ kubectl describe pv
Name:            pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
Labels:          failure-domain.beta.kubernetes.io/region=southeastasia
                 failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/pvc-azuredisk
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:          <none>
```
When the feature gate `VolumeScheduling` is enabled, NodeAffinity is populated on the PV:
```yaml
$ kubectl describe pv
Name:            pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
Labels:          failure-domain.beta.kubernetes.io/region=southeastasia
                 failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                 volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    default
Status:          Bound
Claim:           default/pvc-azuredisk
Reclaim Policy:  Delete
Access Modes:    RWO
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/region in [southeastasia]
             failure-domain.beta.kubernetes.io/zone in [southeastasia-2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:          <none>
```
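For illustration, a hedged Go sketch of how a zoned provisioner can populate that NodeAffinity on the PV (label keys and values are taken from the output above; the surrounding plumbing is assumed):
```go
package provisioner

import v1 "k8s.io/api/core/v1"

// setZonedNodeAffinity pins a dynamically provisioned PV to the zone the
// disk was created in, so the scheduler only places consuming pods there.
func setZonedNodeAffinity(pv *v1.PersistentVolume, region, zone string) {
	pv.Spec.NodeAffinity = &v1.VolumeNodeAffinity{
		Required: &v1.NodeSelector{
			NodeSelectorTerms: []v1.NodeSelectorTerm{{
				MatchExpressions: []v1.NodeSelectorRequirement{
					{Key: "failure-domain.beta.kubernetes.io/region", Operator: v1.NodeSelectorOpIn, Values: []string{region}},
					{Key: "failure-domain.beta.kubernetes.io/zone", Operator: v1.NodeSelectorOpIn, Values: []string{zone}},
				},
			}},
		},
	}
}
```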
When both `VolumeScheduling` and `DynamicProvisioningScheduling` are enabled, the storage class also supports `allowedTopologies` and `volumeBindingMode: WaitForFirstConsumer` for topology-aware dynamic provisioning:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: managed-disk-dynamic
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - southeastasia-2
    - southeastasia-1
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
DynamicProvisioningScheduling and VolumeScheduling are now supported for Azure managed disks. The feature gates DynamicProvisioningScheduling and VolumeScheduling should be enabled before using this feature.
```
/kind feature
/sig azure
/cc @brendandburns @khenidak @andyzhangx
/cc @ddebroy @msau42 @justaugustus
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Get load balancer name per provider
**What this PR does / why we need it**:
GetLoadBalancerName() should be implemented per cloud provider as opposed to one neutral implementation.
This PR will address this by moving `cloudprovider.GetLoadBalancerName()` to the `LoadBalancer interface` and then provide an implementation for each cloud provider, while maintaining previously expected functionality.
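Sketched as a Go interface excerpt (method set abridged; the signature shown is an assumption for illustration):
```go
package cloudprovider

import (
	"context"

	v1 "k8s.io/api/core/v1"
)

// LoadBalancer (excerpt): GetLoadBalancerName moves from a package-level
// helper onto the interface so each provider controls its own naming,
// while default implementations preserve the previous behavior.
type LoadBalancer interface {
	// ... existing methods elided ...

	// GetLoadBalancerName returns the name the provider uses for the
	// load balancer backing the given service.
	GetLoadBalancerName(ctx context.Context, clusterName string, service *v1.Service) string
}
```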
**Which issue(s) this PR fixes**:
Fixes [#43173](https://github.com/kubernetes/kubernetes/issues/43173)
**Special notes for your reviewer**:
This is a work in progress. Looking for feedback as I work on this, from any interested parties.
**Release note**:
```release-note
NONE
```
Manifests seem sticky in Docker, so let's try to purge them so that if
we re-push a fresh set of containers (with the same version number as
before) during testing, the manifests are created fresh.
Change-Id: I41c010c08bd50b68ff6973a4ae1e004824fab178
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
GCE: add a crictl test
This verifies that crictl is available on the node.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Trades runtime complexity for space complexity when modifying
large numbers of contexts on a kubeconfig.
In cases where there are few destination filenames but a large
number of contexts, this patch prevents reading and writing the
same file (or small number of files) over and over again needlessly.
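A schematic Go sketch of the batching (the types and helpers are hypothetical stand-ins, not the real kubeconfig API):
```go
package main

import "fmt"

// contextEdit is a hypothetical pending change to one named context.
type contextEdit struct {
	filename string // kubeconfig file the context lives in
	name     string // context to modify
}

// applyEdits groups edits by destination file so each file is loaded and
// written exactly once, instead of once per context.
func applyEdits(pending []contextEdit) {
	byFile := map[string][]contextEdit{}
	for _, e := range pending {
		byFile[e.filename] = append(byFile[e.filename], e)
	}
	for filename, batch := range byFile {
		// Real code: load the kubeconfig once, apply the whole batch,
		// then write the file back once.
		fmt.Printf("%s: applying %d edits in one read/write\n", filename, len(batch))
	}
}

func main() {
	applyEdits([]contextEdit{
		{filename: "config", name: "dev"},
		{filename: "config", name: "prod"},
	})
}
```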
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Check config path for command "kubeadm alpha phase kubelet write-env-file"
**What this PR does / why we need it**:
Explicitly check the `--config` flag of the command `kubeadm alpha phase kubelet write-env-file`.
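A hedged sketch of such a check (whether the real command requires the flag or merely validates a supplied path is a detail of the actual kubeadm code):
```go
package kubeletphase

import (
	"fmt"
	"os"
)

// checkConfigPath fails early with a clear error when the --config path
// is missing or unreadable, instead of surfacing a confusing error later.
func checkConfigPath(cfgPath string) error {
	if cfgPath == "" {
		return fmt.Errorf("the --config flag must be set for this command")
	}
	if _, err := os.Stat(cfgPath); err != nil {
		return fmt.Errorf("invalid --config path %q: %v", cfgPath, err)
	}
	return nil
}
```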
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubeadm/issues/1043
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Multi-arch images for apparmor-loader container
**What this PR does / why we need it**:
Originally from:
https://github.com/kubernetes/contrib/tree/master/apparmor/loader
Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed so that
the apparmor e2e test can run on multiple architectures.
Change-Id: Idece17c494fc944c0aaef64805d2f0e3c4d7fb28
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
refuse serviceaccount projection volume requests when the pod has no serviceaccount bound
**What this PR does / why we need it**:
Currently, if a user starts a cluster with the ServiceAccount admission plugin disabled and then creates a Pod
like this:
```
kind: Pod
apiVersion: v1
metadata:
  labels:
    run: nginx
  name: busybox2
spec:
  containers:
  - image: gcr.io/google-containers/nginx
    name: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: token
  - image: ubuntu
    name: ttt
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: token
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: tokenPath
          expirationSeconds: 6000
          audience: gakki-audiences
```
The pod creation will fail with events like the following:
Events:
```
Type     Reason       Age               From                Message
----     ------       ----              ----                -------
Normal   Scheduled    23s               default-scheduler   Successfully assigned office/busybox2 to 127.0.0.1
Warning  FailedMount  8s (x6 over 23s)  kubelet, 127.0.0.1  MountVolume.SetUp failed for volume "token" : failed to fetch token: resource name may not be empty
```
We should refuse the projection request earlier; this patch fixes that.
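A hedged sketch of the earlier check (function name and call site are illustrative):
```go
package validation

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// validateTokenProjection rejects a serviceAccountToken projected volume
// when the pod has no service account to issue a token for, so the pod
// fails fast instead of at volume mount time.
func validateTokenProjection(pod *v1.Pod, src *v1.ServiceAccountTokenProjection) error {
	if src != nil && pod.Spec.ServiceAccountName == "" {
		return fmt.Errorf("pod %q requests a service account token projection but has no service account", pod.Name)
	}
	return nil
}
```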
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 66394, 66888, 66932). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Include unavailable apiservices in discovery response
**What this PR does / why we need it**:
Include unavailable apiservices in the `apis/` discovery endpoint response to fix namespace deletion: https://github.com/kubernetes-incubator/service-catalog/issues/2254
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes-incubator/service-catalog/issues/2254
**Special notes for your reviewer**:
**Release note**:
```release-note
kube-apiserver now includes all registered API groups in discovery, including registered extension API group/versions for unavailable extension API servers.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Support pulling requestheader CA from extension-apiserver-authentication ConfigMap without client CA
This commit prevents the extension API server from erroring out during bootstrap when the core
API server doesn't support certificate-based authentication for its clients, i.e. client-ca isn't
present in the extension-apiserver-authentication ConfigMap in kube-system.
This can happen in cluster setups where the core API server uses webhook token authentication.
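Roughly, the tolerant read looks like this (a sketch; the surrounding wiring is assumed):
```go
package authconfig

import v1 "k8s.io/api/core/v1"

// readAuthConfigMap reads kube-system/extension-apiserver-authentication,
// tolerating an absent client-ca-file entry: clusters whose core API
// server uses webhook token authentication have no client CA to publish.
func readAuthConfigMap(cm *v1.ConfigMap) (clientCA, requestHeaderCA string) {
	clientCA = cm.Data["client-ca-file"] // may legitimately be empty
	requestHeaderCA = cm.Data["requestheader-client-ca-file"]
	return clientCA, requestHeaderCA
}
```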
Fixes: https://github.com/kubernetes/kubernetes/issues/65724
**Which issue(s) this PR fixes**
Fixes #65724
**Special notes for your reviewer**:
**Release note**:
```release-note
Allows the extension API server to dynamically discover the requestheader CA certificate when the core API server doesn't use certificate-based authentication for its clients
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
add watch integration test for dynamic client
**What this PR does / why we need it**:
Add watch to dynamic client integration test
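The gist of such a test, sketched against the dynamic client API of this era (resource and namespace are placeholders):
```go
package integration

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/dynamic"
)

// firstWatchEvent starts a watch through the dynamic client and returns
// the first event so the test can assert on its type and object.
func firstWatchEvent(client dynamic.Interface, gvr schema.GroupVersionResource, ns string) (watch.Event, error) {
	w, err := client.Resource(gvr).Namespace(ns).Watch(metav1.ListOptions{})
	if err != nil {
		return watch.Event{}, err
	}
	defer w.Stop()
	return <-w.ResultChan(), nil
}
```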
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
It takes a variable amount of time for the multipath daemon
to create /dev/dm-XX in response to new LUNs being discovered.
The old iscsi_util code only discovered the multipath device
if it was created quickly enough, but in a significant number
of cases, kubelet would grab one of the individual paths and
put a filesystem on it before multipathd could construct a
multipath device.
This change waits for the multipath device to get created for
up to 10 seconds, but only if the PV actually had more than
one portal.
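In outline, the wait looks something like this (a sketch; the device-resolution helper is a hypothetical stand-in):
```go
package iscsi

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// findMultipathDeviceForDevice stands in for the helper that maps an
// individual path (e.g. /dev/sdb) to its /dev/dm-XX multipath device.
func findMultipathDeviceForDevice(devicePath string) string {
	// Real code would consult sysfs or multipathd.
	return ""
}

// waitForMultipathDevice polls for up to 10 seconds for multipathd to
// assemble the multipath device, but only when the volume actually has
// more than one portal.
func waitForMultipathDevice(devicePath string, numPortals int) (string, error) {
	if numPortals <= 1 {
		return devicePath, nil // single path: nothing to wait for
	}
	var dm string
	err := wait.PollImmediate(time.Second, 10*time.Second, func() (bool, error) {
		dm = findMultipathDeviceForDevice(devicePath)
		return dm != "", nil
	})
	if err != nil {
		return devicePath, err // fall back to the single path
	}
	return dm, nil
}
```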
Automatic merge from submit-queue (batch tested with PRs 63572, 67084). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove [Conformance] from tests in test/e2e_node
Conformance tests live inside test/e2e; none of the tests currently
tagged as `[Conformance]` in test/e2e_node actually get run when you run
the conformance tests with e2e.test (either directly or indirectly with
sonobuoy).
If these tests make sense as both `[NodeConformance]` and `[Conformance]`
tests, they should be ported to test/e2e/common.
/kind cleanup
/area conformance
/sig architecture
/cc @bgrant0607 @smarterclayton @timothysc
/sig node
/cc @yujuhong
ref: https://github.com/kubernetes/kubernetes/issues/66875
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
scope AWS LoadBalancer security group ICMP rules to spec.loadBalancerSourceRanges
/sig aws
**What this PR does / why we need it**:
Make the client CIDR ranges for the MTU-discovery ICMP rules consistent with what [the documentation appears to describe](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer), where the ranges should be equal to `spec.loadBalancerSourceRanges` if supplied.
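For illustration, a hedged aws-sdk-go sketch of scoping the Path-MTU-discovery ICMP rule (type 3, code 4) to the service's source ranges:
```go
package awslb

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// buildPMTUDPermission scopes the ICMP "fragmentation needed" rule to
// spec.loadBalancerSourceRanges, falling back to 0.0.0.0/0 when unset.
func buildPMTUDPermission(sourceRanges []string) *ec2.IpPermission {
	if len(sourceRanges) == 0 {
		sourceRanges = []string{"0.0.0.0/0"}
	}
	perm := &ec2.IpPermission{
		IpProtocol: aws.String("icmp"),
		FromPort:   aws.Int64(3), // for ICMP, FromPort holds the ICMP type
		ToPort:     aws.Int64(4), // and ToPort holds the ICMP code
	}
	for _, cidr := range sourceRanges {
		perm.IpRanges = append(perm.IpRanges, &ec2.IpRange{CidrIp: aws.String(cidr)})
	}
	return perm
}
```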
**Which issue(s) this PR fixes**:
Fixes #63564
**Release note**:
```release-note
scope AWS LoadBalancer security group ICMP rules to spec.loadBalancerSourceRanges
```