Commit Graph

68583 Commits (62745905188c6b5e0d3ae4f1cc1d20ca58d27a87)

Author SHA1 Message Date
Christoph Blecker 1c5b968152
Bump golang.org/x/crypto dep 2018-08-08 21:01:29 -07:00
Kubernetes Submit Queue c343fa4937
Merge pull request #66917 from dougm/cloud-doc
Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Cloud Provider Zones doc fixups

**What this PR does / why we need it**:

A few godoc fixups for Cloud Provider Zones.

```release-note
NONE
```
2018-08-08 20:53:06 -07:00
Kubernetes Submit Queue bd0de223da
Merge pull request #62945 from nak3/all-resource-create-role
Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

`kubectl create {clusterrole,role}`'s `--resources` flag supports an asterisk to specify all resources

**What this PR does / why we need it**:

Currently `kubectl create (cluster)role`'s `--resources` flag does not support an asterisk to specify all resources.

```
# kubectl create clusterrole superrole --verb=get  --resource=*
the server doesn't have a resource type "*"
```

As a user, we sometimes create a role with `--resources=*`, so this PR adds support for it.
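
For illustration, a rough sketch (trimmed stand-in type, not the generated RBAC object) of the rule such a command is expected to produce once `--resource=*` is accepted:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PolicyRule is a trimmed, illustrative stand-in for the RBAC rule shape;
// it is not the real rbac/v1 type.
type PolicyRule struct {
	Verbs     []string `json:"verbs"`
	Resources []string `json:"resources"`
}

func main() {
	// Roughly what `kubectl create clusterrole superrole --verb=get --resource=*`
	// should yield once the asterisk is accepted.
	rule := PolicyRule{Verbs: []string{"get"}, Resources: []string{"*"}}
	out, _ := json.MarshalIndent(rule, "", "  ")
	fmt.Println(string(out))
}
```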

Fixes https://github.com/kubernetes/kubernetes/issues/62989

**Special notes for your reviewer**:

- This patch does not support `--resource=*` for `SpecialVerbs`, e.g. `kubectl create role foo --verb=impersonate --resource=*`, because the current code also does not support `kubectl create role foo --verb=impersonate --resource=users,pods`

**Release note**:

```release-note
`kubectl create {clusterrole,role}`'s `--resources` flag supports an asterisk to specify all resources.
```
2018-08-08 20:53:02 -07:00
Kubernetes Submit Queue 508e8bcd84
Merge pull request #67026 from satyasm/upgrade_debian_base
Automatic merge from submit-queue (batch tested with PRs 67026, 62945, 66917). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Upgrade debian-base to 0.3.1 for CVEs

**What this PR does / why we need it**:
Upgrade debian-base to 0.3.1 in response to CVE fixes in debian-base

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:
Bumps up the version number of related components.

**Release note**:

```release-note
Bump up version number of debian-base, debian-hyperkube-base and debian-iptables. 
Also updates dependencies of users of debian-base. 
debian-base version 0.3.1 is already available.
```
2018-08-08 20:52:59 -07:00
Christoph Blecker e6d21f16d9
Require vendoring of cfssl binaries 2018-08-08 20:52:28 -07:00
Stephen Augustus ac920453ff Update `pkg/cloudprovider/providers/azure/OWNERS`
* Remove Jaice
* Remove Cole
* Add Stephen as reviewer

Signed-off-by: Stephen Augustus <foo@agst.us>
2018-08-08 23:32:18 -04:00
yue9944882 bc1fb1f7e8 node authz/ad externalization 2018-08-09 10:57:30 +08:00
Pengfei Ni 7962954053 Parse zoned first before using it 2018-08-09 10:23:53 +08:00
Kubernetes Submit Queue 93c990d708
Merge pull request #67035 from dims/multi-arch-images-for-echoserver
Automatic merge from submit-queue (batch tested with PRs 66987, 67035). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Multi-arch images for echoserver

Originally from:
https://github.com/kubernetes/ingress-nginx/tree/master/images/echoheaders

Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed for running
the e2e tests that use this container.

Change-Id: I15009268da4e7809a1c03d9af3181b585afa8139



**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 19:16:08 -07:00
Kubernetes Submit Queue 0a122a65c5
Merge pull request #66987 from mkumatag/volume_multiarch
Automatic merge from submit-queue (batch tested with PRs 66987, 67035). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Multiarch manifest for volume-tester docker images

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/48376

**Special notes for your reviewer**:
@dims @luxas 

Changes made:
- Removed the ceph folder which is not used anymore and merged into rbd image
- Converted following images multi-arch:
```
volume/gluster
volume/iscsi
volume/nfs
volume/rbd
```

**Release note**:

```release-note
NONE
```
2018-08-08 19:16:05 -07:00
Claudiu Belu 4ed859c307 tests: Skips AfterEach step if provider is not supported
The BeforeEach step for cluster_size_autoscaling is skipped if
the provider is not gce or gke. The AfterEach step should also
be skipped, since nothing was done.
2018-08-08 17:27:56 -07:00
Davanum Srinivas 6d9035762d
Multi-arch images for metadata-concealment check container
Originally from:
https://github.com/GoogleCloudPlatform/k8s-metadata-proxy/tree/master/test

Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed so the
metadata concealment e2e test can run on multiple architectures.

Change-Id: I15009268da4e7809a1c03d9af3181b585afa8139
2018-08-08 20:11:10 -04:00
Satyadeep Musuvathy 025a0b3bf3 Upgrade debian-base to 0.3.1 for CVEs 2018-08-08 16:50:10 -07:00
Kenjiro Nakayama 9cb24c4680 `kubectl create {clusterrole,role}`'s `--resources` flag supports an asterisk to specify all resources 2018-08-09 08:40:12 +09:00
Kubernetes Submit Queue 8f92b8e288
Merge pull request #67148 from yujuhong/add-gci-owner
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

GCE: Add OWNERS for image (gci) configuration

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 16:35:55 -07:00
Kubernetes Submit Queue a205089cff
Merge pull request #67149 from luxas/clientconfig_kubeconfig
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Rename KubeConfigFile to Kubeconfig in ClientConnectionConfiguration

**What this PR does / why we need it**:
As discussed with @liggitt we should make the field name and JSON tag consistent, and we concluded `Kubeconfig` and `kubeconfig` is the most consistent naming we have (e.g. wrt `--kubeconfig`), so we're going with that naming for the `ClientConnectionConfiguration` struct. Also, this preserves backwards-compat wrt existing serialized configuration. This fixes the API violation.
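
For illustration, a minimal sketch of the resulting shape (a trimmed stand-in struct, not the full upstream component config type): the Go field is `Kubeconfig` and the serialized key stays `kubeconfig`, matching `--kubeconfig`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ClientConnectionConfiguration is trimmed here for illustration; the real
// struct carries more fields (content type, QPS, burst, ...).
type ClientConnectionConfiguration struct {
	Kubeconfig string  `json:"kubeconfig"` // Go field was KubeConfigFile; serialized key unchanged
	QPS        float32 `json:"qps"`
}

func main() {
	cfg := ClientConnectionConfiguration{Kubeconfig: "/etc/kubernetes/scheduler.conf", QPS: 50}
	out, _ := json.Marshal(cfg)
	fmt.Println(string(out)) // {"kubeconfig":"/etc/kubernetes/scheduler.conf","qps":50}
}
```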

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: https://github.com/kubernetes/community/pull/2354

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
/assign  @liggitt @sttts
2018-08-08 16:32:14 -07:00
Kubernetes Submit Queue ae351f1184
Merge pull request #67121 from feiskyer/azdisk-affinity
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add DynamicProvisioningScheduling and VolumeScheduling support for Azure managed disks

**What this PR does / why we need it**:

Continue of [Azure Availability Zone feature](https://github.com/kubernetes/features/issues/586).

This PR adds `VolumeScheduling` and `DynamicProvisioningScheduling` support to Azure managed disks.

When the `VolumeScheduling` feature gate is disabled, no NodeAffinity is set for the PV:

```yaml
kubectl describe pv
Name:              pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
Labels:            failure-domain.beta.kubernetes.io/region=southeastasia
                   failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:       pv.kubernetes.io/bound-by-controller=yes
                   pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                   volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      default
Status:            Bound
Claim:             default/pvc-azuredisk
Reclaim Policy:    Delete
Access Modes:      RWO
Capacity:          5Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/region in [southeastasia]
                   failure-domain.beta.kubernetes.io/zone in [southeastasia-2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-d30dad05-9ad8-11e8-94f2-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:           <none>
```

When the `VolumeScheduling` feature gate is enabled, NodeAffinity will be populated for the PV:

```yaml
kubectl describe pv
Name:              pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
Labels:            failure-domain.beta.kubernetes.io/region=southeastasia
                   failure-domain.beta.kubernetes.io/zone=southeastasia-2
Annotations:       pv.kubernetes.io/bound-by-controller=yes
                   pv.kubernetes.io/provisioned-by=kubernetes.io/azure-disk
                   volumehelper.VolumeDynamicallyCreatedByKey=azure-disk-dynamic-provisioner
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      default
Status:            Bound
Claim:             default/pvc-azuredisk
Reclaim Policy:    Delete
Access Modes:      RWO
Capacity:          5Gi
Node Affinity:
  Required Terms:
    Term 0:        failure-domain.beta.kubernetes.io/region in [southeastasia]
                   failure-domain.beta.kubernetes.io/zone in [southeastasia-2]
Message:
Source:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    DiskURI:      /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Compute/disks/k8s-5b3d7b8f-dynamic-pvc-0284337b-9ada-11e8-a7f6-000d3a07de8c
    Kind:         Managed
    FSType:
    CachingMode:  None
    ReadOnly:     false
Events:           <none>
```

When both `VolumeScheduling` and `DynamicProvisioningScheduling` are enabled, the storage class also supports `allowedTopologies` and `volumeBindingMode: WaitForFirstConsumer` for volume topology-aware dynamic provisioning:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: managed-disk-dynamic
parameters:
  cachingmode: None
  kind: Managed
  storageaccounttype: Standard_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - southeastasia-2
    - southeastasia-1
```

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
DynamicProvisioningScheduling and VolumeScheduling are now supported for Azure managed disks. The DynamicProvisioningScheduling and VolumeScheduling feature gates should be enabled before using this feature.
```

/kind feature
/sig azure
/cc @brendandburns @khenidak @andyzhangx
/cc @ddebroy @msau42 @justaugustus
2018-08-08 16:32:10 -07:00
Kubernetes Submit Queue dd4ab76f05
Merge pull request #66589 from MorrisLaw/get_load_balancer_name_per_provider
Automatic merge from submit-queue (batch tested with PRs 67061, 66589, 67121, 67149). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Get load balancer name per provider

**What this PR does / why we need it**:
GetLoadBalancerName() should be implemented per cloud provider as opposed to one neutral implementation.

This PR addresses that by moving `cloudprovider.GetLoadBalancerName()` to the `LoadBalancer` interface and providing an implementation for each cloud provider, while maintaining the previously expected functionality.
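
For illustration, a rough sketch of that shape (simplified stand-in types and signature, not the exact upstream interface):

```go
package main

import "fmt"

// Service is a stand-in for *v1.Service, trimmed for illustration.
type Service struct {
	Name string
	UID  string
}

// LoadBalancer sketches the interface with the naming hook moved onto it,
// so each cloud provider supplies its own implementation.
type LoadBalancer interface {
	GetLoadBalancerName(clusterName string, service *Service) string
}

// defaultNamer mimics the kind of neutral behaviour a provider may keep:
// derive the name from the Service UID.
type defaultNamer struct{}

func (defaultNamer) GetLoadBalancerName(clusterName string, service *Service) string {
	return "a" + service.UID
}

func main() {
	var lb LoadBalancer = defaultNamer{}
	fmt.Println(lb.GetLoadBalancerName("cluster-1", &Service{Name: "web", UID: "1234abcd"}))
}
```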

**Which issue(s) this PR fixes**:
Fixes  [#43173](https://github.com/kubernetes/kubernetes/issues/43173)

**Special notes for your reviewer**:
This is a work in progress. Looking for feedback as I work on this, from any interested parties.

**Release note**:

```release-note
NONE
```
2018-08-08 16:32:07 -07:00
Kubernetes Submit Queue 8a47559203
Merge pull request #67061 from Random-Liu/fix-docker-registry
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix docker registry used in e2e test.

See https://github.com/kubernetes/kubernetes/pull/66055#issuecomment-410947650.

Fix docker registry used in e2e test, so that it works with all container runtimes.

**Release note**:

```release-note
none
```

/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
2018-08-08 15:54:30 -07:00
Sean Sullivan ff6113dfc8 Removes dependency on RBAC within kubernetes core 2018-08-08 13:58:35 -07:00
Morten Torkildsen a93ea43e15 Fix to handle hash collisions correctly for DaemonSet 2018-08-08 13:43:43 -07:00
Davanum Srinivas 6ac597062a
Remove the local manifest list after push
Manifests seem sticky in Docker, so purge the local copy so that if we
re-push a fresh set of containers (with the same version number as
before) during testing, the manifests are created fresh.

Change-Id: I41c010c08bd50b68ff6973a4ae1e004824fab178
2018-08-08 16:28:19 -04:00
Kubernetes Submit Queue 06bac1880d
Merge pull request #67108 from yujuhong/crictl-test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

GCE: add a crictl test

This verifies that crictl is available on the node.



**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 13:22:17 -07:00
Cheng Xing 7fa120c18c CSI plugin now calls NodeGetInfo() to get driver's node ID 2018-08-08 13:15:43 -07:00
juanvallejo d5651948cf
improve kubeconfig file modification time
Trades runtime complexity for space complexity when modifying a
large number of contexts in a kubeconfig.

In cases where a large number of contexts map to only a few
destination filenames, this patch prevents needlessly reading and
writing the same file (or small number of files) over and over
again.
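
A simplified sketch of the idea (stand-in types, not the real clientcmd structures): buffer the edits per destination file, then load and write each file once.

```go
package main

import "fmt"

// contextEdit is an illustrative stand-in for a single kubeconfig context change.
type contextEdit struct {
	contextName string
	destFile    string
}

// applyEdits groups edits by destination file so each file is read and
// written once, trading extra memory for fewer filesystem round trips.
func applyEdits(edits []contextEdit) {
	byFile := make(map[string][]contextEdit)
	for _, e := range edits {
		byFile[e.destFile] = append(byFile[e.destFile], e)
	}
	for file, group := range byFile {
		// load the kubeconfig at `file` once, apply every edit in `group`, write once
		fmt.Printf("writing %d context change(s) to %s\n", len(group), file)
	}
}

func main() {
	applyEdits([]contextEdit{
		{contextName: "dev", destFile: "/home/user/.kube/config"},
		{contextName: "prod", destFile: "/home/user/.kube/config"},
	})
}
```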
2018-08-08 16:13:03 -04:00
Lucas Käldström 2ff9bd6699
Rename the KubeConfigFile field to Kubeconfig in ClientConnectionConfiguration 2018-08-08 22:25:55 +03:00
Davanum Srinivas a2d94d9a3f
Multi-arch images for echoserver
Originally from:
https://github.com/kubernetes/ingress-nginx/tree/master/images/echoheaders

Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed for running
the e2e tests that use this container.

Change-Id: I15009268da4e7809a1c03d9af3181b585afa8139
2018-08-08 15:20:31 -04:00
Yu-Ju Hong ae6a76a47f GCE: Add OWNERS for image (gci) configuration 2018-08-08 12:08:05 -07:00
Lantao Liu e232c4fe26 Fix docker registry used in e2e test. 2018-08-08 10:35:58 -07:00
Yu-Ju Hong 0bd9d47c16 GCE: add a crictl test
This verifies that crictl is available on the node.
2018-08-08 10:25:17 -07:00
Kubernetes Submit Queue 652cebcba5
Merge pull request #67117 from xiangpengzhao/check-cfgpath
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Check config path for command "kubeadm alpha phase kubelet write-env-file"

**What this PR does / why we need it**:
Explicitly check the `--config` flag of command `kubeadm alpha phase kubelet write-env-file`.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubeadm/issues/1043

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 10:23:03 -07:00
Kubernetes Submit Queue 69ae314442
Merge pull request #67030 from dims/multi-arch-images-for-apparmor-loader
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Multi-arch images for apparmor-loader container

**What this PR does / why we need it**:

Originally from:
https://github.com/kubernetes/contrib/tree/master/apparmor/loader

Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed so the
apparmor e2e test can run on multiple architectures.

Change-Id: Idece17c494fc944c0aaef64805d2f0e3c4d7fb28

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 09:04:26 -07:00
Kubernetes Submit Queue e38efdcce6
Merge pull request #66698 from WanLinghao/token_projected_improve
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

refuse serviceaccount projection volume request when pod has no serviceaccount bound

**What this PR does / why we need it**:
Currently, if a user starts a cluster with the ServiceAccount admission plugin disabled and then creates a Pod
like this:
```
kind: Pod 
apiVersion: v1
metadata:
  labels:
    run: nginx
  name: busybox2
spec:
      containers:
      - image: gcr.io/google-containers/nginx
        name: nginx
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: token
      - image: ubuntu
        name: ttt 
        volumeMounts:
        - mountPath: /var/run/secrets/tokens
          name: token
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: tokenPath
              expirationSeconds: 6000
              audience: gakki-audiences
```
The pod creation will fail with error info like:
Events:
```
  Type     Reason       Age               From                Message
  ----     ------       ----              ----                -------
  Normal   Scheduled    23s               default-scheduler   Successfully assigned office/busybox2 to 127.0.0.1
  Warning  FailedMount  8s (x6 over 23s)  kubelet, 127.0.0.1  MountVolume.SetUp failed for volume "token" : failed to fetch token: resource name may not be empty
```
We should refuse the projection request earlier; this patch fixes that.
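
A simplified sketch of the earlier check (stand-in types, not the real kubelet or API validation code):

```go
package main

import (
	"errors"
	"fmt"
)

// podSpec is an illustrative stand-in, not the real core/v1 type.
type podSpec struct {
	ServiceAccountName        string
	HasServiceAccountTokenVol bool
}

// checkTokenProjection refuses the request up front instead of failing later
// in MountVolume.SetUp with "resource name may not be empty".
func checkTokenProjection(p podSpec) error {
	if p.HasServiceAccountTokenVol && p.ServiceAccountName == "" {
		return errors.New("serviceAccountToken projection requires a service account bound to the pod")
	}
	return nil
}

func main() {
	fmt.Println(checkTokenProjection(podSpec{HasServiceAccountTokenVol: true}))
}
```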


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 07:46:17 -07:00
Kubernetes Submit Queue 28d649c2f5
Merge pull request #66932 from nilebox/discovery-include-unavailable
Automatic merge from submit-queue (batch tested with PRs 66394, 66888, 66932). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Include unavailable apiservices in discovery response

**What this PR does / why we need it**:
Include unavailable apiservices in the `apis/` discovery endpoint response to fix namespace deletion https://github.com/kubernetes-incubator/service-catalog/issues/2254

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes-incubator/service-catalog/issues/2254

**Special notes for your reviewer**:

**Release note**:


```release-note
kube-apiserver now includes all registered API groups in discovery, including registered extension API group/versions for unavailable extension API servers.
```
2018-08-08 07:00:14 -07:00
Kubernetes Submit Queue 15c2dd906e
Merge pull request #66888 from yue9944882/refactor/promote-informers-into-master-cfg
Automatic merge from submit-queue (batch tested with PRs 66394, 66888, 66932). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Promote internal/external informers into master.Config

**Release note**:

```release-note
NONE
```
xref #66386

Shorten `BuildGenericConfig`'s return list. 
Put the internal and external informers into master.Config. Previous art:


60614d5cdc/staging/src/k8s.io/apiserver/pkg/server/config.go (L196)
2018-08-08 07:00:08 -07:00
Kubernetes Submit Queue 446eef54c5
Merge pull request #66394 from rtripat/i-65724
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support pulling requestheader CA from extension-apiserver-authentication ConfigMap without client CA

This commit prevents the extension API server from erroring out during bootstrap when the core
API server doesn't support certificate-based authentication for its clients, i.e. client-ca isn't
present in the extension-apiserver-authentication ConfigMap in kube-system.

This can happen in cluster setups where the core API server uses webhook token authentication.
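
For illustration, a sketch of the ConfigMap data this change tolerates (values are placeholders): the requestheader CA keys are present while `client-ca-file` is absent.

```go
package main

import "fmt"

func main() {
	// Illustrative contents of the kube-system/extension-apiserver-authentication
	// ConfigMap when the core API server does not use client certificates.
	data := map[string]string{
		"requestheader-client-ca-file":   "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
		"requestheader-allowed-names":    `["front-proxy-client"]`,
		"requestheader-username-headers": `["X-Remote-User"]`,
		// no "client-ca-file" key here
	}
	if _, ok := data["client-ca-file"]; !ok {
		// With this change the aggregated API server keeps bootstrapping and
		// relies on the requestheader CA only, instead of erroring out.
		fmt.Println("client-ca-file absent; continuing with requestheader CA")
	}
}
```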

Fixes: https://github.com/kubernetes/kubernetes/issues/65724

**Which issue(s) this PR fixes** 
Fixes #65724

**Special notes for your reviewer**:

**Release note**:
```release-note
Allows the extension API server to dynamically discover the requestheader CA certificate when the core API server doesn't use certificate-based authentication for its clients
```
2018-08-08 06:30:27 -07:00
Kubernetes Submit Queue 6800f9e144
Merge pull request #66653 from hzxuzhonghu/dynamic-watch
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add watch integration test for dynamic client

**What this PR does / why we need it**:

Add watch to dynamic client integration test
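
For context, a rough sketch of the watch call pattern the dynamic client exposes (illustrative resource and kubeconfig path; exact method signatures vary across client-go versions, the context argument below follows newer releases):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a kubeconfig path (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch an arbitrary resource through the dynamic client.
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
	w, err := client.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
	}
}
```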


**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-08-08 05:12:11 -07:00
Davanum Srinivas c898ced795
Multi-arch images for apparmor-loader container
Originally from:
https://github.com/kubernetes/contrib/tree/master/apparmor/loader

Moving the code here to prevent bit-rot and to be sure we can recreate
or update the images on demand. Moving it here also ensures we can use
the common harness to build the multi-arch manifests needed so the
apparmor e2e test can run on multiple architectures.

Change-Id: Idece17c494fc944c0aaef64805d2f0e3c4d7fb28
2018-08-08 06:04:35 -04:00
Tripathi db828a4440 Support pulling requestheader CA from extension-apiserver-authentication ConfigMap without client CA
This commit prevents the extension API server from erroring out during bootstrap when the core
API server doesn't support certificate-based authentication for its clients, i.e. client-ca isn't
present in the extension-apiserver-authentication ConfigMap in kube-system.

This can happen in cluster setups where the core API server uses webhook token authentication.

Fixes: https://github.com/kubernetes/kubernetes/issues/65724
2018-08-08 02:48:49 -07:00
Kubernetes Submit Queue 28b2b21287
Merge pull request #65891 from CaoShuFeng/audit_v1_stable
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

upgrade Audit api version to stable

Partial Fix: https://github.com/kubernetes/kubernetes/issues/65266

TODO:
    use v1 version of advanced audit policy in [kubeadm](86b9a53226/cmd/kubeadm/app/util/audit/utils.go (L29)), [gce script](86b9a53226/cluster/gce/gci/configure-helper.sh (L743)), [kubemark](86b9a53226/test/kubemark/resources/start-kubemark-master.sh (L349))
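
For illustration, a minimal policy written against the now-stable group/version (trimmed stand-in types, not the real audit API structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Policy and PolicyRule are trimmed, illustrative stand-ins for the
// audit.k8s.io/v1 shapes.
type Policy struct {
	APIVersion string       `json:"apiVersion"`
	Kind       string       `json:"kind"`
	Rules      []PolicyRule `json:"rules"`
}

type PolicyRule struct {
	Level string `json:"level"`
}

func main() {
	p := Policy{
		APIVersion: "audit.k8s.io/v1", // previously audit.k8s.io/v1beta1
		Kind:       "Policy",
		Rules:      []PolicyRule{{Level: "Metadata"}},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out))
}
```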



**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
The audit.k8s.io API group is upgraded from v1beta1 to v1.
The deprecated elements metav1.ObjectMeta and Timestamp are removed from audit Events in the v1 version.
The default value of the --audit-webhook-version and --audit-log-version options will change from `audit.k8s.io/v1beta1` to `audit.k8s.io/v1` in release 1.13.
```
2018-08-08 02:17:24 -07:00
Pengfei Ni 30fe79d63f Add DynamicProvisioningScheduling and VolumeScheduling support for AzureDisk 2018-08-08 17:05:46 +08:00
xiangpengzhao 3f2c7b6fda Check config path for command "kubeadm alpha phase kubelet write-env-file" 2018-08-08 14:29:53 +08:00
Ben Swartzlander 39e52ddae5 Add wait loop for multipath devices to appear
It takes a variable amount of time for the multipath daemon
to create /dev/dm-XX in response to new LUNs being discovered.
The old iscsi_util code only discovered the multipath device
if it was created quickly enough, but in a significant number
of cases, kubelet would grab one of the individual paths and
put a filesystem on it before multipathd could construct a
multipath device.

This change waits for the multipath device to get created for
up to 10 seconds, but only if the PV actually had more than
one portal.
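
A minimal sketch of that wait loop (the path, poll interval, and helper are illustrative, not the exact iscsi_util implementation):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForMultipathDevice polls for the device node that multipathd creates,
// instead of assuming it already exists when the LUN shows up.
func waitForMultipathDevice(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // device node exists; safe to put a filesystem on it
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	// Only wait when the PV has more than one portal, mirroring the change.
	if err := waitForMultipathDevice("/dev/dm-3", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```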
2018-08-08 00:44:45 -04:00
Nail Islamov d4690f4aec
Include unavailable API services in discovery response 2018-08-08 07:26:27 +03:00
xiangpengzhao 610ca1f60c Auto generated BUILD files. 2018-08-08 12:06:34 +08:00
Kubernetes Submit Queue 81c6b735fa
Merge pull request #67084 from spiffxp/rm-conformance-from-e2e_node
Automatic merge from submit-queue (batch tested with PRs 63572, 67084). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove [Conformance] from tests in test/e2e_node

Conformance tests live inside test/e2e; none of the tests currently
tagged as `[Conformance]` in test/e2e_node actually get run when you run
the conformance tests with e2e.test (either directly or indirectly with
sonobuoy).

If these tests make sense as both `[NodeConformance]` and `[Conformance]`
tests, they should be ported to test/e2e/common.

/kind cleanup
/area conformance
/sig architecture
/cc @bgrant0607 @smarterclayton @timothysc
/sig node
/cc @yujuhong

ref: https://github.com/kubernetes/kubernetes/issues/66875

```release-note
NONE
```
2018-08-07 21:06:04 -07:00
xiangpengzhao 5cf9291e02 Set kubeadm version as the default version in phase command. 2018-08-08 12:05:36 +08:00
Kubernetes Submit Queue fc89a80934
Merge pull request #63572 from haz-mat/aws-lb-sg-scope-icmp
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

scope AWS LoadBalancer security group ICMP rules to spec.loadBalancerSourceRanges

/sig aws
**What this PR does / why we need it**:
Make the client CIDR ranges for the ICMP (MTU discovery) rules consistent with what [the documentation appears to describe](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer), where the ranges should be equal to `spec.loadBalancerSourceRanges` if supplied.
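
A minimal sketch of the intended scoping logic (illustrative helper, not the cloud provider's actual code):

```go
package main

import "fmt"

// icmpRuleCIDRs sketches the intended behaviour: the ICMP (path MTU discovery)
// rules on the AWS security group are scoped to spec.loadBalancerSourceRanges
// when it is supplied, instead of 0.0.0.0/0.
func icmpRuleCIDRs(loadBalancerSourceRanges []string) []string {
	if len(loadBalancerSourceRanges) > 0 {
		return loadBalancerSourceRanges
	}
	return []string{"0.0.0.0/0"} // no restriction requested
}

func main() {
	fmt.Println(icmpRuleCIDRs([]string{"203.0.113.0/24"})) // [203.0.113.0/24]
	fmt.Println(icmpRuleCIDRs(nil))                        // [0.0.0.0/0]
}
```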

**Which issue(s) this PR fixes**:
Fixes #63564

**Release note**:
```release-note
scope AWS LoadBalancer security group ICMP rules to spec.loadBalancerSourceRanges
```
2018-08-07 20:49:15 -07:00
WanLinghao 5a27ee9282 refuse serviceaccount projection volume request when pod has no serviceaccount bound 2018-08-08 10:29:07 +08:00
Anago GCB 2fa93a94c5 Update CHANGELOG-1.11.md for v1.11.2. 2018-08-08 02:22:28 +00:00