Commit Graph

48812 Commits (b4ddf4720d72952041392341950f6987e908d007)

Author SHA1 Message Date
gmarek ded8e03fc3 Reduce service creation/deletion parallelism in the load test 2017-05-25 11:44:03 +02:00
Michail Kargakis 4a2c5eae92
Implement hash collision avoidance mechanism
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:45 +02:00
Michail Kargakis aeb2d9b9b4
Deep equality helper should not mutate state
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:45 +02:00
Michail Kargakis fcf68ba7a7
Remove obsolete deployment helpers
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:44 +02:00
Michail Kargakis 4aa8b1a66a
Add collisionCount api field in DeploymentStatus
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:44 +02:00
Kubernetes Submit Queue 4def5add11 Merge pull request #46373 from deads2k/controller-06-queue
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

don't queue namespaces for deletion if the namespace isn't deleted

Most namespaces aren't deleted most of the time.  No need to queue them for cleanup if they aren't deleted.
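For illustration, a minimal sketch of the idea (not the PR's actual code), assuming a standard client-go workqueue and present-day import paths:

```go
package nscleanup // sketch only; names and import paths are assumptions

import (
	v1 "k8s.io/api/core/v1"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// enqueueIfDeleting queues a namespace for cleanup only when it is actually
// being deleted, i.e. when its DeletionTimestamp has been set.
func enqueueIfDeleting(ns *v1.Namespace, queue workqueue.RateLimitingInterface) {
	if ns.DeletionTimestamp == nil {
		return // most namespaces are not being deleted; skip them
	}
	key, err := cache.MetaNamespaceKeyFunc(ns)
	if err != nil {
		utilruntime.HandleError(err)
		return
	}
	queue.Add(key)
}
```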
2017-05-25 00:11:07 -07:00
Kubernetes Submit Queue d84f3f4b7e Merge pull request #46363 from MrHohn/fix-CheckPodsCondition
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Fix CheckPodsCondition to print out the correct podName

A couple of CI runs (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/1114, https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gci-qa-serial-master/2246, https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-pre-release/2187) all indicate that we print out the wrong pod name in CheckPodsCondition for _"Pod XXX failed to be running and ready, or succeeded."_:
```
I0524 02:09:50.173] May 24 02:09:50.173: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m55.033881993s elapsed)
I0524 02:09:52.178] May 24 02:09:52.178: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m57.03848264s elapsed)
I0524 02:09:54.183] May 24 02:09:54.182: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m59.043463323s elapsed)
I0524 02:09:56.183] May 24 02:09:56.183: INFO: Pod fluentd-gcp-v2.0-6wf67 failed to be running and ready, or succeeded.
I0524 02:09:56.184] May 24 02:09:56.183: INFO: Wanted all 23 pods to be running and ready, or succeeded. Result: false. Pods: [heapster-v1.3.0-3806988011-kzkg6 kube-proxy-bootstrap-e2e-minion-group-bbwn rescheduler-v0.3.0-bootstrap-e2e-master monitoring-influxdb-grafana-v4-1q59k l7-default-backend-1044750973-zgxsc etcd-server-events-bootstrap-e2e-master kube-apiserver-bootstrap-e2e-master kube-proxy-bootstrap-e2e-minion-group-6nqb kube-proxy-bootstrap-e2e-minion-group-mzbz fluentd-gcp-v2.0-chd2x kube-dns-806549836-f8p46 fluentd-gcp-v2.0-44x97 kube-dns-autoscaler-2528518105-vlg8t fluentd-gcp-v2.0-p1h4b kube-controller-manager-bootstrap-e2e-master l7-lb-controller-v0.9.3-bootstrap-e2e-master kubernetes-dashboard-2917854236-tn3nx kube-dns-806549836-fq2fp kube-scheduler-bootstrap-e2e-master etcd-empty-dir-cleanup-bootstrap-e2e-master kube-addon-manager-bootstrap-e2e-master etcd-server-bootstrap-e2e-master fluentd-gcp-v2.0-6wf67]
I0524 02:09:56.184] May 24 02:09:56.183: INFO: At least one pod wasn't running and ready or succeeded at test start.
I0524 02:09:56.184] [AfterEach] [k8s.io] Restart [Disruptive]
```

Checked the code and found that we always print out the last pod name, which is effectively random. Passing the pod name through the channel fixes this.
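A minimal sketch of that fix (hypothetical helper names, not the e2e framework's actual code): each checker goroutine reports its own pod name over the channel, so the failure message cannot pick up a stale loop variable.

```go
package e2esketch // illustrative only; names here are hypothetical

import "fmt"

// podCondition carries the result for one pod together with its name.
type podCondition struct {
	name    string
	success bool
}

// checkPodsCondition fans out one check per pod and collects results on a
// channel that includes the pod name, so failures are reported against the
// right pod rather than whatever the shared loop variable last held.
func checkPodsCondition(podNames []string, check func(name string) bool) bool {
	results := make(chan podCondition, len(podNames))
	for _, name := range podNames {
		name := name // capture the per-iteration value for the goroutine
		go func() {
			results <- podCondition{name: name, success: check(name)}
		}()
	}
	allOK := true
	for range podNames {
		if res := <-results; !res.success {
			fmt.Printf("Pod %s failed to be running and ready, or succeeded.\n", res.name)
			allOK = false
		}
	}
	return allOK
}
```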

**Release note**:

```release-note
NONE
```
2017-05-25 00:11:05 -07:00
Kubernetes Submit Queue f5bdd61b12 Merge pull request #46352 from humblec/gluster-mount-4
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Don't exit if 'mount.glusterfs -V' results in an error.
2017-05-25 00:11:03 -07:00
Kubernetes Submit Queue 74f501935b Merge pull request #46065 from timstclair/audit-api
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Update audit API with missing pieces

Follow-up to https://github.com/kubernetes/kubernetes/pull/45315 to resolve pending decisions & issues, including:

- Audit ID format
- Identifying audit event "stage"
- Request/Response object format (resolve conversion issue)
- Add a subresource field to the `ObjectReference`

For https://github.com/kubernetes/features/issues/22

~~TODO: Add generated code once we've reached consensus on the types.~~

/cc @deads2k @ihmccreery @sttts @soltysh @ericchiang
2017-05-25 00:11:01 -07:00
Kubernetes Submit Queue fe5b303365 Merge pull request #45913 from enj/enj/t/etcd_cohabitating_resources
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Detect cohabitating resources in etcd storage test

**What this PR does / why we need it**:

This change updates the etcd storage path test to detect cohabitating resources by looking at their expected location in etcd.  This was not detected in the past because the GVK check did not span across groups.

To limit noise from failures caused by multiple objects at the same location in etcd, the test now fails when different GVRs share the same expected path.  Thus every object is expected to have a unique path.
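Sketched loosely (stand-in types, not the test's real ones), the duplicate-path check amounts to indexing the expected etcd paths and failing on collisions:

```go
package etcdpathsketch // stand-in types for illustration

import "fmt"

// gvr stands in for a schema.GroupVersionResource.
type gvr struct{ group, version, resource string }

// checkUniquePaths returns one error per pair of GVRs whose objects are
// expected at the same etcd key; a collision across groups is exactly what a
// cohabitating resource looks like in storage.
func checkUniquePaths(expectedPaths map[gvr]string) []error {
	var errs []error
	seen := map[string]gvr{}
	for resource, path := range expectedPaths {
		if prev, dup := seen[path]; dup {
			errs = append(errs, fmt.Errorf("%v and %v share etcd path %q", prev, resource, path))
			continue
		}
		seen[path] = resource
	}
	return errs
}
```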

@liggitt PTAL

Signed-off-by: Monis Khan <mkhan@redhat.com>

**Release note**:

```
NONE
```
2017-05-25 00:10:59 -07:00
Kubernetes Submit Queue 80171e5106 Merge pull request #46150 from bowei/ip-alias-service
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Create a subnet for reserving the service cluster IP range

This will be done if IP aliases is enabled on GCP.

```release-note
NONE
```
2017-05-24 23:19:11 -07:00
Kubernetes Submit Queue ed8843406e Merge pull request #46303 from Random-Liu/fix-cos-image-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Fix cos image project to cos-cloud.

Addressed https://github.com/kubernetes/kubernetes/pull/45136#discussion_r118092211.

@vishh @yujuhong @dchen1107
2017-05-24 23:19:09 -07:00
Kubernetes Submit Queue 8d88c55231 Merge pull request #46311 from dashpole/disable_ubuntu_gpu_test
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Don't attach a GPU to Ubuntu test machines for node e2e serial tests

This should fix flakes in the e2e_node serial suite.

@vishh I think this is what you were asking for...

/assign @vishh
2017-05-24 23:19:07 -07:00
Kubernetes Submit Queue b71ca6691b Merge pull request #46309 from Random-Liu/move-docker-validation-to-separate-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Move docker validation test to separate project.

The Docker validation test is leaking VMs because the new Docker version `DOCKER_VERSION=17.05.0-c` completely breaks the new GCI image `GCE_IMAGES=gci-test-60-9579-0-0` when the `gci-docker-version` metadata is specified.

The test successfully created the instance, but timed out when checking VM aliveness, and leaked the VM.

I've cleaned up all the leaked VMs. This PR moves the Docker validation node e2e test into a separate project so that it does not affect other node e2e tests.

@kewu1992 We should fix the docker automated validation test.

/cc @dchen1107 @yujuhong @abgworrall
2017-05-24 23:19:05 -07:00
Kubernetes Submit Queue 3c2e6a9f4d Merge pull request #46299 from ncdc/fix-DirectClientConfig-Namespace-override
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Fix in-cluster kubectl --namespace override

**What this PR does / why we need it**:
Before this change, if the config was empty, ConfirmUsable() would
return an "invalid configuration" error instead of examining and
honoring the value of the --namespace flag. This change looks at the
overrides first, and returns the overridden value if it exists before
attempting to check if the config is usable. This is most applicable to
in-cluster clients, which don't have a kubeconfig but do have a token
and can use KUBERNETES_SERVICE_HOST/_PORT.
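A rough sketch of that reordering, with stand-in types rather than client-go's actual DirectClientConfig:

```go
package kubeconfigsketch // illustrative stand-in types, not client-go's real ones

// contextOverrides stands in for the namespace part of clientcmd's overrides.
type contextOverrides struct{ Namespace string }

type clientConfig struct {
	overrides     *contextOverrides
	mergedContext contextOverrides
	confirmUsable func() error
}

// Namespace returns the effective namespace. The override is consulted before
// the usability check, so an otherwise empty in-cluster configuration can
// still honor --namespace.
func (c *clientConfig) Namespace() (string, bool, error) {
	if c.overrides != nil && c.overrides.Namespace != "" {
		return c.overrides.Namespace, true, nil // true: namespace was overridden
	}
	if err := c.confirmUsable(); err != nil {
		return "", false, err
	}
	if c.mergedContext.Namespace != "" {
		return c.mergedContext.Namespace, false, nil
	}
	return "default", false, nil
}
```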

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
The --namespace flag is now honored for in-cluster clients that have an empty configuration.
```

@kubernetes/sig-api-machinery-pr-reviews @fabianofranz @liggitt @deads2k @smarterclayton @caesarxuchao @soltysh
2017-05-24 23:18:59 -07:00
zhangxiaoyu-zidif 8e0add42f3 hollow-node.go: delete unused parameter and import 2017-05-25 12:54:01 +08:00
Kubernetes Submit Queue cbd6b25c1c Merge pull request #46207 from zjj2wry/spea-space
Automatic merge from submit-queue

/pkg/client/listers: fix some typos

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-24 20:39:00 -07:00
Tim Hockin 2856fde23b Use BoundedFrequencyRunner in kube-proxy 2017-05-24 20:33:15 -07:00
Tim Hockin bbb80c252b Add bounded frequency runner
This lib manages runs of a function to have min and max frequencies.
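A conceptual sketch of what "bounded frequency" means here (this is not the package's actual API, just an illustration of minimum and maximum spacing between runs):

```go
package boundedsketch // conceptual illustration, not the real BoundedFrequencyRunner API

import "time"

// run invokes fn at most once per minInterval when runs are requested, and at
// least once per maxInterval even if nothing asks for a run.
func run(fn func(), requests <-chan struct{}, minInterval, maxInterval time.Duration, stop <-chan struct{}) {
	fn()
	lastRun := time.Now()
	for {
		select {
		case <-stop:
			return
		case <-requests:
			// Enforce the minimum spacing between runs.
			if wait := minInterval - time.Since(lastRun); wait > 0 {
				time.Sleep(wait)
			}
		case <-time.After(maxInterval - time.Since(lastRun)):
			// Maximum interval elapsed without a request; run anyway.
		}
		fn()
		lastRun = time.Now()
	}
}
```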
2017-05-24 20:33:15 -07:00
Tim Hockin 3153ca2815 Inject clock through flowcontrol 2017-05-24 20:33:15 -07:00
Tim Hockin 3178433b9f Update godeps for juju ratelimit
This picked up an unrelated but missing change.
2017-05-24 20:33:15 -07:00
Tim Hockin 578d9fcf63 Logging/naming cleanup for service port names 2017-05-24 20:33:15 -07:00
Kubernetes Submit Queue 9812856088 Merge pull request #45317 from ericchiang/oidc-client-update
Automatic merge from submit-queue

oidc client plugin: reduce round trips and fix scopes requested

This PR attempts to simplify the OpenID Connect client plugin to
reduce round trips. The steps taken by the client are now:

* If ID Token isn't expired:
   * Do nothing.
* If ID Token is expired:
   * Query /.well-known discovery URL to find token_endpoint.
   * Use an OAuth2 client and refresh token to request new ID token.

This avoids the previous pattern of always initializing a client,
which would hit the /.well-known endpoint several times.

The client no longer does token validation since the server already
does this. As a result, this code no longer imports
github.com/coreos/go-oidc, instead just using golang.org/x/oauth2
for refreshing.

There is an overall reduction in tests because we're no longer verifying
as many things on the client side. For example, we're no longer validating
the id_token signature (again, because it's done on the server side).

This has been manually tested against dex, and I hope to continue
to test this over the 1.7 release cycle.
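A rough sketch of that refresh path, assuming only golang.org/x/oauth2 and the standard library (helper names here are illustrative, not the plugin's):

```go
package oidcsketch // sketch of the refresh flow described above

import (
	"context"
	"encoding/json"
	"net/http"

	"golang.org/x/oauth2"
)

// tokenEndpoint fetches the issuer's OIDC discovery document and returns the
// token_endpoint, the only field this flow needs.
func tokenEndpoint(issuer string) (string, error) {
	resp, err := http.Get(issuer + "/.well-known/openid-configuration")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var doc struct {
		TokenEndpoint string `json:"token_endpoint"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&doc); err != nil {
		return "", err
	}
	return doc.TokenEndpoint, nil
}

// refreshIDToken trades a refresh token for a new id_token using plain OAuth2,
// leaving token validation to the API server.
func refreshIDToken(ctx context.Context, issuer, clientID, clientSecret, refreshToken string) (string, error) {
	endpoint, err := tokenEndpoint(issuer)
	if err != nil {
		return "", err
	}
	conf := &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		Endpoint:     oauth2.Endpoint{TokenURL: endpoint},
	}
	tok, err := conf.TokenSource(ctx, &oauth2.Token{RefreshToken: refreshToken}).Token()
	if err != nil {
		return "", err
	}
	idToken, _ := tok.Extra("id_token").(string)
	return idToken, nil
}
```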

cc @mlbiam @frodenas @curtisallen @jsloyer @rithujohn191 @philips @kubernetes/sig-auth-pr-reviews 

```release-note
NONE
```

Updates https://github.com/kubernetes/kubernetes/issues/42654
Closes https://github.com/kubernetes/kubernetes/issues/37875
Closes https://github.com/kubernetes/kubernetes/issues/37874
2017-05-24 19:49:26 -07:00
NickrenREN add091b1fb fix UX regression for double attach volume
send an event when a volume is not allowed to multi-attach
2017-05-25 09:27:24 +08:00
Dong Liu fb26c9100a Support TCP type runtime endpoint for kubelet. 2017-05-25 09:16:11 +08:00
Rohit Agarwal 0f5cc4027f Implement FakeVolumePlugin's ConstructVolumeSpec method according to interface expectation.
This fixes #45803 and #46204.
2017-05-24 17:26:34 -07:00
System Administrator 9c8e92b8ff e2e tests for storage policy support in Kubernetes 2017-05-24 16:39:00 -07:00
ymqytw 7e3d250da4 should not sort when comparing sort results 2017-05-24 16:34:17 -07:00
Kubernetes Submit Queue ee0de5f376 Merge pull request #46268 from jianglingxia/jlx523
Automatic merge from submit-queue

fix the invalid link

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-24 16:17:23 -07:00
Kubernetes Submit Queue aeeadb0c03 Merge pull request #46329 from zjj2wry/DeamonSet-DaemonSet
Automatic merge from submit-queue

DeamonSet-DaemonSet

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-24 16:17:15 -07:00
Kubernetes Submit Queue de1ebf8118 Merge pull request #44443 from jamiehannaford/kubelet-tc
Automatic merge from submit-queue

Bump kubelet/networks test coverage

**What this PR does / why we need it**:

Bumps test coverage

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/40780
https://github.com/kubernetes/kubernetes/issues/39559

**Special notes for your reviewer**:

Writing positive test cases for these lines:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/networks.go#L38 https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/networks.go#L69 
is quite difficult, so the former has a negative case and the latter has no test coverage.

**Release note**:
```release-note
New tests for kubelet/networks
```
2017-05-24 16:17:08 -07:00
Harsh Desai ad4f21f26c Dedup common code for fetching portworx driver 2017-05-24 14:52:04 -07:00
Harsh Desai bbfda9cdfe Remove call to common unmount routine as Portworx handles the entire unmount workflow 2017-05-24 14:52:03 -07:00
Harsh Desai 779455aa32 fix bazel build 2017-05-24 14:52:03 -07:00
Harsh Desai e860da4bd2 Use Portworx service as api endpoint for volume operations 2017-05-24 14:52:03 -07:00
Harsh Desai 244a0b7b7e Add support for Portworx plugin to query remote API servers 2017-05-24 14:52:03 -07:00
Timothy St. Clair 1fb55a567d Update RBAC policy for configmap locked leader leasing. 2017-05-24 16:32:12 -05:00
Kubernetes Submit Queue 89a76b8c8b Merge pull request #46128 from jagosan/master
Automatic merge from submit-queue

Added deprecation notice and guidance for cloud providers.

**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers. 

**Which issue this PR fixes** 

**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
2017-05-24 14:19:01 -07:00
Kubernetes Submit Queue c1d6439fe3 Merge pull request #46262 from xilabao/fix-message-in-storage-extensions
Automatic merge from submit-queue

fix err message in storage extensions

**Release note**:

```release-note
`NONE`
```
2017-05-24 14:18:53 -07:00
Kubernetes Submit Queue b3181ec2f3 Merge pull request #46305 from sjenning/init-container-status
Automatic merge from submit-queue

clear init container status annotations when cleared in status

When a pod with an init container is terminated due to exceeding its active deadline, the pod status is phase `Failed` with reason `DeadlineExceeded`. All container statuses are cleared from the pod status.

With init containers, however, the status is being regenerated from the status annotations.  This is causing kubectl to report the pod state as `Init:0/1` instead of `DeadlineExceeded` because the kubectl printer observes a running init container, which in reality is not running.

This PR clears out the init container status annotations when they have been removed from the pod status so they are not regenerated on the apiserver.
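In rough outline (the annotation key below is a placeholder for illustration, not the real constant), the idea is:

```go
package podstatussketch // illustrative only; the annotation key is a placeholder

const initContainerStatusesAnnotation = "example.io/init-container-statuses" // placeholder key

// pruneInitContainerStatusAnnotation drops the status annotation once the pod
// status no longer carries init container statuses, so stale statuses cannot
// be regenerated from it on the apiserver.
func pruneInitContainerStatusAnnotation(annotations map[string]string, haveInitContainerStatuses bool) {
	if !haveInitContainerStatuses {
		delete(annotations, initContainerStatusesAnnotation)
	}
}
```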

xref https://bugzilla.redhat.com/show_bug.cgi?id=1453180

@derekwaynecarr 

```release-note
Fix init container status reporting when active deadline is exceeded.
```
2017-05-24 14:18:45 -07:00
David Ashpole 1a6572fc6c summary test now covers a pod whose containers have restarted 2017-05-24 13:27:57 -07:00
Clayton Coleman ad431c454c
Subresources are not included in apiserver prometheus metrics
Subresources very often follow completely different code paths, and errors
generated on those paths are important to distinguish.
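A hedged sketch of what distinguishing them looks like with the Prometheus client (metric and label names here are illustrative, not the apiserver's real ones):

```go
package metricsketch // illustrative metric/label names

import "github.com/prometheus/client_golang/prometheus"

// requestCounter carries a "subresource" label so that, e.g., GET pods and
// GET pods/status are counted as the distinct code paths they are.
var requestCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "apiserver_request_count_example",
		Help: "Counter of apiserver requests, broken out by verb, resource and subresource.",
	},
	[]string{"verb", "resource", "subresource", "code"},
)

func init() {
	prometheus.MustRegister(requestCounter)
}

// record increments the counter for one finished request.
func record(verb, resource, subresource, code string) {
	requestCounter.WithLabelValues(verb, resource, subresource, code).Inc()
}
```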
2017-05-24 16:23:50 -04:00
Jonathan MacMillan 748ea1109d [Federation] Uniquify the ClusterRole and ClusterRoleBinding names created by . 2017-05-24 12:04:16 -07:00
deads2k ba5a1113e6 don't queue namespaces for deletion if the namespace isn't deleted 2017-05-24 14:47:53 -04:00
Nick Sardo e7ee3913d7 Add subnetworkUrl param to e2e 2017-05-24 10:54:51 -07:00
Nick Sardo 68e7e18698 Set NODE_SUBNETWORK env var in gce.conf 2017-05-24 10:23:08 -07:00
Zihong Zheng 03d08623e8 Fix CheckPodsCondition to print out the correct podName 2017-05-24 10:20:57 -07:00
emaildanwilson c68bf0b260 add ClusterSelector to services 2017-05-24 09:57:04 -07:00
Nick Sardo 435303c647 Add subnetworkURL to GCE provider 2017-05-24 09:35:51 -07:00
Humble Chirammal 55808add37 Don't exit if 'mount.glusterfs -V' results in an error.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-05-24 21:07:58 +05:30