Commit Graph

9498 Commits (98277ff20bcdb3b91a1cfebe53efc024323a037e)

Author SHA1 Message Date
jeff vance 2bd0cd2fd4 fixes issue 56041 2017-11-21 19:35:05 -08:00
Kubernetes Submit Queue 754017bef4
Merge pull request #56105 from balajismaniam/enable-cpuman-only-when-not-skipped
Automatic merge from submit-queue (batch tested with PRs 55340, 55329, 56168, 56170, 56105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Enable cpu manager only if the node e2e test is not skipped.

**What this PR does / why we need it**: This PR enables cpu manager in Kubelet only if the node e2e tests are not skipped. This change fixes the failures seen in https://k8s-testgrid.appspot.com/sig-node-kubelet#kubelet-serial-gce-e2e. 

Fixes #56144
2017-11-21 18:56:39 -08:00
Kubernetes Submit Queue 3bb6eeeb07
Merge pull request #55340 from jiayingz/metrics
Automatic merge from submit-queue (batch tested with PRs 55340, 55329, 56168, 56170, 56105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds device plugin allocation latency metric.

For #53497


**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note

```
2017-11-21 18:56:29 -08:00
Di Xu 344fe56ed3 change DefaultGarbageCollectionPolicy to DeleteDependents for workload controllers 2017-11-22 10:09:44 +08:00
Kubernetes Submit Queue 94a8d81172
Merge pull request #55447 from jingxu97/Nov/podmetric
Automatic merge from submit-queue (batch tested with PRs 55812, 55752, 55447, 55848, 50984). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add Pod-level local ephemeral storage metric in Summary API

This PR adds a pod-level ephemeral storage metric to the Summary API.
Pod-level ephemeral storage usage is the sum of usage by all containers and
local ephemeral volumes, including EmptyDir (if not backed by memory or
hugepages), configMap, and downwardAPI.
Address issue #55978

**Release note**:
```release-note
Add pod-level local ephemeral storage metric in Summary API. Pod-level ephemeral storage reports the total filesystem usage for the containers and emptyDir volumes in the measured Pod.
```
2017-11-21 17:57:34 -08:00
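
For illustration only, a minimal Go sketch of how a pod-level ephemeral-storage figure could be aggregated from per-container and per-volume stats. The types and field names below are hypothetical stand-ins, not the actual Summary API structs.

```go
package summary

// ContainerStats and VolumeStats are illustrative stand-ins for the
// real Summary API types.
type ContainerStats struct {
	RootfsUsedBytes uint64
	LogsUsedBytes   uint64
}

type VolumeStats struct {
	Name      string
	UsedBytes uint64
	Ephemeral bool // e.g. emptyDir (not memory-backed), configMap, downwardAPI
}

// podEphemeralStorageUsed sums container writable-layer and log usage
// plus local ephemeral volume usage for a single pod.
func podEphemeralStorageUsed(containers []ContainerStats, volumes []VolumeStats) uint64 {
	var total uint64
	for _, c := range containers {
		total += c.RootfsUsedBytes + c.LogsUsedBytes
	}
	for _, v := range volumes {
		if v.Ephemeral {
			total += v.UsedBytes
		}
	}
	return total
}
```
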
Kubernetes Submit Queue 4cafc5459b
Merge pull request #56004 from caesarxuchao/admission-v1beta1
Automatic merge from submit-queue (batch tested with PRs 56128, 56004, 56083, 55833, 56042). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Graduate the admission and admissionregistration (webhook part) API to v1beta1

ref: kubernetes/features#492

Most changes are mechanical. Please take a look at the commit message to see if the commit is worth reviewing.

```release-note
Action required:
The `admission/v1alpha1` API has graduated to `v1beta1`. Please delete your existing webhooks before upgrading the cluster, and update your admission webhooks to use the latest API, because the API has backwards incompatible changes.
The webhook registration related part of the `admissionregistration` API has graduated to `v1beta1`. Please delete your existing configurations before upgrading the cluster, and update your configuration file to use the latest API.
```
2017-11-21 17:04:54 -08:00
Kubernetes Submit Queue 2ba1d9916b
Merge pull request #56128 from MrHohn/fix-ingress-before-each
Automatic merge from submit-queue (batch tested with PRs 56128, 56004, 56083, 55833, 56042). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Don't call f.BeforeEach() again in ingress suite

**What this PR does / why we need it**: Calling f.BeforeEach() explicitly in ingress suite is causing test panics. See #56089.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes (hopefully) #56089

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-21 17:04:50 -08:00
Michael Taufen cbebb61450 Kubelet flags take precedence over config from files/ConfigMaps
Changes the Kubelet configuration flag precedence order so that flags
take precedence over config from files/ConfigMaps.

See issue #56171 for more details.

Also modifies e2e node test suite to transform all relevant Kubelet
flags into a config file before starting tests when the
KubeletConfigFile feature gate is true, and turns on the
KubeletConfigFile gate for all e2e node tests. This allows the alpha
dynamic Kubelet config feature to continue to work in tests after
the precedence change.
2017-11-21 16:02:27 -08:00
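
A rough sketch of the precedence change described above, assuming hypothetical config types and flag names: defaults are overlaid by file/ConfigMap config, and explicitly set flags win last.

```go
package config

// KubeletConfig and the field/flag names here are hypothetical stand-ins.
type KubeletConfig struct {
	MaxPods      int
	EvictionHard map[string]string
}

// resolveConfig applies defaults first, then config from a file/ConfigMap,
// and finally overrides from any flags the user actually set.
func resolveConfig(defaults, fromFile, fromFlags *KubeletConfig, flagsSet map[string]bool) *KubeletConfig {
	cfg := *defaults
	if fromFile != nil {
		cfg = *fromFile // file/ConfigMap overrides defaults
	}
	// Flags take precedence last, but only the flags explicitly set.
	if flagsSet["max-pods"] {
		cfg.MaxPods = fromFlags.MaxPods
	}
	if flagsSet["eviction-hard"] {
		cfg.EvictionHard = fromFlags.EvictionHard
	}
	return &cfg
}
```
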
Kubernetes Submit Queue 61792ef482
Merge pull request #55786 from porridge/debug-e2e-pod-wait
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Improve messages around waiting for pods.

**What this PR does / why we need it**:

This is a step towards solving #55785

**Release note**:
```release-note
NONE
```
2017-11-21 15:04:31 -08:00
Kubernetes Submit Queue 34b258ca4b
Merge pull request #55933 from bsalamat/starvation3
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add support to take nominated pods into account during scheduling to avoid starvation of higher priority pods

**What this PR does / why we need it**:
When a pod preempts lower priority pods, the preemptor gets a "nominated node name" annotation. We call such a pod a nominated pod. This PR adds the logic to take such nominated pods into account when scheduling other pods on the same node on which the nominated pod is expected to run. This is needed to avoid starvation of preemptor pods. Otherwise, lower priority pods may fill up the space freed after preemption before the preemptor gets a chance to get scheduled.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #54501

**Special notes for your reviewer**: This PR is built on top of #55109 and includes all the changes there as well.

**Release note**:

```release-note
Add support to take nominated pods into account during scheduling to avoid starvation of higher priority pods.
```

/sig scheduling
ref/ #47604
2017-11-21 15:04:28 -08:00
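
A minimal sketch of the nominated-pods idea with hypothetical types and a CPU-only fit check; the real scheduler logic spans many predicates and resources.

```go
package scheduler

// Pod is a hypothetical simplification of the scheduler's pod info.
type Pod struct {
	Priority        int32
	MilliCPURequest int64
}

// fitsWithNominated checks whether an incoming pod fits on a node after
// reserving space for pods nominated to that node by preemption. Only
// nominated pods of equal or higher priority reserve space, so lower
// priority pods cannot starve the preemptor.
func fitsWithNominated(incoming Pod, nodeFreeMilliCPU int64, nominated []Pod) bool {
	var assumed int64
	for _, p := range nominated {
		if p.Priority >= incoming.Priority {
			assumed += p.MilliCPURequest
		}
	}
	return incoming.MilliCPURequest <= nodeFreeMilliCPU-assumed
}
```
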
Kubernetes Submit Queue 03b7d77be4
Merge pull request #54316 from dashpole/disk_request_eviction
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Take disk requests into account during evictions

fixes #54314

This PR is part of the local storage feature, and it makes the eviction manager take disk requests into account during disk evictions.
This uses the same eviction strategy as we do for memory.
Disk requests are only considered when the LocalStorageCapacityIsolation feature gate is enabled.  This is enforced by adding a check for the feature gate in getRequests().
I have added unit testing to ensure that previous behavior is preserved when the feature gate is disabled.
Most of the changes are testing.  Reviewers should focus on changes in **eviction/helpers.go**

/sig node
/assign @jingxu97  @vishh
2017-11-21 14:31:47 -08:00
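
A hedged sketch of the feature-gate check described above; the pod type and the getRequests signature here are simplified stand-ins for the kubelet's eviction helpers.

```go
package eviction

// Pod is a hypothetical simplification carrying only the requests we need.
type Pod struct {
	MemoryRequestBytes           int64
	EphemeralStorageRequestBytes int64
}

// getRequests returns the request used for eviction ranking. Disk
// (ephemeral-storage) requests only count when the
// LocalStorageCapacityIsolation feature gate is enabled, preserving the
// previous behavior when the gate is off.
func getRequests(p Pod, localStorageCapacityIsolationEnabled bool, resource string) int64 {
	switch resource {
	case "memory":
		return p.MemoryRequestBytes
	case "ephemeral-storage":
		if !localStorageCapacityIsolationEnabled {
			// Gate disabled: ignore disk requests entirely.
			return 0
		}
		return p.EphemeralStorageRequestBytes
	}
	return 0
}
```
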
Jiaying Zhang 048bafdd0b Adds device plugin registration count metric and allocation latency metric. 2017-11-21 13:44:10 -08:00
Chao Xu fcf4f15c89 update-all generated 2017-11-21 13:00:40 -08:00
Chao Xu 7945ae68d0 remove reference to v1alpha1 2017-11-21 13:00:40 -08:00
Kubernetes Submit Queue da96ce00e5
Merge pull request #56117 from jiayingz/deviceplugin-addon-config
Automatic merge from submit-queue (batch tested with PRs 56021, 55843, 55088, 56117, 55859). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Changes nvidia-gpu device plugin addon config settings:

- Runs as system critical pod
- Makes resource limits match its resource requests
- Modifies test/e2e/scheduling/nvidia-gpus.go to cope with the recent
change of running the device plugin as a system addon.
- The resource settings of the addon are based on the test results
from 8 nvidia-tesla-k80 gpus.



**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note

```
2017-11-21 12:16:57 -08:00
Kubernetes Submit Queue 5242f01e8c
Merge pull request #55088 from jiayingz/capacity
Automatic merge from submit-queue (batch tested with PRs 56021, 55843, 55088, 56117, 55859). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Extends deviceplugin to gracefully handle full device plugin lifecycle.

**What this PR does / why we need it**:
- Instead of using the cm.capacity field to communicate device plugin resource capacity,
this PR switches to an explicit cm.GetDevicePluginResourceCapacity() function
that returns device plugin resource capacity as well as any inactive device plugin resources.
Kubelet syncNodeStatus calls this function during its periodic run to update node status
capacity and allocatable. After this call, the device plugin can remove the inactive device
plugin resources from its allDevices field, as the update has already been pushed to the API server.
- Extends device plugin checkpoint data to record registered resources
so that we can finish removing resources even across kubelet restarts.
- Passes sourcesReady from kubelet to device plugin to avoid removing
inactive pods during the grace period of a kubelet restart.
- Extends gpu_device_plugin e2e_node test to verify that scheduled pods
can continue to run even after device plugin deletion and kubelet
restarts.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Together with https://github.com/kubernetes/kubernetes/pull/54488, fixes https://github.com/kubernetes/kubernetes/issues/53395

**Special notes for your reviewer**:

**Release note**:

```release-note
Extends deviceplugin to gracefully handle full device plugin lifecycle.
```
2017-11-21 12:16:54 -08:00
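
An illustrative sketch of the capacity-reporting flow described in the PR above; all names here are hypothetical simplifications of the container manager and device plugin manager interfaces.

```go
package deviceplugin

// ResourceName is a hypothetical stand-in for a device plugin resource name.
type ResourceName string

type devicesPerResource map[ResourceName]int64

// getDevicePluginResourceCapacity returns capacity and allocatable for
// active device plugin resources, plus the names of resources whose plugin
// is no longer active, so the node status updater can zero them out before
// the manager drops them from its bookkeeping.
func getDevicePluginResourceCapacity(
	healthy devicesPerResource,
	all devicesPerResource,
	active map[ResourceName]bool,
) (capacity, allocatable devicesPerResource, inactive []ResourceName) {
	capacity = devicesPerResource{}
	allocatable = devicesPerResource{}
	for name, count := range all {
		if active[name] {
			capacity[name] = count
			allocatable[name] = healthy[name]
			continue
		}
		// Report the stale resource once; the caller may delete it after
		// the node status update has been pushed.
		inactive = append(inactive, name)
	}
	return capacity, allocatable, inactive
}
```
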
Balaji Subramaniam 16e0f12253 Enable cpu manager only if the test is not skipped.
- Also, if KubeReserved is nil, allocate a map.
2017-11-21 10:48:54 -08:00
David Ashpole 8b3bd5ae60 take disk requests into account during evictions 2017-11-21 10:21:30 -08:00
Kubernetes Submit Queue 80e1c7907e
Merge pull request #52322 from davidz627/multizoneWrongZone
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fixes issue where PVCs using the `standard` StorageClass create PDs in the wrong zone in multi-zone GKE clusters

Fixes #50115

Changed GetAllZones to only get zones with nodes that are currently running (renamed to GetAllCurrentZones). Added E2E test to confirm this behavior.
2017-11-21 01:35:01 -08:00
Jiaying Zhang 990113ce60 Extends gpu_device_plugin e2e_node test to verify that scheduled pods
can continue to run even after device plugin deletion and kubelet
restarts.
2017-11-20 23:40:27 -08:00
Bobby (Babak) Salamat eda3df8732 Autogenerated files 2017-11-20 22:17:06 -08:00
Bobby (Babak) Salamat 8a17ae241d Add logic to account for pods that are nominated to run on nodes but are not running yet.
Add tests for the new logic.
2017-11-20 22:17:05 -08:00
Kubernetes Submit Queue 9fe2a62b90
Merge pull request #55338 from dashpole/remove_disk_allocatable
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove Ephemeral Storage Allocatable Evictions

Issue #52336

Rationale and docs change: https://github.com/kubernetes/community/pull/1275

cc @kubernetes/sig-node-pr-reviews 
cc @derekwaynecarr @vishh 
/assign @jingxu97 
/assign @dchen1107
2017-11-20 21:43:24 -08:00
Kubernetes Submit Queue e201d34296
Merge pull request #55845 from vmware/multi-vc-upstream
Automatic merge from submit-queue (batch tested with PRs 55112, 56029, 55740, 56095, 55845). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Updating vsphere cloud provider to support k8s cluster spread across multiple vCenters

**What this PR does / why we need it**:

The vSphere cloud provider in Kubernetes 1.8 was designed to work only if all the nodes of the cluster are in a single datacenter folder. This hard restriction prevents the cluster from spanning different folders/datacenters/vCenters. Users have use cases for spanning the cluster across datacenters/vCenters.

**Which issue(s) this PR fixes** 
Fixes # https://github.com/vmware/kubernetes/issues/255

**Special notes for your reviewer**:
This is a change purely in vsphere cloud provider and no changes in kubernetes core are needed.

**Release note**:
```release-note
With this change
 - Users can create a k8s cluster which spans multiple ESXi clusters, datacenters or even vCenters.
 - vSphere cloud provider (VCP) uses the OS hostname and not the vSphere inventory VM name.
   That means VCP can now handle cases where the user changes the VM inventory name.
 - VCP can handle cases where a VM migrates to another ESXi cluster, datacenter or vCenter.

The only requirement is shared storage: VCP needs shared storage on all Node VMs.
```

Internally tested and reviewed the code.

@tthole, @shaominchen, @abrarshivani
2017-11-20 21:03:50 -08:00
Kubernetes Submit Queue e24b5532a5
Merge pull request #55911 from davidz627/localSSDUUID
Automatic merge from submit-queue (batch tested with PRs 54824, 55911, 55730, 55979, 55961). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add options for mounting SCSI or NVMe local SSD through Block or Filesystem and do all of that with UUID

Fixes: #51431
Fixed version of: #53466

Mount SCSI local SSD by UUID in /mnt/disks/by-uuid/; also allows users to request and mount NVMe disks. Both types of disks will be accessible either through block or filesystem.

I have confirmed that it is no longer crashing when nodes are initialized on GKE.
2017-11-20 20:13:33 -08:00
Zihong Zheng 11d283ebf4 Don't call BeforeEach() again in ingress suite 2017-11-20 20:01:22 -08:00
Jiaying Zhang 4a1a205109 Changes nvidia-gpu device plugin addon config settings:
- Runs as system critical pod
- Makes resource limits match its resource requests
- Modifies test/e2e/scheduling/nvidia-gpus.go to cope with the recent
change of running the device plugin as a system addon.
- The resource settings of the addon are based on the test results
from 8 nvidia-tesla-k80 gpus.
2017-11-20 17:32:53 -08:00
Jing Xu 75ef18c4d3 Add Pod-level local ephemeral storage metric in Summary API
This PR adds a pod-level ephemeral storage metric to the Summary API.
Pod-level ephemeral storage usage is the sum of usage by all containers and
local ephemeral volumes, including EmptyDir (if not backed by memory or
hugepages), configMap, and downwardAPI.
2017-11-20 16:32:38 -08:00
David Zhu e5aec8645d Changed GetAllZones to only get zones with nodes that are currently
running (renamed to GetAllCurrentZones). Added E2E test to confirm this
behavior.

Added node informer to cloud-provider controller to keep track of zones
with k8s nodes in them.
2017-11-20 16:04:18 -08:00
Jun Xiang Tee 25469e9b44 convert testScaledRolloutDeployment e2e test to integration test 2017-11-20 15:36:27 -08:00
Kubernetes Submit Queue 2cbb07a439
Merge pull request #55871 from atlassian/unstructured-converter-no-mutation
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix potential unexpected object mutation that can lead to data races

**What this PR does / why we need it**:
In #51526 I introduced an optimization - do a deep copy instead of to and from JSON roundtrip to convert anything that implements `runtime.Unstructured`. I just discovered that the method that is used there `UnstructuredContent()` in both `Unstructured` and `UnstructuredList` may mutate the original object.
2008750398/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go (L87-L92)
7c10cbc642/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go (L58-L75)
This is problematic because previously (before #51526) there was no mutation and because this is unexpected and may lead to data races - it is bad behaviour to mutate the original object when you just want a copy of it.
This PR fixes the issue.

Without the fix, the tests I've added fail because by the time the comparison is done the original object is no longer the same:
```
converter_test.go:154: Object changed, diff: 
object.Object[items]:
  a: []interface {}{}
  b: <nil>
converter_test.go:154: Object changed, diff: 
object.Object[items]:
  a: []interface {}{map[string]interface {}{"kind":"Pod"}}
  b: <nil>
```

However the underlying issue is not fixed here - `UnstructuredContent()` is brittle and dangerous. Method name does not imply that it mutates data when you call it. And godoc does not mention that either:
509df603b1/staging/src/k8s.io/apimachinery/pkg/runtime/interfaces.go (L233-L249)
Something needs to be done about it IMO.
Also, the `UnstructuredContent()` implementation in `UnstructuredList` does not implement the behaviour required by the godoc in `runtime.Unstructured`.

**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
/assign @sttts
2017-11-20 08:58:37 -08:00
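
A hedged sketch of the deep-copy-instead-of-mutating idea from the PR above; deepCopyJSONValue is a simplified stand-in for apimachinery's JSON deep-copy helper, not the actual converter code.

```go
package convert

// deepCopyJSONValue recursively copies a JSON-like value (maps, slices,
// and scalars) so callers can mutate the result without touching the
// original object.
func deepCopyJSONValue(v interface{}) interface{} {
	switch v := v.(type) {
	case map[string]interface{}:
		out := make(map[string]interface{}, len(v))
		for k, val := range v {
			out[k] = deepCopyJSONValue(val)
		}
		return out
	case []interface{}:
		out := make([]interface{}, len(v))
		for i, val := range v {
			out[i] = deepCopyJSONValue(val)
		}
		return out
	default:
		// Strings, bools, numbers, and nil are effectively immutable.
		return v
	}
}

// toUnstructured returns a copy of content, so converting an object that
// already holds unstructured data never mutates (or races on) its map.
func toUnstructured(content map[string]interface{}) map[string]interface{} {
	return deepCopyJSONValue(content).(map[string]interface{})
}
```
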
Kubernetes Submit Queue d4724d7e43
Merge pull request #55056 from porridge/typo-percentil
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix a typo.

**Release note**:
```release-note
NONE
```
2017-11-20 01:40:50 -08:00
Kubernetes Submit Queue dcdb423ef4
Merge pull request #55186 from bcreane/named-port-egress
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

NetworkPolicy e2e: named port egress test

**What this PR does / why we need it**:
Add an e2e NetworkPolicy test that ensures that an egress rule that specifies a named port properly applies to egress traffic.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52040

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-19 19:57:17 -08:00
andyzhangx 310168c1d2 fix CreateVolume: search mode for Dedicated kind 2017-11-19 11:16:50 +00:00
Mikhail Mazurskiy 3e342077d5
Fix potential unexpected object mutation that can lead to data races 2017-11-19 08:54:25 +11:00
Kubernetes Submit Queue 3679b54b19
Merge pull request #55898 from dashpole/fix_flaky_allocatable
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix Flaky Allocatable Setup Tests

**What this PR does / why we need it**:
Fixes a flaky node e2e serial test.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55830

**Special notes for your reviewer**:
The test was flaking because we were reading the node status before the restarted kubelet had written it.
This PR fixes that by waiting until we see an updated node status (looking at the condition's heartbeat time).
It also fixes an incorrect error message.

**Release note**:
```release-note
NONE
```
2017-11-18 13:13:24 -08:00
Kubernetes Submit Queue 7d1085e122
Merge pull request #54837 from xiangpengzhao/conf-test
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use framework.ConformanceIt for node e2e conformance tests

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #54726 #53909

**Special notes for your reviewer**:
/cc @mml 

**Release note**:

```release-note
NONE
```
2017-11-18 13:13:17 -08:00
Kubernetes Submit Queue 87d45a54bd
Merge pull request #55940 from shyamjvs/reduce-spam-from-resource-gatherer
Automatic merge from submit-queue (batch tested with PRs 55233, 55927, 55903, 54867, 55940). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Control logs verbosity in resource gatherer

PR https://github.com/kubernetes/kubernetes/pull/53541 added some logging in resource gatherer which is a bit too verbose for normal purposes.
As a result, we're seeing a lot of spam in our large cluster performance tests (e.g - https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/8046/build-log.txt)

This PR makes the verbosity of those logs controllable through an option. It's off by default, but we turn it on for the gpu test to preserve existing behavior there.

/cc @jiayingz @mindprince
2017-11-18 12:26:18 -08:00
Kubernetes Submit Queue 941c6aa1db
Merge pull request #55835 from smarterclayton/table_printer_meta
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Table printers and server generation should always copy ListMeta

Tables should be a mapping from lists, so if the incoming object carries list metadata, add it to the table. Paging over server-side tables was broken without this. Adds tests on the generic creater and on the resttest compatibility.


@deads2k
2017-11-18 10:46:35 -08:00
Kubernetes Submit Queue ef3b27cbd4
Merge pull request #55642 from dashpole/disable_cadvisor_disk_for_cri
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Disable container disk metrics when using the CRI stats integration

Issue: https://github.com/kubernetes/kubernetes/issues/51798

As explained in the issue, runtimes which make use of the CRI Stats API still have the performance overhead of collecting those same stats through cAdvisor.
The CRI Stats API has metrics for CPU, Memory, and Disk.  This PR significantly reduces the added overhead due to collecting these stats in both cAdvisor and in the runtime.
This PR disables container disk metrics, which are very expensive to collect.

This PR does not disable node-level disk stats, as the "Raw" container handler does not currently respect ignoring DiskUsageMetrics.
This PR factors out the logic for determining whether or not to use the CRI stats provider into a helper function, as cAdvisor is instantiated before it is passed to the kubelet as a dependency.

cc @kubernetes/sig-node-pr-reviews @derekwaynecarr  
/kind feature
/sig node

/assign @Random-Liu @derekwaynecarr
2017-11-18 10:46:30 -08:00
David Ashpole 527611ee41 remove disk allocatable evictions 2017-11-18 10:34:59 -08:00
Kubernetes Submit Queue 2d972c19bf
Merge pull request #55737 from mindprince/update-nvidia-urls
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Update URLs for nvidia gpu device plugin and nvidia driver installer.

Device plugin is now an addon and its manifest is now in kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains device plugin.

This is needed after https://github.com/kubernetes/kubernetes/pull/54826 and https://github.com/GoogleCloudPlatform/container-engine-accelerators/pull/25

**Release note**:
```release-note
NONE
```

/sig scheduling
2017-11-18 09:36:05 -08:00
Kubernetes Submit Queue 2a711199db
Merge pull request #55705 from krzysztof-jastrzebski/e2e
Automatic merge from submit-queue (batch tested with PRs 54556, 55379, 55881, 55891, 55705). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds node auto-repair e2e tests.

This PR adds node auto-repair e2e tests.
2017-11-18 07:53:48 -08:00
Chao Xu 0b3ee54076 fix webhook e2e test cleanup 2017-11-17 21:02:47 -08:00
Chao Xu 6193360eb5 generated bazel 2017-11-17 21:02:47 -08:00
Chao Xu ea123f82aa Adding the mutating webhook 2017-11-17 21:02:47 -08:00
Kubernetes Submit Queue 2aaab817de
Merge pull request #55420 from cblecker/go1.9.2
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Upgrade to go1.9.2

**What this PR does / why we need it**:
Use go1.9.2, containing a number of bug fixes: https://github.com/golang/go/issues?q=milestone%3AGo1.9.2

**Release note**:
```release-note
Upgrade to go1.9.2
```
2017-11-17 20:24:42 -08:00
Derek Carr db89b46ce7 kubelet summary api test updates 2017-11-17 22:30:49 -05:00
Andy Xie 64a8edfbcf fix network value for stats summary 2017-11-18 10:17:59 +08:00
Christoph Blecker 82737e730c
Upgrade to go1.9.2 2017-11-17 16:27:17 -08:00
rohitjogvmw 79e1da68d2 Updating vSphere Cloud Provider (VCP) to support k8s cluster spread across multiple ESXi clusters, datacenters or even vSphere vCenters
- vsphere.conf (cloud-config) is now needed only on the master node
- VCP uses the OS hostname and not the vSphere inventory name
- VCP is now resilient to VM inventory name changes and VM migration
2017-11-17 14:49:32 -08:00
cheftako dac3c2e168 Admission request/response handling
AdmissionResponse allows a mutating webhook to send the apiserver a JSON patch
to mutate the object.
This reflects the imperative nature of AdmissionReview. It adds
AdmissionRequest and AdmissionResponse in place of status/spec.
The AdmissionResponse then allows the mutating webhook
to send back a JSON patch with the mutated version of the requested
object.
Fixed the integration test to clean up properly.
Switched test image to 1.8v5 to reflect API changes.
Make sure to cache the test framework client for the cleanup test code.
Switched to pointer for patch type.
Factored in @liggitt's feedback.
Factored in @lavalamp's feedback.
2017-11-17 14:22:55 -08:00
Kubernetes Submit Queue 0881a2281e
Merge pull request #55525 from miaoyq/fixes-55505
Automatic merge from submit-queue (batch tested with PRs 55254, 55525, 50108, 54674, 55263). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Correct clean up actions in e2e tests

**What this PR does / why we need it**:
Remove the duplicate "cleanup action" code in `test/e2e/e2e.go`, and use the clean up code in test/e2e/framework instead.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55505 

**Special notes for your reviewer**:

**Release note**:

```release-note

```
2017-11-17 13:34:08 -08:00
David Zhu f780eefd39 Set up alternate mount point for SCSI/NVMe local SSD by UUID in /mnt/disks/by-uuid/, set up ability to have unformatted disk symlinked in /dev/disk/by-uuid/. Added tests. Preserved backwards compatibility. 2017-11-17 10:56:48 -08:00
Aleksandra Malinowska f11c35eb29 Create sig-autoscaling-maintainers alias 2017-11-17 17:57:33 +01:00
Clayton Coleman 8db90f1ee6
API chunking tests should fail if limit is breached
Chunking is now beta and on by default. The kops job is still using
etcd2 which does not support chunking, so flag the test as skipped until
kops is updated to a supported etcd version.
2017-11-17 10:30:35 -05:00
Clayton Coleman d2a62fd422 Table printers and server generation should always copy ListMeta
Tables should be a mapping from lists, so if the incoming object carries
list metadata, add it to the table. Allows paging over server-side tables.
Add tests on the generic creater and on the resttest compatibility.
2017-11-17 10:30:32 -05:00
Shyam Jeedigunta fce28995e1 Control logs verbosity in resource gatherer 2017-11-17 13:03:32 +01:00
Kubernetes Submit Queue 00fe2cfe6c
Merge pull request #54823 from mtaufen/structure-eviction-thresholds
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Lift embedded structure out of eviction-related KubeletConfiguration fields

- Changes the following KubeletConfiguration fields from `string` to
`map[string]string`:
  - `EvictionHard`
  - `EvictionSoft`
  - `EvictionSoftGracePeriod`
  - `EvictionMinimumReclaim`
- Adds flag parsing shims to maintain Kubelet's public flags API, while
enabling structured input in the file API.
- Also removes `kubeletconfig.ConfigurationMap`, which was an ad-hoc flag
parsing shim living in the kubeletconfig API group, and replaces it
with the `MapStringString` shim introduced in this PR. Flag parsing
shims belong in a common place, not in the kubeletconfig API.
I manually audited these to ensure that this wouldn't cause errors
parsing the command line for syntax that would have previously been
error free (`kubeletconfig.ConfigurationMap` was unique in that it
allowed keys to be provided on the CLI without values. I believe this was
done in `flags.ConfigurationMap` to facilitate the `--node-labels` flag,
which rightfully accepts value-free keys, and that this shim was then
just copied to `kubeletconfig`). Fortunately, the affected fields
(`ExperimentalQOSReserved`, `SystemReserved`, and `KubeReserved`) expect
non-empty strings in the values of the map, and as a result passing the
empty string is already an error. Thus requiring keys shouldn't break
anyone's scripts.
- Updates code and tests accordingly.

Regarding eviction operators, directionality is already implicit in the
signal type (for a given signal, the decision to evict will be made when
crossing the threshold from either above or below, never both). There is
no need to expose an operator, such as `<`, in the API. By changing
`EvictionHard` and `EvictionSoft` to `map[string]string`, this PR
simplifies the experience of working with these fields via the
`KubeletConfiguration` type. Again, flags stay the same.

Other things:
- There is another flag parsing shim, `flags.ConfigurationMap`, from the
shared flag utility. The `NodeLabels` field still uses
`flags.ConfigurationMap`. This PR moves the allocation of the
`map[string]string` for the `NodeLabels` field from
`AddKubeletConfigFlags` to the defaulter for the external
`KubeletConfiguration` type. Flags are layered on top of an internal
object that has undergone conversion from a defaulted external object,
which means that previously the mere registration of flags would have
overwritten any previously-defined defaults for `NodeLabels` (fortunately
there were none).

Related: #53833 (lifting embedded structures out of string fields is part of getting this API to beta)

```release-note
The EvictionHard, EvictionSoft, EvictionSoftGracePeriod, EvictionMinimumReclaim, SystemReserved, and KubeReserved fields in the KubeletConfiguration object (kubeletconfig/v1alpha1) are now of type map[string]string, which facilitates writing JSON and YAML files.
```
2017-11-17 02:57:30 -08:00
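
A simplified sketch of a MapStringString-style flag shim (it satisfies the standard library's flag.Value interface via String and Set); this illustrates the approach described above and is not the actual Kubernetes implementation.

```go
package flags

import (
	"fmt"
	"sort"
	"strings"
)

// MapStringString binds a "key=value[,key=value...]" flag to a plain
// map[string]string configuration field.
type MapStringString struct {
	Map *map[string]string
}

// String renders the map in a stable key=value,key=value form.
func (m *MapStringString) String() string {
	if m.Map == nil {
		return ""
	}
	pairs := make([]string, 0, len(*m.Map))
	for k, v := range *m.Map {
		pairs = append(pairs, fmt.Sprintf("%s=%s", k, v))
	}
	sort.Strings(pairs)
	return strings.Join(pairs, ",")
}

// Set parses the flag value into the target map, rejecting value-free keys.
func (m *MapStringString) Set(value string) error {
	if m.Map == nil {
		return fmt.Errorf("no target map provided")
	}
	if *m.Map == nil {
		*m.Map = map[string]string{}
	}
	for _, pair := range strings.Split(value, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return fmt.Errorf("malformed pair %q, expected key=value", pair)
		}
		(*m.Map)[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return nil
}
```

With a shim like this, a flag such as `--kube-reserved=cpu=200m,memory=500Mi` keeps its command-line syntax while the corresponding config-file field is a plain map.
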
xiangpengzhao 6318fcca85 Update BUILD file to include e2e_node tests 2017-11-17 17:28:29 +08:00
xiangpengzhao 025f946784 Update conformance testdata for e2e node conformance tests 2017-11-17 17:28:28 +08:00
xiangpengzhao 7fdea2b0cf Use framework.ConformanceIt for node e2e conformance tests 2017-11-17 17:28:20 +08:00
Kubernetes Submit Queue ebd3d68039
Merge pull request #55831 from Random-Liu/rename-log-dump-env
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`.

For https://github.com/kubernetes/features/issues/286.

Rename `SYSTEMD_SERVICES` to `LOG_DUMP_SYSTEMD_SERVICES`. test-infra disables log dump in our e2e framework, and uses different log-dump logic: https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L480-L497. So the flags we added in https://github.com/kubernetes/kubernetes/pull/55288 will not work in test-infra.

Fortunately, test-infra uses the same script `cluster/log-dump/log-dump.sh`, so we can still configure systemd services by setting the environment variable globally.

The original environment variable name is too general to set globally, so change it to a more specific name.

**Release note**:

```release-note
none
```
2017-11-17 00:18:25 -08:00
Kubernetes Submit Queue 8413f36aa3
Merge pull request #55392 from sttts/sttts-remove-policy-v1alpha1
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove unused pkg/apis/policy/v1alpha1
2017-11-17 00:18:17 -08:00
Michael Taufen 1085b6f730 Lift embedded structure out of eviction-related KubeletConfiguration fields
- Changes the following KubeletConfiguration fields from `string` to
`map[string]string`:
  - `EvictionHard`
  - `EvictionSoft`
  - `EvictionSoftGracePeriod`
  - `EvictionMinimumReclaim`
- Adds flag parsing shims to maintain Kubelet's public flags API, while
enabling structured input in the file API.
- Also removes `kubeletconfig.ConfigurationMap`, which was an ad-hoc flag
parsing shim living in the kubeletconfig API group, and replaces it
with the `MapStringString` shim introduced in this PR. Flag parsing
shims belong in a common place, not in the kubeletconfig API.
I manually audited these to ensure that this wouldn't cause errors
parsing the command line for syntax that would have previously been
error free (`kubeletconfig.ConfigurationMap` was unique in that it
allowed keys to be provided on the CLI without values. I believe this was
done in `flags.ConfigurationMap` to facilitate the `--node-labels` flag,
which rightfully accepts value-free keys, and that this shim was then
just copied to `kubeletconfig`). Fortunately, the affected fields
(`ExperimentalQOSReserved`, `SystemReserved`, and `KubeReserved`) expect
non-empty strings in the values of the map, and as a result passing the
empty string is already an error. Thus requiring keys shouldn't break
anyone's scripts.
- Updates code and tests accordingly.

Regarding eviction operators, directionality is already implicit in the
signal type (for a given signal, the decision to evict will be made when
crossing the threshold from either above or below, never both). There is
no need to expose an operator, such as `<`, in the API. By changing
`EvictionHard` and `EvictionSoft` to `map[string]string`, this PR
simplifies the experience of working with these fields via the
`KubeletConfiguration` type. Again, flags stay the same.

Other things:
- There is another flag parsing shim, `flags.ConfigurationMap`, from the
shared flag utility. The `NodeLabels` field still uses
`flags.ConfigurationMap`. This PR moves the allocation of the
`map[string]string` for the `NodeLabels` field from
`AddKubeletConfigFlags` to the defaulter for the external
`KubeletConfiguration` type. Flags are layered on top of an internal
object that has undergone conversion from a defaulted external object,
which means that previously the mere registration of flags would have
overwritten any previously-defined defaults for `NodeLabels` (fortunately
there were none).
2017-11-16 18:35:13 -08:00
Yanqiang Miao 16aa5820fb Correct clean up actions in e2e tests 2017-11-17 08:46:21 +08:00
David Ashpole 8f3e2f315e fix flaky allocatable test 2017-11-16 11:16:58 -08:00
Connor Doyle 80ac705ef3 Removed opaque integer resources. 2017-11-16 10:47:40 -08:00
Krzysztof Jastrzebski a5446bedf9 Adds node auto-repair e2e tests. 2017-11-16 18:57:25 +01:00
Mike Danese 0117006a54
Revert "Add options for mounting SCSI or NVMe local SSD though Block or Filesystem and do all of that with UUID" 2017-11-16 07:51:38 -08:00
Kubernetes Submit Queue ff5cea4b43
Merge pull request #55868 from shyamjvs/kubemark-resource-gatherer-fix
Automatic merge from submit-queue (batch tested with PRs 55868, 55393, 55152, 55849). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Set resource-gathering and probe-duration period for kubemark

Ref https://github.com/kubernetes/kubernetes/issues/55818#issuecomment-344888480

/cc @porridge 
fyi - @jiayingz
2017-11-16 06:32:16 -08:00
Nikita Komarov c77923d0fe LimitRange e2e test improved. 2017-11-16 16:46:41 +03:00
Kubernetes Submit Queue fbcb199fe5
Merge pull request #55865 from krzysztof-jastrzebski/e2e9
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Stop executing Pod Priority and Preemption e2e tests on GKE.
2017-11-16 04:38:26 -08:00
Kubernetes Submit Queue c2dd10e263
Merge pull request #51905 from jsafrane/mount-propagation-test
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add e2e test for mount propagation

**What this PR does / why we need it**:
This adds e2e test for mount propagation introduced by #46444.

@kubernetes/sig-node-pr-reviews 
/sig node

**Release note**:
```release-note
None
```
2017-11-16 03:57:30 -08:00
Kubernetes Submit Queue 7db195cc0f
Merge pull request #55697 from fisherxu/e2efix
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix failed to access service of e2e test

**What this PR does / why we need it**:
We should create service before deployments as said in the issue.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [#55696](https://github.com/kubernetes/kubernetes/issues/55696)

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-16 03:57:22 -08:00
Kubernetes Submit Queue 779105673a
Merge pull request #55188 from mindprince/accelerator-monitoring
Automatic merge from submit-queue (batch tested with PRs 55798, 49579, 54862, 55188, 51990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add monitoring support for hardware accelerators

Currently only NVIDIA GPU monitoring is implemented.

Feature repo issue: https://github.com/kubernetes/features/issues/369
cAdvisor PR: https://github.com/google/cadvisor/pull/1762

/kind feature
/sig node
/sig instrumentation
/area hw-accelerators

**Release note**:
```release-note
Kubelet now exposes metrics for NVIDIA GPUs attached to the containers.
```
2017-11-16 03:09:21 -08:00
Kubernetes Submit Queue f9ce9d9da6
Merge pull request #55798 from shyamjvs/exclude-for-scale-tests-tag
Automatic merge from submit-queue (batch tested with PRs 55798, 49579, 54862, 55188, 51990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add special tag for disabling ESIPP and HPA-related tests on large clusters

As discussed offline, this would help improve accountability for tests needing some love from a scalability perspective.

/cc @porridge 
fyi - @MrHohn @MaciekPytel @mwielgus @crassirostris 

@kubernetes/sig-scalability-misc
2017-11-16 03:09:07 -08:00
Shyam Jeedigunta 1ae56bbe2b Set resource-gathering and probe-duration period for kubemark 2017-11-16 12:02:56 +01:00
Krzysztof Jastrzebski a8f8e16694 Stop executing Pod Priority and Preemption e2e tests on GKE. 2017-11-16 11:27:48 +01:00
Kubernetes Submit Queue ee2cf0bb5d
Merge pull request #55782 from x13n/addon-manager
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Bump addon manager version used to 6.5

**What this PR does / why we need it**:
Bump addon manager version to use #55466. This adds leader election-like mechanism to addon manager.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

**Special notes for your reviewer**:
Release note copied from #55466. This is intended to be cherrypicked into 1.7 and 1.8 branches.

**Release note**:

```release-note
Addon manager supports HA masters.
```
2017-11-16 00:55:58 -08:00
Kubernetes Submit Queue d73157ba97
Merge pull request #55444 from msau42/multi-e2e
Automatic merge from submit-queue (batch tested with PRs 55682, 55444, 55456, 55717, 55131). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add sig storage label to multizone static PV test

**What this PR does / why we need it**:
Adds sig storage tag to e2e test so it shows up on our testgrid dashboard

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-15 23:06:10 -08:00
Kubernetes Submit Queue b3a1867529
Merge pull request #55764 from Random-Liu/wait-server-resources
Automatic merge from submit-queue (batch tested with PRs 55764, 55683, 55468, 54409, 55546). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Wait for server resources.

For https://github.com/kubernetes/kubernetes/issues/55768.

In e2e test for containerd, I sometimes see the following fail (e.g. https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-cri-containerd-e2e-gci-gce/178):
```
Nov 15 02:40:31.291: Couldn't delete ns: "e2e-tests-container-probe-dcvlw": unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource (&discovery.ErrGroupDiscoveryFailed{Groups:map[schema.GroupVersion]error{schema.GroupVersion{Group:"metrics.k8s.io", Version:"v1beta1"}:(*errors.StatusError)(0xc420bfd170)}})
```
Usually, only the first few tests fail with this error. The error seems to be returned at this line https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/util.go#L1170.

@cheftako @caesarxuchao Does this change make sense to you? Or should I wait for something else to become ready?
/cc @kubernetes/sig-api-machinery-pr-reviews 

**Release note**:

```release-note
none
```
2017-11-15 22:15:52 -08:00
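
A hedged sketch of the wait-for-server-resources idea; listServerResources is a hypothetical stand-in for a discovery-client call, and the real e2e framework uses its own polling helpers.

```go
package framework

import "time"

// waitForServerResources polls the discovery endpoint until a complete
// resource list can be retrieved, so early tests do not fail while
// aggregated APIs (e.g. metrics.k8s.io) are still coming up.
func waitForServerResources(listServerResources func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		if lastErr = listServerResources(); lastErr == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return lastErr
}
```
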
Kubernetes Submit Queue c3ed0f2663
Merge pull request #53466 from davidz627/localSSDUUID
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add options for mounting SCSI or NVMe local SSD through Block or Filesystem and do all of that with UUID

Fixes: #51431

Mount SCSI local SSD by UUID in /mnt/disks/by-uuid/; also allows users to request and mount NVMe disks. Both types of disks will be accessible either through block or filesystem.

To see code in progress for NVMe and block support see working branch: https://github.com/davidz627/kubernetes/tree/localExt
2017-11-15 18:25:30 -08:00
Lantao Liu e504e5a316 Wait for server resources. 2017-11-16 01:38:35 +00:00
Lantao Liu 0085e2208d Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`. 2017-11-16 00:41:27 +00:00
Kubernetes Submit Queue ded83878c1
Merge pull request #55820 from shyamjvs/restore-resource-gatherer-pollperiod-default
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Restore default polling period of resource-gatherer

Fixes https://github.com/kubernetes/kubernetes/issues/55818

/cc @jiayingz @mindprince
2017-11-15 16:03:06 -08:00
Shyam Jeedigunta a350825612 Restore default polling period of resource-gatherer 2017-11-15 23:15:28 +01:00
Kubernetes Submit Queue cbdd18eee9
Merge pull request #55484 from bskiba/multizone-size-e2e
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support multizone clusters in GCE and GKE e2e tests

**What this PR does / why we need it**:
For multi-zone clusters we can't rely on the zone parameter when fetching information on Instance Groups. Instead, we first fetch the zone the group is in and use it in subsequent calls.

Note that the current version of the code does not work for multizone clusters at all.

**Release note**:
```
NONE
```
2017-11-15 12:58:11 -08:00
Kubernetes Submit Queue a15fde49b4
Merge pull request #55639 from yguo0905/cloud-init
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Node e2e: add a cloud-init script to disable live-restore in node e2e test

This cloud-init config will be used in tests in https://github.com/kubernetes/test-infra.

**Release note**:

```
None
```

/assign @yujuhong 
/cc @abgworrall @dchen1107
2017-11-15 12:03:44 -08:00
Kubernetes Submit Queue 9058769dad
Merge pull request #51955 from danwinship/update-networkpolicy-storage
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Swap NetworkPolicy storage to networking.k8s.io/v1

Finishes(?) the NetworkPolicy v1 migration.
Fixes #50604

The integration test passes. I copied the test-update-storage-objects.sh change from #50327 and have no idea if it's right.

/cc @sttts @caesarxuchao @thockin

**Release note**:
```release-note
```
2017-11-15 12:03:40 -08:00
Kubernetes Submit Queue c339a54b53
Merge pull request #55659 from CaoShuFeng/duplicated_import
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

remove duplicated import

**Release note**:
```release-note
NONE
```
2017-11-15 09:30:40 -08:00
Kubernetes Submit Queue b623026d2a
Merge pull request #52421 from WIZARD-CXY/fixpredicate
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add hostip and protocol to the hostport predicates

**What this PR does / why we need it**:
This PR adds "hostIP and protocol" to scheduler hostport predicate procedure
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fix #51950 
**Special notes for your reviewer**:
- [x] basic implementation, need review
- [x] e2e test
- [x] update doc (will be done in a separate PR)

**Release note**:

```release-note
Add hostIP and protocol to the original hostport predicate procedure in the scheduler.
```
2017-11-15 09:30:36 -08:00
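
An illustrative sketch of the extended conflict check, with hypothetical types: two host ports only collide when port, protocol, and hostIP all overlap (a wildcard address overlapping any specific IP).

```go
package predicates

// hostPort is a hypothetical simplification of the predicate's input.
type hostPort struct {
	IP       string // "" or "0.0.0.0" means all addresses
	Port     int32
	Protocol string // "TCP" or "UDP"
}

// ipsOverlap reports whether two host IPs can refer to the same address.
func ipsOverlap(a, b string) bool {
	wildcard := func(ip string) bool { return ip == "" || ip == "0.0.0.0" }
	return a == b || wildcard(a) || wildcard(b)
}

// conflicts reports whether the wanted host port collides with any port
// already in use on the node, considering port, protocol, and hostIP.
func conflicts(want hostPort, used []hostPort) bool {
	for _, u := range used {
		if want.Port == u.Port && want.Protocol == u.Protocol && ipsOverlap(want.IP, u.IP) {
			return true
		}
	}
	return false
}
```
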
Shyam Jeedigunta d08a14819c Add special tag for disabling ESIPP and HPA-related tests on large clusters 2017-11-15 14:35:44 +01:00
Daniel Kłobuszewski c2ec85e064 Bump addon manager version used to 6.5 2017-11-15 11:34:46 +01:00
Marcin Owsiany 9b6590e7ae Improve messages around waiting for pods. 2017-11-15 11:29:52 +01:00
Kubernetes Submit Queue ebe8ea73fd
Merge pull request #54463 from saad-ali/volumeAttachmentAPI
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Introduce new `VolumeAttachment` API Object

**What this PR does / why we need it**:

Introduce a new `VolumeAttachment` API Object. This object will be used by the CSI volume plugin to enable external attachers (see design [here](https://github.com/kubernetes/community/pull/1258)). In the future, existing volume plugins can be refactored to use this object as well.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  Part of issue https://github.com/kubernetes/features/issues/178

**Special notes for your reviewer**:
None

**Release note**:

```release-note
NONE
```
2017-11-14 22:05:27 -08:00
Yang Guo 7eb7cfe3ef Add a cloud-init script to disable live-restore 2017-11-14 21:40:13 -08:00
David Zhu 028258244c Set up alternate mount point for SCSI/NVMe local SSD by UUID in /mnt/disks/by-uuid/, set up ability to have unformatted disk symlinked in /dev/disk/by-uuid/. Added tests. Preserved backwards compatibility. 2017-11-14 17:14:41 -08:00
Saad Ali d96c105d71 Introduce storage v1alpha1 and VolumeAttachment
Introduce the v1alpha1 version to the Kubernetes storage API. And add a
new VolumeAttachment object to that version. This object will initially
be used only by the new CSI Volume Plugin. Eventually existing volume
plugins can be refactored to use it too.
2017-11-14 17:08:48 -08:00
Rohit Agarwal 3ac94a57eb Update URLs for nvidia gpu device plugin and nvidia driver installer.
Device plugin is now an addon and its manifest is now in
kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains
device plugin.
2017-11-14 15:31:22 -08:00
Dan Winship d2a3af9b58 Swap NetworkPolicy storage to networking.k8s.io/v1 2017-11-14 15:15:01 -05:00
Janet Kuo 6432422307 Webhook e2e test: fail open and fail closed 2017-11-14 12:11:46 -08:00
David Ashpole 220edbc6e3 disable container disk metrics when using the CRI stats integration 2017-11-14 11:43:08 -08:00
Dane LeBlanc 2827b7ffb7 Add brackets around IPv6 addrs in e2e test IP:port endpoints
There are several locations in the e2e tests where endpoints of the
form IP:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.

Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.

fixes #52746
2017-11-14 10:55:09 -05:00
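
A small self-contained example of the bracketing rule using Go's standard library: net.JoinHostPort adds the brackets for IPv6 literals, and the printed curl commands include -g as the commit describes.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, host := range []string{"10.0.0.1", "fd00::1"} {
		// JoinHostPort brackets IPv6 literals so the address colons are
		// not confused with the host:port separator.
		endpoint := net.JoinHostPort(host, "8080")
		// With curl, IPv6 literals also need -g so the brackets are not
		// interpreted as glob characters.
		fmt.Printf("curl -g http://%s/healthz\n", endpoint)
	}
}
```
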
Kubernetes Submit Queue 48d062722b
Merge pull request #55605 from bskiba/e2e-fix
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Regional support in Cluster Autoscaler e2e tests.

**What this PR does / why we need it**:

When calling the GKE API and gcloud in autoscaling e2e tests, take into account that clusters can be regional.
This currently uses MultiZonal as an indicator that the cluster is regional, which is suboptimal, but considering that our tests do not work with multizonal clusters at the moment, there is no regression. This should be changed once there is an indicator available that the cluster is regional.

**Release note**:
```
NONE
```
2017-11-14 05:13:03 -08:00
Dr. Stefan Schimanski 3ba9d1d0e0 Remove unused pkg/apis/policy/v1alpha1 2017-11-14 13:47:29 +01:00
fisherxu fe033a4714 fix failed to access service of e2e test 2017-11-14 19:21:59 +08:00
Cao Shufeng 86968e44d0 remove duplicated import 2017-11-14 17:18:17 +08:00
Jan Safranek 4e9068b135 Review fixes 2017-11-14 10:16:30 +01:00
Jan Safranek a59af81e5e Add e2e test for mount propagation 2017-11-14 10:16:30 +01:00
Kubernetes Submit Queue ea66c00522
Merge pull request #54509 from vmware/node_poweroff_test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E test to verify pod failover during node power-off

**What this PR does / why we need it**:

This PR adds a test that verifies volume status after the node where the pod was provisioned is powered off and the pod fails over to a different node.

The test performs the following tasks:

1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass

**Which issue this PR fixes**:

Fixes https://github.com/vmware/kubernetes/issues/272

**Special notes for your reviewer**:

Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
  -get
                go get -u kubetest if old or not installed (default true)
  -old duration
                Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436     409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
 
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161     409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
 
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
 
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
 
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb 

**Release note**:

```release-note
NONE
```
2017-11-14 00:56:26 -08:00
Kubernetes Submit Queue c1cd70ad16
Merge pull request #55533 from janetkuo/hook-e2e-multi
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Webhook e2e test: PUT and PATCH operations

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Ref: https://github.com/kubernetes/features/issues/492

**Special notes for your reviewer**: ~depends on #55127~ (merged)
@kubernetes/sig-api-machinery-api-reviews 

**Release note**:

```release-note
NONE
```
2017-11-14 00:10:01 -08:00
Kubernetes Submit Queue 3479549a62
Merge pull request #55532 from ianchakeres/validate-greater-than-zero-pv-pvc
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Validate that PV capacity and PVC capacity requests are positive, greater than 0

**What this PR does / why we need it**:  Zero (0) capacity PVs cause related pods to fail, and zero (0) capacity PVCs create zero (0) capacity PVs.

**Which issue(s) this PR fixes** :
Fixes #55553

**Special notes for your reviewer**:

**Release note**:

```release-note
Validate positive capacity for PVs and PVCs.
```
2017-11-14 00:09:48 -08:00
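A hedged sketch of the kind of check this PR adds, using apimachinery's resource.Quantity: a PV or PVC capacity must parse and be strictly greater than zero. The actual validation lives in the API validation code and its error messages may differ.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validateCapacity rejects zero or negative storage quantities, mirroring the
// intent of the change: capacities must be positive, greater than 0.
func validateCapacity(value string) error {
	q, err := resource.ParseQuantity(value)
	if err != nil {
		return fmt.Errorf("invalid quantity %q: %v", value, err)
	}
	if q.Sign() <= 0 {
		return fmt.Errorf("capacity %q must be greater than 0", value)
	}
	return nil
}

func main() {
	for _, v := range []string{"10Gi", "0", "-1Gi"} {
		fmt.Println(v, "->", validateCapacity(v))
	}
}
```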
Kubernetes Submit Queue 51c8e9294b
Merge pull request #55009 from bradtopol/addhosteventsemptyconform2
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add empty dir and host related conformance annotations

Signed-off-by: Brad Topol <btopol@us.ibm.com>

Add empty dir and host related conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds pod related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the empty dir and host based e2e conformance tests.

Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.



**Release note**:

```release-note NONE

```
2017-11-14 00:09:45 -08:00
Shaomin Chen 3db4f2b843 E2E test to verify pod failover during node power-off 2017-11-13 21:52:54 -08:00
Kubernetes Submit Queue 710523ed7d
Merge pull request #53541 from jiayingz/e2e-stats
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of installer and device plugin containers.
To support this, export certain functions and fields in framework/resource_usage_gatherer.go so that it can be used in any e2e test to track any specified pod's resource usage with the specified probe interval and duration.



**What this PR does / why we need it**:
We need to quantify the resource usage of the device plugin DaemonSet to make sure it can run reliably on nodes with GPUs.
We also want to measure gpu driver installer resource usage to track any unexpected resource consumption during driver installation.
For the later part, see a related issue https://github.com/kubernetes/features/issues/368.

Example resource summary output:
Oct  6 12:35:07.289: INFO: Printing summary: ResourceUsageSummary
Oct  6 12:35:07.289: INFO: ResourceUsageSummary JSON
{
  "100": [
    {
      "Name": "nvidia-device-plugin-6kqxp/nvidia-device-plugin",
      "Cpu": 0.000507167,
      "Mem": 2134016
    },
    {
      "Name": "nvidia-device-plugin-6kqxp/nvidia-driver-installer",
      "Cpu": 1.915508718,
      "Mem": 663330816
    },
    {
      "Name": "nvidia-device-plugin-l28zc/nvidia-device-plugin",
      "Cpu": 0.000836256,
      "Mem": 2211840
    },
    {
      "Name": "nvidia-device-plugin-l28zc/nvidia-driver-installer",
      "Cpu": 1.916886293,
      "Mem": 691449856
    },
    {
      "Name": "nvidia-device-plugin-xb4vh/nvidia-device-plugin",
      "Cpu": 0.000515103,
      "Mem": 2265088
    },
    {
      "Name": "nvidia-device-plugin-xb4vh/nvidia-driver-installer",
      "Cpu": 1.909435982,
      "Mem": 832430080
    }
  ],
  "50": [
    {
...

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-11-13 21:51:16 -08:00
xiangpengzhao eddd9a208f Update conformance testdata for downward api test 2017-11-14 09:53:31 +08:00
xiangpengzhao 4ac61e1d12 Combine downward api e2e test cases. 2017-11-14 09:51:35 +08:00
Janet Kuo 7ffaa06ab3 Webhook e2e test: PUT and PATCH operations 2017-11-13 16:50:51 -08:00
Kubernetes Submit Queue cba5aa0590
Merge pull request #55127 from caesarxuchao/webhook-do-conversion
Automatic merge from submit-queue (batch tested with PRs 54005, 55127, 53850, 55486, 53440). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Validation webhook plugin converts objects to the external version before sending to webhooks

**What this PR does / why we need it**:


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

https://github.com/kubernetes/features/issues/492

**Special notes for your reviewer**:

**Release note**:

```release-note
The apiserver sends external versioned object to the admission webhooks now. Please update the webhooks to expect admissionReview.spec.object.raw to be serialized external versions of objects. 
```
2017-11-13 16:45:22 -08:00
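A rough sketch of the conversion step described in the release note above, assuming the standard scheme/codec machinery: runtime.Scheme.ConvertToVersion produces the externally versioned object, which is then encoded into the bytes placed in admissionReview.spec.object.raw. The real plugin converts the internal object it receives from the admission chain; this standalone example starts from a versioned Pod for simplicity.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "demo"}}

	// Convert to the external (versioned) form before serializing for the webhook.
	external, err := scheme.Scheme.ConvertToVersion(pod, corev1.SchemeGroupVersion)
	if err != nil {
		panic(err)
	}

	// Encode the versioned object; these bytes would go into spec.object.raw.
	raw, err := runtime.Encode(scheme.Codecs.LegacyCodec(corev1.SchemeGroupVersion), external)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
```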
Jiaying Zhang ae36f8ee95 Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, export certain functions and fields in
framework/resource_usage_gatherer.go so that it can be used in any
e2e test to track any specified pod resource usage with the specified
probe interval and duration.
2017-11-13 16:24:41 -08:00
Kubernetes Submit Queue beefab8a8e
Merge pull request #54825 from bradtopol/adddownwarddockerconf
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add downward api and docker container conformance annotations

Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add downward api and docker container conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds downward api and docker container related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the downward api and docker container based e2e conformance tests.

Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:

```release-note NONE
```
2017-11-13 14:47:08 -08:00
Kubernetes Submit Queue 6e2e5bac40
Merge pull request #54946 from bradtopol/adddnscrdcmprobeconform
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add dns, configmap, and custom resource definition conformance

annotations.

Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add dns, configmap, and custom resource definition related conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds pod related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the dns, configmap, and custom resource definition based e2e conformance tests.
Special notes for your reviewer:

Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.




**Release note**:

```release-note NONE

```
2017-11-13 14:47:05 -08:00
Chao Xu ab053a224d let validation webhook convert objects to the external version before sending them 2017-11-13 12:55:33 -08:00
Kubernetes Submit Queue 74ec8d0fe8
Merge pull request #55288 from Random-Liu/e2e-log-for-alternative-runtime
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support collecting log for alternative container runtime in e2e test.

Fixes https://github.com/kubernetes/kubernetes/issues/55629.

Add support to collect logs for alternative container runtime in e2e.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```

```release-note
none
```

/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
2017-11-13 12:32:24 -08:00
Ian Chakeres 98e2c8cdee Validate that PV capacity and PVC capacity requests are greater than zero 2017-11-13 08:57:01 -08:00
Beata Skiba 3431411e79 Regional support in CA tests.
When calling the GKE API and gcloud, take into account
that clusters can be regional.
This currently uses MultiZonal as an indicator that
the cluster is regional, which is suboptimal, but considering
that our tests do not work with multizonal clusters
at the moment, there is no regression.
This should be changed once there is an indicator available
that the cluster is regional.
2017-11-13 16:06:41 +01:00
Kubernetes Submit Queue 52e712913d
Merge pull request #55478 from kawych/e2e
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use HPA permissions to read custom metrics in Custom Metrics e2e test

**What this PR does / why we need it**:
This PR fixes the e2e test for Stackdriver Custom Metrics on GKE. With PR https://github.com/kubernetes/kubernetes/pull/55387 it will also be necessary for the analogous test on GCE.

**Release note**:
```release-note
NONE
```
2017-11-13 06:09:27 -08:00
Kubernetes Submit Queue fd3de96be6
Merge pull request #55594 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix typo in e2e test name.
2017-11-13 05:20:34 -08:00
Kubernetes Submit Queue 41fe3ed5bc
Merge pull request #54405 from resouer/clean-docker-dep
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

[Part 1] Remove docker dep in kubelet startup

**What this PR does / why we need it**:

Remove the dependency on docker during kubelet startup.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

Part 1 of #54090 

**Special notes for your reviewer**:
Changes include:

1. Move docker client initialization into dockershim pkg.
2. Pass a docker `ClientConfig` from kubelet to dockershim
3. Pass parameters needed by `FakeDockerClient` thru `ClientConfig` to dockershim

(TODO, the second part) Make dockershim tolerant of dockerd being down; otherwise it will still fail kubelet.

Please note that after this PR, kubelet will still fail if dockerd is down. This will be fixed in the subsequent PR by making dockershim tolerate dockerd failure (initializing the docker client in a separate goroutine) and refactoring cgroup and log driver detection.

**Release note**:

```release-note
Remove docker dependency during kubelet start up 
```
2017-11-13 03:59:53 -08:00
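A purely hypothetical sketch of the shape of this refactor: kubelet hands dockershim a small config describing how to reach dockerd, and dockershim owns client construction, so kubelet startup no longer depends on dockerd being up. The type and function names below are illustrative only, not the real dockershim API.

```go
// Hypothetical sketch: kubelet passes connection parameters down instead of a
// live docker client; dockershim creates (and can later retry) the client.
package main

import (
	"fmt"
	"time"
)

// ClientConfig is what kubelet would pass to dockershim instead of a client.
type ClientConfig struct {
	DockerEndpoint    string
	RequestTimeout    time.Duration
	ImagePullDeadline time.Duration
}

// dockerService stands in for dockershim; it connects lazily from the config.
type dockerService struct {
	cfg ClientConfig
}

func newDockerService(cfg ClientConfig) *dockerService {
	// Connecting here (or in a background goroutine) keeps kubelet startup
	// from failing when dockerd is not yet available.
	return &dockerService{cfg: cfg}
}

func main() {
	svc := newDockerService(ClientConfig{
		DockerEndpoint: "unix:///var/run/docker.sock",
		RequestTimeout: 2 * time.Minute,
	})
	fmt.Printf("dockershim configured for %s\n", svc.cfg.DockerEndpoint)
}
```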
Krzysztof Jastrzebski ee5e6d85de Fix typo in e2e test name. 2017-11-13 10:06:35 +01:00
Kubernetes Submit Queue 91615e4fd9
Merge pull request #49258 from xiangpengzhao/fix-dup-port-panic
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Check dup NodePort with protocols when update services

**What this PR does / why we need it**:
As the title says.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48579 fixes: #54898 fixes: #55327

**Special notes for your reviewer**:
/assign @freehan 
/cc @cblecker 

**Release note**:

```release-note
NONE
```
2017-11-12 22:53:38 -08:00
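A self-contained sketch of the duplicate check this PR is about: a NodePort only collides if both the port number and the protocol match, so the lookup key must include the protocol. The names below are illustrative, not the actual service registry code.

```go
package main

import "fmt"

type protocol string

// portKey keys NodePort usage by (port, protocol), not by port alone.
type portKey struct {
	port  int32
	proto protocol
}

// findDuplicateNodePorts returns the (port, protocol) pairs that appear more
// than once; the same port number with different protocols is allowed.
func findDuplicateNodePorts(ports []portKey) []portKey {
	seen := map[portKey]bool{}
	var dups []portKey
	for _, p := range ports {
		if seen[p] {
			dups = append(dups, p)
			continue
		}
		seen[p] = true
	}
	return dups
}

func main() {
	ports := []portKey{
		{30080, "TCP"},
		{30080, "UDP"}, // same number, different protocol: not a duplicate
		{30080, "TCP"}, // real duplicate
	}
	fmt.Println(findDuplicateNodePorts(ports))
}
```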
Kubernetes Submit Queue e93819049d
Merge pull request #54889 from lavalamp/wh-api
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix webhook API to also support URLs

ref: https://github.com/kubernetes/features/issues/492

```release-note
The dynamic admission webhook now supports a URL in addition to a service reference, to accommodate out-of-cluster webhooks.
```
2017-11-11 23:01:39 -08:00
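To illustrate the two addressing modes in the release note above, here is a hedged sketch of a webhook client configuration that accepts either a URL or an in-cluster service reference; the field names approximate the admissionregistration API but are reproduced here only for illustration.

```go
package main

import "fmt"

// ServiceReference points at an in-cluster service fronting the webhook.
type ServiceReference struct {
	Namespace string
	Name      string
	Path      *string
}

// ClientConfig mirrors the idea in the admission webhook API: exactly one of
// URL or Service should be set.
type ClientConfig struct {
	URL      *string           // out-of-cluster webhook endpoint
	Service  *ServiceReference // in-cluster webhook endpoint
	CABundle []byte            // CA bundle to trust when calling the webhook
}

// endpoint resolves the address the apiserver would dial for this webhook.
func endpoint(c ClientConfig) string {
	if c.URL != nil {
		return *c.URL
	}
	if c.Service != nil {
		return fmt.Sprintf("https://%s.%s.svc", c.Service.Name, c.Service.Namespace)
	}
	return ""
}

func main() {
	u := "https://webhooks.example.com/validate"
	fmt.Println(endpoint(ClientConfig{URL: &u}))
	fmt.Println(endpoint(ClientConfig{Service: &ServiceReference{Namespace: "default", Name: "hook"}}))
}
```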
Daniel Smith a0cb2ce697 Add URL beside service 2017-11-11 16:09:34 -08:00
Kubernetes Submit Queue 858f3cbf59
Merge pull request #55503 from mml/conformance
Automatic merge from submit-queue (batch tested with PRs 52461, 55503). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

A few improvements to the conformance regtest

- Set OWNERS files to disallow parent approvers (doesn't work yet, but should be live next week.)
- Document how to fix failing test.
- Add a better error message.

```release-note
NONE
```
2017-11-11 15:21:32 -08:00
Kubernetes Submit Queue dbcab6d744
Merge pull request #55510 from yguo0905/use-whitelisted-test-image
Automatic merge from submit-queue (batch tested with PRs 54460, 55258, 54858, 55506, 55510). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use whitelisted test image in Docker live-restore node e2e test

**What this PR does / why we need it**:

This PR fixes this test:

`[k8s.io] Docker features [Feature:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts`

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2enode-cosbeta-k8sdev-serial/1199#k8sio-docker-features-featuredocker-when-live-restore-is-enabled-serial-slow-disruptive-containers-should-not-be-disrupted-when-the-daemon-shuts-down-and-restarts

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```
None
```

/assign @yujuhong
2017-11-11 10:45:30 -08:00
Kubernetes Submit Queue e52e79342c
Merge pull request #54727 from caesarxuchao/namespaceSelector
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add namespace selector to admission webhook

Implementing the [design](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/admission-webhook-bootstrapping.md).

* Added the NamespaceSelector field to the webhook configuration API
* Let the webhook plugin respect the NamespaceSelector
* Added unit test and e2e test

cc @kubernetes/sig-api-machinery-api-reviews 

```release-note
Added namespaceSelector to externalAdmissionWebhook configuration to allow applying webhooks only to objects in the namespaces that have matching labels.
```
2017-11-11 07:50:32 -08:00
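A sketch of the namespaceSelector check described above, under the assumption that the plugin converts the configured selector with metav1.LabelSelectorAsSelector and matches it against the namespace's labels; the surrounding plumbing in the real plugin is omitted.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// webhookApplies reports whether a webhook configured with the given
// namespaceSelector should run for an object in a namespace with these labels.
func webhookApplies(nsSelector *metav1.LabelSelector, nsLabels map[string]string) (bool, error) {
	selector, err := metav1.LabelSelectorAsSelector(nsSelector)
	if err != nil {
		return false, err
	}
	return selector.Matches(labels.Set(nsLabels)), nil
}

func main() {
	sel := &metav1.LabelSelector{MatchLabels: map[string]string{"webhooks": "enabled"}}
	ok, _ := webhookApplies(sel, map[string]string{"webhooks": "enabled", "team": "storage"})
	fmt.Println(ok) // true
	ok, _ = webhookApplies(sel, map[string]string{"team": "storage"})
	fmt.Println(ok) // false
}
```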
Kubernetes Submit Queue fe599c7dcf
Merge pull request #54992 from porridge/perf-timing
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add performance test phase timing export.

**What this PR does / why we need it**:

First step towards allowing us to get a quick overview of test length
via perf-dash.k8s.io.

**Release note**:
```release-note
NONE
```

@kubernetes/sig-scalability-feature-requests
2017-11-11 01:39:30 -08:00
Kubernetes Submit Queue fdea39d158
Merge pull request #54386 from yanxuean/testfmt
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

missing the format string

Signed-off-by: yanxuean <yan.xuean@zte.com.cn>

**What this PR does / why we need it**:
missing the format string
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```
NONE
```
2017-11-10 18:50:20 -08:00
Rohit Agarwal 9c38abd482 Expose accelerator metrics in the summary API. 2017-11-10 14:59:43 -08:00
Yang Guo ed8cd396dd Use whitelisted test image 2017-11-10 14:16:27 -08:00
Chao Xu 7006d224be add NamespaceSelector to the api
business logic in webhook plugin and unit test

add a e2e test for namespace selector
2017-11-10 13:40:16 -08:00
Matt Liggett 3483447ebc Refer to instructions when the test fails. 2017-11-10 11:04:55 -08:00
Matt Liggett 13f3844ef5 Add README.md to test/conformance. 2017-11-10 11:02:11 -08:00
Matt Liggett 97e669abdf Disallow parent approvals. 2017-11-10 10:54:23 -08:00
Lantao Liu 32c4295bcf Support collecting log for alternative container runtime in e2e test. 2017-11-10 18:46:48 +00:00
Yang Guo 8ea9417a37 Adjust GKE spec to validate images with kernel version 4.10+ 2017-11-10 09:47:08 -08:00
Ryan Phillips 66965daf56 bump base images to debian stretch 2017-11-10 09:54:10 -06:00
Kubernetes Submit Queue ae2edc439e
Merge pull request #55413 from liggitt/internal-autoscaling
Automatic merge from submit-queue (batch tested with PRs 53047, 54861, 55413, 55395, 55308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Switch internal scale type to autoscaling, enable apps/v1 scale subresources

xref #49504

* Switch workload internal scale type to autoscaling.Scale (internal-only change)
* Enable scale subresources for apps/v1 deployments, replicasets, statefulsets

```release-note
NONE
```
2017-11-10 07:00:44 -08:00
Karol Wychowaniec 770dacde45 Use HPA permissions to read custom metrics in Custom Metrics e2e test 2017-11-10 15:59:46 +01:00
Beata Skiba def49db058 Support multizone clusters in GCE and GKE e2e tests 2017-11-10 15:29:15 +01:00
Kubernetes Submit Queue 7c04a684ae
Merge pull request #55412 from loburm/fix-infludb-e2e
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix influxdb e2e test failure.

In scalability testing, influxdb was recently disabled, but we still
try to execute the corresponding test; as a result it fails all the time.
Skip the test if influxdb is disabled.

Fixes #54636 

```release-note
NONE
```
2017-11-10 04:32:21 -08:00
Kubernetes Submit Queue c0e111a21c
Merge pull request #55394 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler

This PR adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler:
 - shouldn't scale up when expendable pod is created
 - should scale up when non expendable pod is created
 - shouldn't scale up when expendable pod is preempted
 - should scale down when expendable pod is running
 - shouldn't scale down when non expendable pod is running
2017-11-10 04:32:18 -08:00
Penghao Cen 22b04c828b Append --feature-gates option iff TestContext.FeatureGates is not nil 2017-11-10 19:42:22 +08:00
Kubernetes Submit Queue 1d5dff0e05
Merge pull request #55426 from shyamjvs/disable-service-e2e-for-large-cluster
Automatic merge from submit-queue (batch tested with PRs 46581, 55426, 54849). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Disable service e2e test related to LB for huge clusters

Based on https://github.com/kubernetes/kubernetes/issues/52495#issuecomment-343263564

/cc @MrHohn
2017-11-10 03:30:18 -08:00
Kubernetes Submit Queue c7644dd104
Merge pull request #46581 from m1093782566/fix-net-perf
Automatic merge from submit-queue (batch tested with PRs 46581, 55426, 54849). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

fix newline in raw string in e2e net perf case

**Which issue this PR fixes** 

fixes #46083
2017-11-10 03:30:15 -08:00
Kubernetes Submit Queue 0a33cec59a
Merge pull request #54092 from vmware/volume_perf_test
Automatic merge from submit-queue (batch tested with PRs 55265, 54092, 55353, 53733, 55385). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E Performance test to print latency numbers for vsphere volume lifecycle operations

**What this PR does / why we need it**:
This PR introduces a test that prints latency numbers for volume lifecycle operations.
The operations that are evaluated are:
1. Create n number of PVCs
2. Create pods with these PVCs and ensure pods are in ready state
3. Delete pods
4. Delete the PVCs

**Which issue this PR fixes** : fixes vmware#292

**Special notes for your reviewer**:

1. This PR has some duplicate code changes from existing open PRs that add e2e tests. If those PRs are merged first, I'll rebase this PR to avoid redundant changes.
2. Following are the test logs with 12 total volumes, 4 volumes per pod, and 3 test iterations.

<details>

Test logs:
```
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build041717622/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:11:29 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:11:29 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:11:29 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/16 15:11:29 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/16 15:11:29 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:11:29 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:11:29 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 280.313212ms
2017/10/16 15:11:29 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:11:30 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 156.135002ms
2017/10/16 15:11:30 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:11:30.867: INFO: Overriding default scale value of zero to 1
Oct 16 15:11:30.867: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:11:30.981146    6068 e2e.go:383] Starting e2e run "f687717b-b2be-11e7-b207-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508191890 - Will randomize all specs
Will run 1 of 706 specs

Oct 16 15:11:31.007: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:11:31.018: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:11:31.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:11:31.155: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:11:31.155: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:11:31.163: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:11:31.163: INFO: Dumping network health container logs from all nodes...
Oct 16 15:11:31.177: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:11:31.181: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
  vcp performance tests
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:11:31.183: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:11:31.708: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5rrtp to have phase Bound
Oct 16 15:11:31.718: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:33.730: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:35.737: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:37.747: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:39.753: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:41.763: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:43.774: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:45.814: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:47.839: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:49.852: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:51.869: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:53.877: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:55.888: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:57.896: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:59.904: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:01.916: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:03.941: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:05.947: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:07.957: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:09.985: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:12.002: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:14.009: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:16.017: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:18.026: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:20.034: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:22.096: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:24.116: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:26.124: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:28.134: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:30.147: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:32.153: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:34.162: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:36.177: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:38.185: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:40.193: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:42.203: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:44.210: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:46.217: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:48.227: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:50.236: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:52.242: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:54.258: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:56.268: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:58.290: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:00.304: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:02.321: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:04.330: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:06.338: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:08.345: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:10.351: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:12.367: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:14.384: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:16.394: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:18.410: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:20.421: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:22.430: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:24.439: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:26.448: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:28.465: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:30.473: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:32.482: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:34.490: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:36.500: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:38.510: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:40.517: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:42.527: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
^C2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 2m13.976765704s
2017/10/16 15:13:43 main.go:260: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance: signal: killed]
2017/10/16 15:13:43 e2e.go:79: err: exit status 1
exit status 1
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$ make
+++ [1016 15:14:25] Building the toolchain targets:
    k8s.io/kubernetes/hack/cmd/teststale
    k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [1016 15:14:25] Generating bindata:
    test/e2e/generated/gobindata_util.go
~/k8s/kubernetes_2 ~/k8s/kubernetes_2/test/e2e/generated
~/k8s/kubernetes_2/test/e2e/generated
+++ [1016 15:14:26] Building go targets for darwin/amd64:
    cmd/kube-proxy
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/cloud-controller-manager
    cmd/kubelet
    cmd/kubeadm
    cmd/hyperkube
    vendor/k8s.io/kube-aggregator
    vendor/k8s.io/apiextensions-apiserver
    plugin/cmd/kube-scheduler
    cmd/kubectl
    federation/cmd/kubefed
    cmd/gendocs
    cmd/genkubedocs
    cmd/genman
    cmd/genyaml
    cmd/genswaggertypedocs
    cmd/linkcheck
    federation/cmd/genfeddocs
    vendor/github.com/onsi/ginkgo/ginkgo
    test/e2e/e2e.test
    cmd/kubemark
    vendor/github.com/onsi/ginkgo/ginkgo
    cmd/gke-certificates-controller
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build763038738/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:16:03 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:16:03 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:16:03 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/16 15:16:03 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/16 15:16:03 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:16:03 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:16:03 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 163.149145ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:16:03 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 168.158343ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:16:04.325: INFO: Overriding default scale value of zero to 1
Oct 16 15:16:04.325: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:16:04.425919    8714 e2e.go:383] Starting e2e run "9984ec93-b2bf-11e7-810d-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508192163 - Will randomize all specs
Will run 1 of 706 specs

Oct 16 15:16:04.443: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:16:04.453: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:16:04.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:16:04.598: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:16:04.598: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:16:04.607: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:16:04.607: INFO: Dumping network health container logs from all nodes...
Oct 16 15:16:04.626: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:16:04.631: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
  vcp performance tests
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:16:04.632: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:16:05.313: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-l9tg4 to have phase Bound
Oct 16 15:16:05.359: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:07.381: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:09.389: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:11.404: INFO: PersistentVolumeClaim pvc-l9tg4 found and phase=Bound (6.090428509s)
Oct 16 15:16:11.462: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-j9m85 to have phase Bound
Oct 16 15:16:11.476: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:13.489: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:15.502: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:17.509: INFO: PersistentVolumeClaim pvc-j9m85 found and phase=Bound (6.046381507s)
Oct 16 15:16:17.543: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mc77p to have phase Bound
Oct 16 15:16:17.558: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:19.592: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:21.598: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:23.609: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:25.618: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:27.655: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:29.699: INFO: PersistentVolumeClaim pvc-mc77p found and phase=Bound (12.155659079s)
Oct 16 15:16:29.801: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2j86v to have phase Bound
Oct 16 15:16:29.815: INFO: PersistentVolumeClaim pvc-2j86v found and phase=Bound (14.767532ms)
Oct 16 15:16:29.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-q7rsq to have phase Bound
Oct 16 15:16:29.882: INFO: PersistentVolumeClaim pvc-q7rsq found but phase is Pending instead of Bound.
Oct 16 15:16:31.896: INFO: PersistentVolumeClaim pvc-q7rsq found and phase=Bound (2.048751822s)
Oct 16 15:16:31.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qsh8l to have phase Bound
Oct 16 15:16:31.943: INFO: PersistentVolumeClaim pvc-qsh8l found and phase=Bound (14.944175ms)
Oct 16 15:16:31.975: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-52pcj to have phase Bound
Oct 16 15:16:31.993: INFO: PersistentVolumeClaim pvc-52pcj found and phase=Bound (17.704673ms)
Oct 16 15:16:32.021: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-v5x89 to have phase Bound
Oct 16 15:16:32.043: INFO: PersistentVolumeClaim pvc-v5x89 found and phase=Bound (21.44398ms)
Oct 16 15:16:32.073: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-f9pnm to have phase Bound
Oct 16 15:16:32.096: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:34.163: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:36.174: INFO: PersistentVolumeClaim pvc-f9pnm found and phase=Bound (4.100911147s)
Oct 16 15:16:36.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-m5fqt to have phase Bound
Oct 16 15:16:36.239: INFO: PersistentVolumeClaim pvc-m5fqt found and phase=Bound (14.819033ms)
Oct 16 15:16:36.284: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mbsvx to have phase Bound
Oct 16 15:16:36.302: INFO: PersistentVolumeClaim pvc-mbsvx found and phase=Bound (18.02845ms)
Oct 16 15:16:36.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-s4sr2 to have phase Bound
Oct 16 15:16:36.352: INFO: PersistentVolumeClaim pvc-s4sr2 found and phase=Bound (17.921955ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:17:57.069: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:57.397: INFO: stderr: ""
Oct 16 15:17:57.397: INFO: stdout: ""
Oct 16 15:17:57.527: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:57.836: INFO: stderr: ""
Oct 16 15:17:57.836: INFO: stdout: ""
Oct 16 15:17:57.981: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:58.290: INFO: stderr: ""
Oct 16 15:17:58.290: INFO: stdout: ""
Oct 16 15:17:58.421: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:58.755: INFO: stderr: ""
Oct 16 15:17:58.755: INFO: stdout: ""
Oct 16 15:17:58.884: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:59.188: INFO: stderr: ""
Oct 16 15:17:59.188: INFO: stdout: ""
Oct 16 15:17:59.287: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:59.602: INFO: stderr: ""
Oct 16 15:17:59.602: INFO: stdout: ""
Oct 16 15:17:59.721: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:00.101: INFO: stderr: ""
Oct 16 15:18:00.101: INFO: stdout: ""
Oct 16 15:18:00.265: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:00.611: INFO: stderr: ""
Oct 16 15:18:00.611: INFO: stdout: ""
Oct 16 15:18:00.720: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:01.092: INFO: stderr: ""
Oct 16 15:18:01.092: INFO: stdout: ""
Oct 16 15:18:01.212: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:01.589: INFO: stderr: ""
Oct 16 15:18:01.589: INFO: stdout: ""
Oct 16 15:18:01.694: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:02.023: INFO: stderr: ""
Oct 16 15:18:02.023: INFO: stdout: ""
Oct 16 15:18:02.502: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:02.805: INFO: stderr: ""
Oct 16 15:18:02.805: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:18:02.807: INFO: Deleting pod "pvc-tester-hrfpv" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:02.842: INFO: Wait up to 5m0s for pod "pvc-tester-hrfpv" to be fully deleted
Oct 16 15:18:42.875: INFO: Deleting pod "pvc-tester-vkgvj" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:42.913: INFO: Wait up to 5m0s for pod "pvc-tester-vkgvj" to be fully deleted
Oct 16 15:19:24.937: INFO: Deleting pod "pvc-tester-wvnrg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:24.971: INFO: Wait up to 5m0s for pod "pvc-tester-wvnrg" to be fully deleted
Oct 16 15:19:56.990: INFO: Deleting pod "pvc-tester-vdb6s" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:57.025: INFO: Wait up to 5m0s for pod "pvc-tester-vdb6s" to be fully deleted
Oct 16 15:20:41.866: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a1d277f-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a21e539-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a287a26-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99f9f244-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fe7a20-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fff232-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a033865-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0813e3-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0a963e-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0f575d-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a12e997-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a17cfa2-b2bf-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:20:41.872: INFO: Deleting PersistentVolumeClaim "pvc-l9tg4"
Oct 16 15:20:41.919: INFO: Deleting PersistentVolumeClaim "pvc-j9m85"
Oct 16 15:20:41.975: INFO: Deleting PersistentVolumeClaim "pvc-mc77p"
Oct 16 15:20:42.027: INFO: Deleting PersistentVolumeClaim "pvc-2j86v"
Oct 16 15:20:42.082: INFO: Deleting PersistentVolumeClaim "pvc-q7rsq"
Oct 16 15:20:42.147: INFO: Deleting PersistentVolumeClaim "pvc-qsh8l"
Oct 16 15:20:42.224: INFO: Deleting PersistentVolumeClaim "pvc-52pcj"
Oct 16 15:20:42.259: INFO: Deleting PersistentVolumeClaim "pvc-v5x89"
Oct 16 15:20:42.316: INFO: Deleting PersistentVolumeClaim "pvc-f9pnm"
Oct 16 15:20:42.369: INFO: Deleting PersistentVolumeClaim "pvc-m5fqt"
Oct 16 15:20:42.409: INFO: Deleting PersistentVolumeClaim "pvc-mbsvx"
Oct 16 15:20:42.448: INFO: Deleting PersistentVolumeClaim "pvc-s4sr2"
STEP: Creating 12 PVCs
Oct 16 15:20:42.807: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85px8 to have phase Bound
Oct 16 15:20:42.832: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:44.845: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:46.943: INFO: PersistentVolumeClaim pvc-85px8 found and phase=Bound (4.13527333s)
Oct 16 15:20:47.032: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-npbn8 to have phase Bound
Oct 16 15:20:47.048: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:49.086: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:51.097: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:53.108: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:55.128: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:57.148: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:59.160: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:01.172: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:03.185: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:05.194: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:07.223: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:09.239: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:11.261: INFO: PersistentVolumeClaim pvc-npbn8 found and phase=Bound (24.228554172s)
Oct 16 15:21:11.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ts6b8 to have phase Bound
Oct 16 15:21:11.298: INFO: PersistentVolumeClaim pvc-ts6b8 found and phase=Bound (12.795195ms)
Oct 16 15:21:11.325: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-hqb5d to have phase Bound
Oct 16 15:21:11.336: INFO: PersistentVolumeClaim pvc-hqb5d found and phase=Bound (11.085933ms)
Oct 16 15:21:11.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pzlmw to have phase Bound
Oct 16 15:21:11.374: INFO: PersistentVolumeClaim pvc-pzlmw found and phase=Bound (14.757981ms)
Oct 16 15:21:11.400: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4mljw to have phase Bound
Oct 16 15:21:11.426: INFO: PersistentVolumeClaim pvc-4mljw found and phase=Bound (25.6641ms)
Oct 16 15:21:11.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mz5br to have phase Bound
Oct 16 15:21:11.462: INFO: PersistentVolumeClaim pvc-mz5br found and phase=Bound (11.515099ms)
Oct 16 15:21:11.492: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-7fk8x to have phase Bound
Oct 16 15:21:11.505: INFO: PersistentVolumeClaim pvc-7fk8x found and phase=Bound (13.387584ms)
Oct 16 15:21:11.530: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-cb2dp to have phase Bound
Oct 16 15:21:11.550: INFO: PersistentVolumeClaim pvc-cb2dp found and phase=Bound (19.152805ms)
Oct 16 15:21:11.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85sqf to have phase Bound
Oct 16 15:21:11.599: INFO: PersistentVolumeClaim pvc-85sqf found and phase=Bound (14.406407ms)
Oct 16 15:21:11.632: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8zdmg to have phase Bound
Oct 16 15:21:11.651: INFO: PersistentVolumeClaim pvc-8zdmg found and phase=Bound (18.063182ms)
Oct 16 15:21:11.683: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-nntqr to have phase Bound
Oct 16 15:21:11.694: INFO: PersistentVolumeClaim pvc-nntqr found and phase=Bound (10.97945ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:23:16.187: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:16.646: INFO: stderr: ""
Oct 16 15:23:16.646: INFO: stdout: ""
Oct 16 15:23:16.755: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:17.090: INFO: stderr: ""
Oct 16 15:23:17.090: INFO: stdout: ""
Oct 16 15:23:17.184: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:17.509: INFO: stderr: ""
Oct 16 15:23:17.510: INFO: stdout: ""
Oct 16 15:23:17.606: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:17.910: INFO: stderr: ""
Oct 16 15:23:17.910: INFO: stdout: ""
Oct 16 15:23:18.007: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:18.324: INFO: stderr: ""
Oct 16 15:23:18.324: INFO: stdout: ""
Oct 16 15:23:18.417: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:18.718: INFO: stderr: ""
Oct 16 15:23:18.719: INFO: stdout: ""
Oct 16 15:23:18.818: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:19.137: INFO: stderr: ""
Oct 16 15:23:19.137: INFO: stdout: ""
Oct 16 15:23:19.244: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:19.556: INFO: stderr: ""
Oct 16 15:23:19.556: INFO: stdout: ""
Oct 16 15:23:19.638: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:19.961: INFO: stderr: ""
Oct 16 15:23:19.961: INFO: stdout: ""
Oct 16 15:23:20.060: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:20.365: INFO: stderr: ""
Oct 16 15:23:20.365: INFO: stdout: ""
Oct 16 15:23:20.464: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:20.837: INFO: stderr: ""
Oct 16 15:23:20.838: INFO: stdout: ""
Oct 16 15:23:20.948: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:21.258: INFO: stderr: ""
Oct 16 15:23:21.258: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:23:21.258: INFO: Deleting pod "pvc-tester-dpsht" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:23:21.299: INFO: Wait up to 5m0s for pod "pvc-tester-dpsht" to be fully deleted
Oct 16 15:24:03.361: INFO: Deleting pod "pvc-tester-kt8wp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:03.397: INFO: Wait up to 5m0s for pod "pvc-tester-kt8wp" to be fully deleted
Oct 16 15:24:45.415: INFO: Deleting pod "pvc-tester-lckz2" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:45.452: INFO: Wait up to 5m0s for pod "pvc-tester-lckz2" to be fully deleted
Oct 16 15:25:23.476: INFO: Deleting pod "pvc-tester-vrjxc" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:25:23.510: INFO: Wait up to 5m0s for pod "pvc-tester-vrjxc" to be fully deleted
Oct 16 15:26:07.784: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f7e96b8-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f825cec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8627c5-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f89ca32-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8cd95e-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f900995-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6a76ec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6d2d17-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6f2a1a-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f72bfae-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f760aab-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f791671-b2c0-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:26:07.784: INFO: Deleting PersistentVolumeClaim "pvc-85px8"
Oct 16 15:26:07.854: INFO: Deleting PersistentVolumeClaim "pvc-npbn8"
Oct 16 15:26:07.900: INFO: Deleting PersistentVolumeClaim "pvc-ts6b8"
Oct 16 15:26:07.954: INFO: Deleting PersistentVolumeClaim "pvc-hqb5d"
Oct 16 15:26:08.003: INFO: Deleting PersistentVolumeClaim "pvc-pzlmw"
Oct 16 15:26:08.044: INFO: Deleting PersistentVolumeClaim "pvc-4mljw"
Oct 16 15:26:08.090: INFO: Deleting PersistentVolumeClaim "pvc-mz5br"
Oct 16 15:26:08.130: INFO: Deleting PersistentVolumeClaim "pvc-7fk8x"
Oct 16 15:26:08.183: INFO: Deleting PersistentVolumeClaim "pvc-cb2dp"
Oct 16 15:26:08.230: INFO: Deleting PersistentVolumeClaim "pvc-85sqf"
Oct 16 15:26:08.282: INFO: Deleting PersistentVolumeClaim "pvc-8zdmg"
Oct 16 15:26:08.337: INFO: Deleting PersistentVolumeClaim "pvc-nntqr"
STEP: Creating 12 PVCs
Oct 16 15:26:08.691: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwmql to have phase Bound
Oct 16 15:26:08.716: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:10.732: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:12.754: INFO: PersistentVolumeClaim pvc-jwmql found and phase=Bound (4.062803231s)
Oct 16 15:26:12.789: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jhrg7 to have phase Bound
Oct 16 15:26:12.801: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:14.817: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:16.834: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:18.854: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:20.871: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:22.888: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:24.901: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:26.918: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:28.929: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:30.941: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:32.958: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:34.976: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:37.013: INFO: PersistentVolumeClaim pvc-jhrg7 found and phase=Bound (24.222741938s)
Oct 16 15:26:37.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-lvvkl to have phase Bound
Oct 16 15:26:37.055: INFO: PersistentVolumeClaim pvc-lvvkl found and phase=Bound (12.935683ms)
Oct 16 15:26:37.078: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-bgkkc to have phase Bound
Oct 16 15:26:37.088: INFO: PersistentVolumeClaim pvc-bgkkc found and phase=Bound (9.861689ms)
Oct 16 15:26:37.109: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qt2lv to have phase Bound
Oct 16 15:26:37.126: INFO: PersistentVolumeClaim pvc-qt2lv found and phase=Bound (17.393667ms)
Oct 16 15:26:37.147: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pgs9s to have phase Bound
Oct 16 15:26:37.158: INFO: PersistentVolumeClaim pvc-pgs9s found but phase is Pending instead of Bound.
Oct 16 15:26:39.171: INFO: PersistentVolumeClaim pvc-pgs9s found and phase=Bound (2.023756794s)
Oct 16 15:26:39.217: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8h942 to have phase Bound
Oct 16 15:26:39.249: INFO: PersistentVolumeClaim pvc-8h942 found and phase=Bound (32.347782ms)
Oct 16 15:26:39.282: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-phtvg to have phase Bound
Oct 16 15:26:39.296: INFO: PersistentVolumeClaim pvc-phtvg found and phase=Bound (13.940285ms)
Oct 16 15:26:39.321: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ldv2f to have phase Bound
Oct 16 15:26:39.333: INFO: PersistentVolumeClaim pvc-ldv2f found and phase=Bound (11.888903ms)
Oct 16 15:26:39.360: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4v9hf to have phase Bound
Oct 16 15:26:39.375: INFO: PersistentVolumeClaim pvc-4v9hf found and phase=Bound (14.230796ms)
Oct 16 15:26:39.403: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jkfg5 to have phase Bound
Oct 16 15:26:39.419: INFO: PersistentVolumeClaim pvc-jkfg5 found and phase=Bound (15.47811ms)
Oct 16 15:26:39.449: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-87dwp to have phase Bound
Oct 16 15:26:39.463: INFO: PersistentVolumeClaim pvc-87dwp found and phase=Bound (13.680898ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:28:08.033: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:08.507: INFO: stderr: ""
Oct 16 15:28:08.507: INFO: stdout: ""
Oct 16 15:28:08.609: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:08.917: INFO: stderr: ""
Oct 16 15:28:08.917: INFO: stdout: ""
Oct 16 15:28:09.019: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:09.342: INFO: stderr: ""
Oct 16 15:28:09.342: INFO: stdout: ""
Oct 16 15:28:09.432: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:09.760: INFO: stderr: ""
Oct 16 15:28:09.760: INFO: stdout: ""
Oct 16 15:28:09.847: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:10.164: INFO: stderr: ""
Oct 16 15:28:10.164: INFO: stdout: ""
Oct 16 15:28:10.259: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:10.576: INFO: stderr: ""
Oct 16 15:28:10.576: INFO: stdout: ""
Oct 16 15:28:10.681: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:11.000: INFO: stderr: ""
Oct 16 15:28:11.000: INFO: stdout: ""
Oct 16 15:28:11.086: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:11.383: INFO: stderr: ""
Oct 16 15:28:11.383: INFO: stdout: ""
Oct 16 15:28:11.486: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:11.782: INFO: stderr: ""
Oct 16 15:28:11.782: INFO: stdout: ""
Oct 16 15:28:11.888: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:12.207: INFO: stderr: ""
Oct 16 15:28:12.207: INFO: stdout: ""
Oct 16 15:28:12.315: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:12.634: INFO: stderr: ""
Oct 16 15:28:12.634: INFO: stdout: ""
Oct 16 15:28:12.778: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:13.113: INFO: stderr: ""
Oct 16 15:28:13.113: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:28:13.113: INFO: Deleting pod "pvc-tester-n68rp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:13.157: INFO: Wait up to 5m0s for pod "pvc-tester-n68rp" to be fully deleted
Oct 16 15:28:53.195: INFO: Deleting pod "pvc-tester-qm7w8" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:53.224: INFO: Wait up to 5m0s for pod "pvc-tester-qm7w8" to be fully deleted
Oct 16 15:29:35.246: INFO: Deleting pod "pvc-tester-jslwg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:29:35.279: INFO: Wait up to 5m0s for pod "pvc-tester-jslwg" to be fully deleted
Oct 16 15:30:07.312: INFO: Deleting pod "pvc-tester-mcqqq" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:30:07.357: INFO: Wait up to 5m0s for pod "pvc-tester-mcqqq" to be fully deleted
Oct 16 15:31:03.595: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01aaa147-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ae1953-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b03dec-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b2ea3b-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b76412-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b8de3d-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01bd6a83-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c1b249-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c53dd9-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c941ba-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01caec5e-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ce2be9-b2c1-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:31:03.595: INFO: Deleting PersistentVolumeClaim "pvc-jwmql"
Oct 16 15:31:03.641: INFO: Deleting PersistentVolumeClaim "pvc-jhrg7"
Oct 16 15:31:03.681: INFO: Deleting PersistentVolumeClaim "pvc-lvvkl"
Oct 16 15:31:03.724: INFO: Deleting PersistentVolumeClaim "pvc-bgkkc"
Oct 16 15:31:03.771: INFO: Deleting PersistentVolumeClaim "pvc-qt2lv"
Oct 16 15:31:03.833: INFO: Deleting PersistentVolumeClaim "pvc-pgs9s"
Oct 16 15:31:03.887: INFO: Deleting PersistentVolumeClaim "pvc-8h942"
Oct 16 15:31:04.047: INFO: Deleting PersistentVolumeClaim "pvc-phtvg"
Oct 16 15:31:04.089: INFO: Deleting PersistentVolumeClaim "pvc-ldv2f"
Oct 16 15:31:04.153: INFO: Deleting PersistentVolumeClaim "pvc-4v9hf"
Oct 16 15:31:04.211: INFO: Deleting PersistentVolumeClaim "pvc-jkfg5"
Oct 16 15:31:04.263: INFO: Deleting PersistentVolumeClaim "pvc-87dwp"
Oct 16 15:31:04.317: INFO: Average latency for below operations
Oct 16 15:31:04.317: INFO: Creating 12 PVCs and waiting for bound phase: 30576919 microseconds
Oct 16 15:31:04.317: INFO: Creating 4 Pod: 97668230 microseconds
Oct 16 15:31:04.317: INFO: Deleting 4 Pod and waiting for disk to be detached: 154930158 microseconds
Oct 16 15:31:04.317: INFO: Deleting 12 PVCs: 660074 microseconds
[AfterEach] [sig-storage] vcp-performance
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:31:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vcp-performance-lfrbk" for this suite.
Oct 16 15:31:19.156: INFO: namespace: e2e-tests-vcp-performance-lfrbk, resource: bindings, ignored listing per whitelist
Oct 16 15:31:19.297: INFO: namespace e2e-tests-vcp-performance-lfrbk deletion completed in 14.690943637s

• [SLOW TEST:914.654 seconds]
[sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  vcp performance tests
  /Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:31:19.305: INFO: Running AfterSuite actions on all node
Oct 16 15:31:19.305: INFO: Running AfterSuite actions on node 1

Ran 1 of 706 Specs in 914.851 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS

Ginkgo ran 1 suite in 15m15.380170791s
Test Suite Passed
2017/10/16 15:31:19 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 15m15.901302911s
2017/10/16 15:31:19 e2e.go:81: Done
```
</details>

```
None
```
2017-11-10 01:30:21 -08:00
Krzysztof Jastrzebski 20e5b896e9 Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler:
 - shouldn't scale up when expendable pod is created
 - should scale up when non expendable pod is created
 - shouldn't scale up when expendable pod is preempted
 - should scale down when expendable pod is running
 - shouldn't scale down when non expendable pod is running
2017-11-10 10:07:18 +01:00
Marian Lobur ba313796f1 Fix influxdb e2e test failure.
In scalability testing influxdb was recently disabled, but we are still
trying to execute the corresponding test, so it fails all the time.
Skip the test if influxdb is disabled.
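
A minimal sketch of that skip logic, written against the plain `testing` package rather than the Ginkgo framework the suite actually uses; the `ENABLE_CLUSTER_MONITORING` check is an assumption standing in for however the suite detects whether the influxdb addon is enabled.

```go
package e2e

import (
	"os"
	"testing"
)

// TestInfluxdbMonitoring is a hypothetical sketch (place it in a *_test.go
// file): detect whether the influxdb monitoring addon is enabled and skip,
// rather than fail, when it is not. The environment variable name is an
// assumption made for illustration.
func TestInfluxdbMonitoring(t *testing.T) {
	if os.Getenv("ENABLE_CLUSTER_MONITORING") != "influxdb" {
		t.Skip("influxdb monitoring addon is disabled; skipping test")
	}
	// ...query influxdb for node/cluster metrics here...
}
```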
2017-11-10 09:16:45 +01:00
chenxingyu 954c97fe6d add e2e test on the hostport predicates 2017-11-10 15:44:23 +08:00
Michelle Au 36d16e0dbd Add sig storage label to multizone static PV test 2017-11-09 16:20:03 -08:00
mbohlool fc5a613c17 Add MutatingWebhookConfiguration type 2017-11-09 14:00:14 -08:00
Shyam Jeedigunta 913721ebee Disable service e2e on type and port change for huge clusters 2017-11-09 20:42:13 +01:00
mbohlool 9ddea83a2c Rename ExternalAdmissionHookConfiguration to ValidatingWebhookConfiguration 2017-11-09 11:39:50 -08:00
Shahzeb US1079625 8c77b6c931 E2E Performance test to print latency numbers for vsphere volume lifecycle operations 2017-11-09 11:29:12 -08:00
Jordan Liggitt f9e2e406ba
Enable scale subresources for apps/v1 2017-11-09 13:42:15 -05:00
Marcin Owsiany 96089e0b79 Add performance test phase timing export. 2017-11-09 17:12:15 +01:00
Dr. Stefan Schimanski bec617f3cc Update generated files 2017-11-09 12:14:08 +01:00
Dr. Stefan Schimanski 012b085ac8 pkg/apis/core: mechanical import fixes in dependencies 2017-11-09 12:14:08 +01:00
Marcin Owsiany 115bd09ff9 Fix typo and progress messages. 2017-11-09 07:54:25 +01:00
Kubernetes Submit Queue 3e315aa0f8
Merge pull request #49429 from enisoc/dedup-rc-rs
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Deduplicate RC/RS controller code.

The code was already 99% similar between RC and RS. This is a wild idea to try to deduplicate the two controllers in a type-safe manner without adding tons of boilerplate, and without using code generation.

They are still separate resources and separate worker pools. This is a refactor that isn't intended to change any behavior.
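
A minimal, hypothetical Go sketch of the general pattern (not the actual controller code): one sync loop parameterized over a small adapter interface, so two resource types can share a single implementation without code generation. All names here (`workloadAdapter`, `fakeRS`) are invented for illustration.

```go
package main

import "fmt"

// workloadAdapter abstracts the small differences between two workload
// types so a single sync loop can serve both without code generation.
type workloadAdapter interface {
	Kind() string
	Desired(key string) int32
	Actual(key string) int32
	CreatePods(key string, n int32)
	DeletePods(key string, n int32)
}

// syncWorkload is the shared reconciliation step; each controller would
// drive it from its own worker pool with its own adapter.
func syncWorkload(a workloadAdapter, key string) {
	want, have := a.Desired(key), a.Actual(key)
	switch {
	case have < want:
		a.CreatePods(key, want-have)
	case have > want:
		a.DeletePods(key, have-want)
	default:
		fmt.Printf("%s %q already at desired scale %d\n", a.Kind(), key, want)
	}
}

// fakeRS is a stand-in adapter used only to make the sketch runnable.
type fakeRS struct{ desired, actual int32 }

func (f *fakeRS) Kind() string                   { return "ReplicaSet" }
func (f *fakeRS) Desired(key string) int32       { return f.desired }
func (f *fakeRS) Actual(key string) int32        { return f.actual }
func (f *fakeRS) CreatePods(key string, n int32) { fmt.Printf("creating %d pods for %s\n", n, key) }
func (f *fakeRS) DeletePods(key string, n int32) { fmt.Printf("deleting %d pods for %s\n", n, key) }

func main() {
	syncWorkload(&fakeRS{desired: 3, actual: 1}, "default/frontend")
}
```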

```release-note
ReplicationController now shares its underlying controller implementation with ReplicaSet to reduce the maintenance burden going forward. However, they are still separate resources and there should be no externally visible effects from this change.
```

ref #49429
2017-11-08 22:12:03 -08:00
Kubernetes Submit Queue f7dc3966a4
Merge pull request #47497 from mikedanese/binary
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

don't check in mounter binary

```release-note
GCI mounter is moved from the manifests tarball to the server tarball.
```
2017-11-08 22:11:53 -08:00
Kubernetes Submit Queue b616dff2e6
Merge pull request #52523 from NickrenREN/ephemeral-storage-e2e
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add ephemeral storage e2e tests

Add e2e tests of limitrange/quota/downward_api for local ephemeral storage

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463

**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
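
For illustration, a hedged sketch of the kind of objects such tests exercise: a LimitRange and a ResourceQuota constraining local ephemeral storage. This is not the test code itself; the object names and sizes are made up.

```go
package main

import (
	"fmt"

	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// LimitRange capping per-container local ephemeral storage.
	lr := &v1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "ephemeral-storage-limits"},
		Spec: v1.LimitRangeSpec{
			Limits: []v1.LimitRangeItem{{
				Type:    v1.LimitTypeContainer,
				Default: v1.ResourceList{v1.ResourceEphemeralStorage: resource.MustParse("500Mi")},
				Max:     v1.ResourceList{v1.ResourceEphemeralStorage: resource.MustParse("1Gi")},
			}},
		},
	}
	// ResourceQuota capping the namespace's total requested ephemeral storage.
	rq := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "ephemeral-storage-quota"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{v1.ResourceRequestsEphemeralStorage: resource.MustParse("2Gi")},
		},
	}
	fmt.Println(lr.Name, rq.Name)
}
```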

**Release note**:
```release-note
Add limitrange/resourcequota/downward_api  e2e tests for local ephemeral storage
```

/assign @jingxu97
2017-11-08 22:11:49 -08:00
Kubernetes Submit Queue a8b355cf92
Merge pull request #55352 from shyamjvs/remove-cluster-ip-range-hack
Automatic merge from submit-queue (batch tested with PRs 55092, 55348, 55095, 55277, 55352). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove hack for CLUSTER_IP_RANGE in e2e framework no longer needed

As discussed in https://github.com/kubernetes/test-infra/pull/5386#discussion_r149523537, we no longer need it as we're using the flag to pass the value.

/cc @krzyzacy
2017-11-08 21:18:30 -08:00
Kubernetes Submit Queue 1c9d6f53af
Merge pull request #54018 from vmware/vSphereScaleTest
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E scale test for vSphere Cloud Provider Volume lifecycle operations

This PR adds an E2E test for the vSphere Cloud Provider which will create/attach/detach/delete volumes at scale with multiple threads, based on user-configurable values for the number of volumes, volumes per pod, and number of threads. (Since this is a scale test, the number of threads is kept low; it is only used to speed up the operation.)

Test performs following tasks.

1. Create Storage Classes of 4 categories (Default, SC with a Non-Default Datastore, SC with an SPBM Policy, SC with VSAN Storage Capabilities).
2. Read VCP_SCALE_VOLUME_COUNT from the system environment.
3. Launch VCP_SCALE_INSTANCES goroutines to create VCP_SCALE_VOLUME_COUNT volumes. Each goroutine is responsible for creating/attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes (see the sketch after this list).
4. Read VCP_SCALE_VOLUMES_PER_POD from the system environment. Each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the goroutines are completed, we delete all the pods and volumes.
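
A minimal sketch of the fan-out described in steps 2–3, assuming the environment variables hold plain integers. It is not the actual test code: it only prints the division of work instead of creating and attaching volumes.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"
)

// envInt reads an integer from the environment, falling back to a default
// when the variable is unset or not a number.
func envInt(name string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(name)); err == nil {
		return v
	}
	return def
}

func main() {
	volumeCount := envInt("VCP_SCALE_VOLUME_COUNT", 12) // total volumes to create
	instances := envInt("VCP_SCALE_INSTANCES", 4)       // number of worker goroutines
	perRoutine := volumeCount / instances               // each goroutine's share

	var wg sync.WaitGroup
	for i := 0; i < instances; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for j := 0; j < perRoutine; j++ {
				// The real test creates and attaches a volume here; the
				// sketch only shows how the work is divided.
				fmt.Printf("worker %d: volume %d of %d\n", worker, j+1, perRoutine)
			}
		}(i)
	}
	wg.Wait()
}
```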

Which issue this PR fixes
fixes # vmware#291

```release-note
None
```
2017-11-08 20:23:28 -08:00
Kubernetes Submit Queue 6257541461
Merge pull request #55319 from shyamjvs/add-shyamjvs-to-test-owners
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add shyamjvs to test/OWNERS

I've been reviewing quite a few PRs recently and have reviewed many in the past. I also have >80 commits in this code path (git log test | grep "shyamjvs@google.com") touching various parts including e2e/framework, utils, perftype, kubemark, and e2e fixes from other SIGs (mostly in regard to scalability).

/cc @gmarek @spiffxp @krzyzacy @kubernetes/sig-testing-misc
2017-11-08 20:23:24 -08:00
Kubernetes Submit Queue 8fee4d1d9b
Merge pull request #53747 from vmware/bulkverify_test
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adding e2e test to verify volume attach status after master kubelet restart

**What this PR does / why we need it**:
This PR adds a test to verify that a volume remains attached after the kubelet is restarted on the master node.

**Which issue this PR fixes** : 
fixes vmware#274

**Special notes for your reviewer**:
This test does not run as part of the existing sig-storage test grid. It has been tested internally at VMware.
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022   29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs

Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
  verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command:   curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout:    ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr:    "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s

• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1

Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS

```
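
For readers skimming the log above: each "Waiting for volume ... to be attached/detached" step is essentially a poll-until-condition loop. A minimal, self-contained sketch of that pattern is below; `diskIsAttached` and its fake stand-in are assumptions for illustration, not the test's actual helpers.

```go
// Sketch only: poll until a volume reports as attached to a node, or time out.
// diskIsAttached is a stand-in for the real vSphere lookup used by the test.
package main

import (
	"fmt"
	"time"
)

func waitForVolumeAttached(diskIsAttached func(volPath, node string) (bool, error),
	volPath, node string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		attached, err := diskIsAttached(volPath, node)
		if err != nil {
			return err
		}
		if attached {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("volume %q not attached to %q within %v", volPath, node, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Fake lookup that reports "attached" after a couple of polls.
	calls := 0
	fake := func(volPath, node string) (bool, error) {
		calls++
		return calls >= 3, nil
	}
	err := waitForVolumeAttached(fake, "[vsanDatastore] example/e2e-vmdk.vmdk", "kubernetes-node1",
		10*time.Millisecond, time.Second)
	fmt.Println("attached:", err == nil)
}
```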

Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt

**Release note**:
```
None
```
2017-11-08 19:31:03 -08:00
Kubernetes Submit Queue f92f544924
Merge pull request #55275 from shyamjvs/skip-esipp-slow-tests-on-large-cluster
Automatic merge from submit-queue (batch tested with PRs 54177, 55203, 55120, 55275, 55260). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Skip ESIPP [Slow] suite of networking tests for huge clusters

Ref https://github.com/kubernetes/kubernetes/issues/52495#issuecomment-342523340

/cc @MrHohn @kubernetes/sig-network-misc
2017-11-08 18:31:11 -08:00
Kubernetes Submit Queue a701a42a82
Merge pull request #49763 from supereagle/versioned-group-clients
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

use versioned group clients from client-go

**What this PR does / why we need it**:
Some **Deprecated** group clients are still used; replace them with versioned group clients.

**Which issue this PR fixes**: fixes #49760

**Special notes for your reviewer**:
/assign @caesarxuchao

**Release note**:
```release-note
NONE
```
2017-11-08 17:13:27 -08:00
Kubernetes Submit Queue 77e5e2f9fc
Merge pull request #54819 from janetkuo/dep-inte-rolling
Automatic merge from submit-queue (batch tested with PRs 54493, 52501, 55172, 54780, 54819). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add integration test for deployment rolling update, rollback, rollover

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-08 15:41:26 -08:00
Shyam Jeedigunta 79fd1296da Remove hack for CLUSTER_IP_RANGE in e2e framework no longer needed 2017-11-09 00:27:22 +01:00
Shyam Jeedigunta 6b1f24ca1c Add shyamjvs to test/OWNERS 2017-11-08 15:44:56 +01:00
pshahzeb f2a01faeff Test to verify volume attach status after master kubelet restart 2017-11-07 19:34:38 -08:00
Kubernetes Submit Queue 42d5dc709e
Merge pull request #55259 from ironcladlou/gc-partial-discovery
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Tolerate partial discovery in garbage collector

Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.

Fixes #55022.
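
A rough sketch of the "use what was discovered, log the rest" behavior described above; the types and function here are illustrative stand-ins, not the controller's actual code.

```go
// Sketch only: tolerate partial discovery by keeping the groups that resolved
// and logging the ones that failed, instead of aborting the whole sync.
package main

import (
	"fmt"
	"log"
)

type groupResources map[string][]string // group/version -> resource names

func resolveDeletableResources(discovered groupResources, failed map[string]error) groupResources {
	for gv, err := range failed {
		// Partial failure: log it and rely on the next resync to try again.
		log.Printf("failed to discover %s: %v (will retry on next resync)", gv, err)
	}
	return discovered // whatever did resolve still drives the garbage collector
}

func main() {
	ok := groupResources{"apps/v1beta1": {"deployments", "replicasets"}}
	bad := map[string]error{"metrics.k8s.io/v1beta1": fmt.Errorf("the server is currently unable to handle the request")}
	fmt.Println(resolveDeletableResources(ok, bad))
}
```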

```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```

/cc @caesarxuchao
2017-11-07 18:53:51 -08:00
Balu Dontu 0b3e28c883 vSphere scale tests 2017-11-07 15:33:27 -08:00
Kubernetes Submit Queue 3af06ccf5b
Merge pull request #54581 from bsnchan/conformance_host_path_permissions
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Removes 'rwx' permissions for global users

- the tests make an assumption that the permissions on the /tmp dir have not
been altered

Signed-off-by: Brenda Chan <brchan@pivotal.io>



**What this PR does / why we need it**:

This PR modifies a conformance test that checks the file permissions when the `/tmp` dir is mounted.

The current tests make an assumption that the permissions on the `/tmp` dir on the host system have not been altered. We removed the check that global users need `rwx`, so the tests now only check for `dtrwxrwx`.
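
For illustration only (not the test's actual helper), a check in this spirit can look at the sticky bit and the owner/group `rwx` bits directly, without asserting anything about the world bits:

```go
// Sketch only: check that a directory is sticky and owner/group rwx,
// while staying silent about the "other" permission bits.
package main

import (
	"fmt"
	"os"
)

func hasStickyOwnerGroupRWX(path string) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	mode := info.Mode()
	sticky := mode&os.ModeSticky != 0
	ownerGroupRWX := mode.Perm()&0770 == 0770
	return sticky && ownerGroupRWX, nil
}

func main() {
	ok, err := hasStickyOwnerGroupRWX("/tmp")
	fmt.Println(ok, err)
}
```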


**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: N/A

**Special notes for your reviewer**: N/A

**Release note**:

```release-note
NONE
```
2017-11-07 15:22:27 -08:00
Anthony Yeh 092e7c7b0a
Add ReplicationController integration tests.
These are copied from ReplicaSet integration tests.
2017-11-07 14:55:47 -08:00
Shyam Jeedigunta 9356c78709 Skip ESIPP [Slow] suite of networking tests for huge clusters 2017-11-07 23:23:08 +01:00
Dan Mace c3dd82c30c Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.

Fixes #55022.
2017-11-07 16:54:49 -05:00
Kubernetes Submit Queue e1de2ad507
Merge pull request #52562 from ironcladlou/kube-scheduler-config
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Refactor kube-scheduler config API, command, and server setup

Refactor the kube-scheduler configuration API, command setup, and server setup according to the guidelines established in #32215 and using the kube-proxy refactor (#34727) as a model of a well factored component adhering to said guidelines.

* Config API: clarify meaning and use of algorithm source by replacing modality derived from bools and string emptiness checks with an explicit AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.

Fixes https://github.com/kubernetes/kubernetes/issues/52428.

@kubernetes/api-reviewers 
@kubernetes/sig-cluster-lifecycle-pr-reviews 
@kubernetes/sig-scheduling-pr-reviews 

/cc @ncdc @timothysc @bsalamat

```release-note
The kube-scheduler command now supports a `--config` flag which is the location of a file containing a serialized scheduler configuration. Most other kube-scheduler flags are now deprecated.
```
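
As a hedged illustration of the first bullet (replacing bool/empty-string modality with an explicit algorithm source), the sketch below uses made-up type names, not the real componentconfig types:

```go
// Sketch only: model "where the scheduling algorithm comes from" as an explicit
// union-like type rather than inferring it from a bool plus an empty-string check.
package main

import "fmt"

type PolicyFileSource struct{ Path string }

type AlgorithmSource struct {
	// Exactly one of these should be set.
	Provider *string           // named algorithm provider, e.g. "DefaultProvider"
	Policy   *PolicyFileSource // scheduler policy loaded from a file
}

func describe(src AlgorithmSource) string {
	switch {
	case src.Provider != nil:
		return "provider: " + *src.Provider
	case src.Policy != nil:
		return "policy file: " + src.Policy.Path
	default:
		return "no algorithm source configured"
	}
}

func main() {
	p := "DefaultProvider"
	fmt.Println(describe(AlgorithmSource{Provider: &p}))
}
```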
2017-11-07 11:21:19 -08:00
Kubernetes Submit Queue 576c9118a6
Merge pull request #53592 from frodenas/bootstrap-controller
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers

**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-07 11:21:15 -08:00
Kubernetes Submit Queue d33077526a
Merge pull request #53273 from mikedanese/authtristate
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add support for short-circuit deny in union authorizer

This change introduces no behavioral changes.

Fixes https://github.com/kubernetes/kubernetes/issues/51862

```release-note
Add support for the webhook authorizer to make a Deny decision that short-circuits the union authorizer and immediately returns Deny. 
```
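
A minimal sketch of the short-circuit behavior, using stand-in types rather than the real apiserver authorizer interfaces:

```go
// Sketch only: a union authorizer that stops at the first definite answer
// (Allow or Deny) instead of always consulting every delegate.
package main

import "fmt"

type Decision int

const (
	NoOpinion Decision = iota
	Allow
	Deny
)

type Authorizer func(user, verb, resource string) Decision

func union(authorizers ...Authorizer) Authorizer {
	return func(user, verb, resource string) Decision {
		for _, a := range authorizers {
			switch d := a(user, verb, resource); d {
			case Allow, Deny:
				return d // short-circuit: a definite answer ends the chain
			}
			// NoOpinion: fall through to the next authorizer
		}
		return NoOpinion
	}
}

func main() {
	denyBob := func(user, _, _ string) Decision {
		if user == "bob" {
			return Deny
		}
		return NoOpinion
	}
	allowAll := func(_, _, _ string) Decision { return Allow }
	authz := union(denyBob, allowAll)
	fmt.Println(authz("bob", "get", "pods"))   // 2 (Deny); allowAll is never consulted
	fmt.Println(authz("alice", "get", "pods")) // 1 (Allow)
}
```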
2017-11-07 09:25:37 -08:00
Kubernetes Submit Queue ef8746af3d
Merge pull request #55241 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds e2e tests for Node Autoprovisioning:

This PR adds e2e tests for Node Autoprovisioning:  …
 - should create new node if there is no node for node selector
2017-11-07 08:32:48 -08:00
Kubernetes Submit Queue b0ff44bf56
Merge pull request #55198 from yanxuean/unuse-if
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

delete if-else branch

Signed-off-by: yanxuean <yan.xuean@zte.com.cn>

**What this PR does / why we need it**:
The if-else branch is redundant.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-07 08:32:37 -08:00
Kubernetes Submit Queue d07bc1485c
Merge pull request #54279 from guangxuli/fix-sig-node-e2e
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Migrate pod relevant e2e tests to sig node

**What this PR does / why we need it**:

Migrate pod-relevant e2e tests to sig-node.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161

**Special notes for your reviewer**:

**Release note**:
```release-note
none
```
2017-11-07 06:54:20 -08:00
Krzysztof Jastrzebski bdb1e7efa3 Adds e2e tests for Node Autoprovisioning:
- should create new node if there is no node for node selector
2017-11-07 15:43:59 +01:00
Dan Mace efb2bb71cd Refactor scheduler config API
Refactor the kube-scheduler configuration API, command setup, and server
setup according to the guidelines established in #32215 and using the
kube-proxy refactor (#34727) as a model of a well factored component
adhering to said guidelines.

* Config API: clarify meaning and use of algorithm source by replacing
modality derived from bools and string emptiness checks with an explicit
AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.

Fixes #52428.
2017-11-07 09:41:39 -05:00
Dan Mace 25ca287707 Update generated files 2017-11-07 09:41:35 -05:00
Kubernetes Submit Queue 0c76d09c81
Merge pull request #55231 from wackxu/uprst
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove unused constant

**What this PR does / why we need it**:

This constant is never used, so we can remove it safely.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-07 06:03:19 -08:00
Kubernetes Submit Queue 947b6730ac
Merge pull request #55061 from krzysztof-jastrzebski/e2e
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds e2e tests for Node Autoprovisioning:

Adds e2e tests for Node Autoprovisioning:
     - shouldn't add new node group if not needed
     - shouldn't scale up if cores limit too low, should scale up after limit is changed
2017-11-07 06:03:13 -08:00
wackxu 249439f6a6 remove unused constant 2017-11-07 19:23:44 +08:00
Krzysztof Jastrzebski c8b807837a Adds e2e tests for Node Autoprovisioning:
- shouldn't add new node group if not needed
 - shouldn't scale up if cores limit too low, should scale up after limit is changed
2017-11-07 10:16:50 +01:00
Kubernetes Submit Queue dd00bc65f9
Merge pull request #55122 from MrHohn/fix-session-affinity-e2e
Automatic merge from submit-queue (batch tested with PRs 55114, 52976, 54871, 55122, 55140). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Don't share nodePort service in session affinity tests

**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/54524, https://github.com/kubernetes/kubernetes/issues/54571.

Spent some time digging into it today; this test is flaky mostly because it sends service requests before kube-proxy has reacted to the session-affinity update, so multiple endpoints respond instead of one. It is flakier in alpha CIs, probably due to different test sequences.

This PR creates a separate service with `sessionAffinity=ClientIP`, so there is no race between the test starting and kube-proxy reacting. It also seems inappropriate to tweak the `config.NodePortService`, which is shared by other networking tests.
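
For reference, such a dedicated service looks roughly like the sketch below, built with the typed core/v1 structs; the names and selector are placeholders, not the test's own object.

```go
// Sketch only: a separate NodePort service with ClientIP session affinity,
// so the affinity test does not mutate the shared NodePort service.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func sessionAffinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"}, // placeholder name
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": "affinity-backend"}, // placeholder selector
			Ports:           []corev1.ServicePort{{Port: 80}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", sessionAffinityService().Spec)
}
```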

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # (will mark them fixed later).

**Special notes for your reviewer**:
/assign @m1093782566 @bowei 
cc @spiffxp

**Release note**:

```release-note
NONE

```
2017-11-06 23:19:21 -08:00
supereagle b694d51842 use versioned group clients from client-go 2017-11-07 14:47:22 +08:00
Kubernetes Submit Queue 743d11fbd5
Merge pull request #55047 from nikhiljindal/ingressTest
Automatic merge from submit-queue (batch tested with PRs 55093, 54966, 55047, 54971, 54786). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adding an e2e test for gce multi cluster ingress

Basic test that verifies that multi cluster ingress gets the instance groups annotation.

Ref https://github.com/kubernetes/ingress-gce/issues/71

cc @csbell @G-Harmon @nicksardo @bowei 

```release-note
NONE
```
2017-11-06 20:38:56 -08:00
guangxuli 7c7392014d update autogen BUILD files 2017-11-07 11:17:00 +08:00
guangxuli a50bc8e7cb migrate pod-relevant e2e tests to sig node 2017-11-07 11:16:59 +08:00
yanxuean 00eb0439d4 delete if-else branch
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
2017-11-07 09:57:59 +08:00
Kubernetes Submit Queue fdeeed1001
Merge pull request #54688 from yanxuean/besteffort
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

e2e-node: the value of bestEffortCgroup is wrong

Signed-off-by: yanxuean <yan.xuean@zte.com.cn>

**What this PR does / why we need it**:
The value of bestEffortCgroup is wrong in e2e-node, so the test case is actually invalid.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2017-11-06 15:33:50 -08:00
Kubernetes Submit Queue 67c9e7419c
Merge pull request #54586 from DirectXMan12/bug/fix-incorrect-scale-and-hpa-gvks
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs

The HPA e2es failed to actually set `apiVersion` on the created HPAs, which was previously ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).

Additionally, the `apps` resources have their own version of scale.  Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.

Together, these broke some of the HPA e2es.

Fixes #54574

```release-note
NONE
```
2017-11-06 15:33:43 -08:00
Kubernetes Submit Queue 2907168a87
Merge pull request #53645 from xiangpengzhao/fix-kubeproxy-cc
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

move KubeProxyConfiguration out of componentconfig API group

**What this PR does / why we need it**:
move KubeProxyConfiguration out of componentconfig API group

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53577

**Special notes for your reviewer**:
/cc @thockin @ncdc 

**Release note**:

```release-note
NONE
```
2017-11-06 14:55:02 -08:00
Brendan Creane 1e7f01e9a2 Add named port egress test 2017-11-06 14:34:01 -08:00
nikhiljindal 2e1d61a0d5 Adding an e2e test for gce multi cluster ingress 2017-11-06 13:48:35 -08:00
Kubernetes Submit Queue e6df9abbc8
Merge pull request #55068 from mml/e2e-version
Automatic merge from submit-queue (batch tested with PRs 55034, 55068). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Clarify what each "version" means.

Some folks were getting confused by this output.

Fixes #54821 

```release-note
NONE
```

/area conformance
/sig architecture
/assign @timothysc @WilliamDenniss
2017-11-06 12:29:12 -08:00
Kubernetes Submit Queue a8fc7f691f
Merge pull request #54990 from shyamjvs/retry-pod-list-in-load-test
Automatic merge from submit-queue (batch tested with PRs 55169, 54990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Retry pod listing call in load test if possible instead of failing

The latest run of the 5k-node performance test failed due to this (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/57):

```
listing pods from rc load-small-10363
Expected error:
    ...
    Get https://35.196.185.248/api/v1/namespaces/e2e-tests-load-30-nodepods-14-f9gcv/pods?labelSelector=name%3Dload-small-10363&resourceVersion=0: read tcp 172.17.0.5:40524->35.196.185.248:443: read: connection reset by peer
not to have occurred
```
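
The fix boils down to wrapping the list call in a bounded retry when the error looks transient. A generic sketch of that pattern (the helper names and the transient-error heuristic are illustrative, not the test's code):

```go
// Sketch only: retry a listing call a few times on transient errors
// instead of failing the test on the first "connection reset by peer".
package main

import (
	"fmt"
	"strings"
	"time"
)

func isTransient(err error) bool {
	// Illustrative heuristic; the real test has its own notion of retryable errors.
	return err != nil && strings.Contains(err.Error(), "connection reset by peer")
}

func listWithRetry(list func() ([]string, error), attempts int, backoff time.Duration) ([]string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		pods, err := list()
		if err == nil {
			return pods, nil
		}
		if !isTransient(err) {
			return nil, err
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("listing pods failed after %d attempts: %v", attempts, lastErr)
}

func main() {
	calls := 0
	flaky := func() ([]string, error) {
		calls++
		if calls < 3 {
			return nil, fmt.Errorf("read tcp: connection reset by peer")
		}
		return []string{"load-small-10363-abc12"}, nil
	}
	fmt.Println(listWithRetry(flaky, 5, 10*time.Millisecond))
}
```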

/cc @wojtek-t @porridge
2017-11-06 08:27:39 -08:00
Shyam Jeedigunta 2a0b7657c6 Retry pod listing call in load test if possible instead of failing 2017-11-06 15:05:27 +01:00
Kubernetes Submit Queue 824533d217
Merge pull request #55123 from caesarxuchao/remove-binary
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove the wrongly checked in binary

This is awkward :(
2017-11-06 05:46:19 -08:00
MrHohn e07a9c4ce6 Don't share nodePort service in session affinity tests 2017-11-05 22:42:33 -08:00
Chao Xu 7430e0a489 remove the wrongly checked in binary 2017-11-05 15:52:16 -08:00
Paulo Pires d2edb8af9e
fix scheduler predicates test that may violate DNS label rules
This commit fixes an issue where in clusters which have FQDN as the node names,
one of the scheduling predicates tests will fail because it will try and run a
pod with a name that violates DNS-1123 rules. As an example, one such pod name
could look like "filler-pod-kube-node-0.kubelet.mesos".
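
A small, self-contained illustration of the problem; the regex and sanitizing helper below are examples, not the commit's actual fix.

```go
// Sketch only: DNS-1123 labels forbid dots, so a name derived from an FQDN
// node name can fail validation wherever a label is required. One defensive
// pattern is to check or sanitize the generated name first.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

func fillerPodName(nodeName string) string {
	// Replace characters that are not valid in a DNS-1123 label.
	return "filler-pod-" + strings.Replace(strings.ToLower(nodeName), ".", "-", -1)
}

func main() {
	bad := "filler-pod-kube-node-0.kubelet.mesos"
	fmt.Println(dns1123Label.MatchString(bad))                                        // false: dots are not allowed
	fmt.Println(dns1123Label.MatchString(fillerPodName("kube-node-0.kubelet.mesos"))) // true
}
```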

Signed-off-by: Paulo Pires <pjpires@gmail.com>
2017-11-05 16:01:39 +00:00
NickrenREN d379a9a3ff Add downward_api e2e test for LocalStorageCapacityIsolation feature 2017-11-04 12:50:34 +08:00
NickrenREN 6a8d1545e6 Add resource quota e2e test for LocalStorageCapacityIsolation feature 2017-11-04 12:48:38 +08:00
NickrenREN 1633cdde05 Add limitrange e2e test for LocalStorageCapacityIsolation feature 2017-11-04 12:47:34 +08:00
xiangpengzhao 5c8c1f43fa move KubeProxyConfiguration out of componentconfig API group 2017-11-04 11:38:57 +08:00
Kubernetes Submit Queue 2ecb368026
Merge pull request #53679 from kow3ns/workloadsv1
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Workloads V1

**What this PR does / why we need it**: This PR promotes the Deployment, ReplicaSet, DaemonSet, StatefulSet, and ControllerRevision kinds to the apps/v1 group version.

https://github.com/kubernetes/features/issues/353

**Special notes for your reviewer**:
There will be at least two followups to this PR. The first to add a scale sub-resource when the correct location is resolved, and the second to deal with Conditions in the workloads API.

While it would have been preferable to move the kinds individually, imposing a smaller burden on reviewers, this proved impracticable due to the intricacies of version resolution in kubectl for objects of different kinds in the same group.

```release-note
DaemonSet, Deployment, ReplicaSet, and StatefulSet have been promoted to GA and are available in the apps/v1 group version.
```
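
After the promotion, the workloads kinds are addressed via the apps/v1 group version. A hedged sketch of constructing a Deployment against it (the name, labels, and image are placeholders):

```go
// Sketch only: a Deployment expressed against the promoted apps/v1 group version.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(2)
	labels := map[string]string{"app": "demo"} // placeholder labels
	d := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "demo", Image: "nginx:1.13"}},
				},
			},
		},
	}
	fmt.Println(d.TypeMeta.APIVersion, d.ObjectMeta.Name, *d.Spec.Replicas)
}
```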
2017-11-03 15:17:16 -07:00
Mike Danese 12125455d8 move authorizers over to new interface 2017-11-03 13:46:28 -07:00
Kubernetes Submit Queue 2a40c48424
Merge pull request #54653 from ihmccreery/metadata-proxy-prom-to-sd
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add prometheus-to-sd-exporter to metadata-proxy addon; bump to v0.1.4

**What this PR does / why we need it**: Add metrics exporters to the metadata-proxy addon for GCE.  Work toward #8867.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-03 13:46:09 -07:00
Kubernetes Submit Queue ade0111190
Merge pull request #55050 from xiangpengzhao/clean-kubectl
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove version check in kubectl e2e test.

**What this PR does / why we need it**:
We don't need to check these versions for kubectl e2e tests in the current cycle.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55053

**Special notes for your reviewer**:
/cc @liggitt 
since you're also from sig-cli-maintainers :)

**Release note**:

```release-note
NONE
```
2017-11-03 12:56:08 -07:00
Kubernetes Submit Queue 6f98cc9f6a
Merge pull request #55017 from nikhita/remove-tpr-extensions
Automatic merge from submit-queue (batch tested with PRs 51401, 54056, 54977, 55017, 55052). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

extensions: remove TPR remnants

The extensions group still had the TPR types and generated client. Having these in the codebase doesn't create any problems, but it would be good to clean them up, especially since TPR access was removed in 1.8.

**Release note**:

```release-note
NONE
```

/assign @sttts @deads2k
2017-11-03 12:08:02 -07:00
Kubernetes Submit Queue ff8acb30d8
Merge pull request #54977 from zouyee/e2e
Automatic merge from submit-queue (batch tested with PRs 51401, 54056, 54977, 55017, 55052). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

[test/e2e_node]Redirect dl.k8s.io to the kubernetes-release GCS bucket

**What this PR does / why we need it**:
fixes [#33726](https://github.com/kubernetes/kubernetes/pull/33726): Redirect dl.k8s.io to the kubernetes-release GCS bucket
ref :[kubernetes/k8s.io#15](https://github.com/kubernetes/k8s.io/pull/15)

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```
NONE
```
2017-11-03 12:07:58 -07:00
Isaac Hollander McCreery be8aaf9ff8 Add prometheus-to-sd-exporter to metadata-proxy addon; bump proxy to v0.1.4 and e2e to v0.0.2; remove configmap 2017-11-03 10:23:05 -07:00
Matt Liggett c64a5e8620 Clarify what each "version" means.
Some folks were getting confused by this output.

Fixes #54821
2017-11-03 10:11:06 -07:00
Kubernetes Submit Queue 92952cfe77
Merge pull request #55053 from xiangpengzhao/version-check-auth
Automatic merge from submit-queue (batch tested with PRs 55063, 54523, 55053). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Don't need to check version for auth e2e test

**What this PR does / why we need it**:
In the 1.9 cycle, some e2e tests no longer need to run against such old versions.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: #55050

**Special notes for your reviewer**:
/cc @tallclair @liggitt

**Release note**:

```release-note
NONE
```
2017-11-03 10:00:15 -07:00
xiangpengzhao 8a24b964c0 Auto generated BUILD file 2017-11-04 00:04:37 +08:00
xiangpengzhao eb27e1c471 Remove version check for kubectl portfoward. 2017-11-04 00:03:34 +08:00
Nikhita Raghunath a58c171bea remove tpr from test_owners.csv 2017-11-03 21:17:10 +05:30
Nikhita Raghunath 3b0b95ecbf Remove TPR remnants
There are still TPR types and generated client
in the extensions group. It is better to clean
that up, now that it has been removed from master.
2017-11-03 21:15:58 +05:30
xiangpengzhao 32675e6f62 Remove check for SubResourcePodProxyVersion and SubResourceServiceAndNodeProxyVersion 2017-11-03 23:11:09 +08:00
Marcin Owsiany c2ab5c8246 Fix a typo. 2017-11-03 13:43:32 +01:00
Krzysztof Jastrzebski 7a5e9582bc Add scale down to 1 e2e test. 2017-11-03 11:48:37 +01:00
Kubernetes Submit Queue aa66d8cb98
Merge pull request #54991 from krzysztof-jastrzebski/master
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Node autoprovisioning e2e test.

This PR adds a test scenario for cluster-autoscaler in GKE for node autoprovisioning.
2017-11-03 03:19:17 -07:00
xiangpengzhao 026197fb04 Auto generated BUILD file 2017-11-03 16:55:36 +08:00
xiangpengzhao c7ce2f6a37 Don't need to check version for auth e2e test 2017-11-03 16:53:52 +08:00
xiangpengzhao 0242de0c5d Remove version check in kubectl e2e test. 2017-11-03 15:28:37 +08:00
Solly Ross 2c9fc43294 [client-go] Add apps.Scale support to Scale client
apps/v1betaX inadvertently contains its own variant of Scale. In
order to support scaling Deployments, ReplicaSets, etc., we need to support
these versions of Scale as well.
2017-11-02 22:20:39 -04:00
Janet Kuo 26afcdbe73 Add integration test for deployment rolling update, rollback, rollover 2017-11-02 14:22:03 -07:00
Janet Kuo 98d26aeb23 Wait for markPodsReady goroutine to finish 2017-11-02 14:22:03 -07:00
Janet Kuo 3233a07ec1 Integration test keeps marking pods ready until deployment is complete 2017-11-02 14:22:03 -07:00
Kenneth Owens 26bf978c07 Promotes the StatefulSet, ControllerRevision, Deployment, and ReplicaSet kinds to the apps/v1 group version. 2017-11-02 14:19:04 -07:00
Solly Ross 32b47a36f6 [e2e] make sure to specify APIVersion in HPA tests
Previously, the HPA controller ignored APIVersion when resolving the
scale subresource for a kind, meaning if it was set incorrectly in the
HPA's scaleTargetRef, it would not matter.  This was the case for
several of the HPA e2e tests.

Since the polymorphic scale client merged, APIVersion now matters.  This
updates the HPA e2es to care about APIVersion, by passing kind as a full
GroupVersionKind, and not just a string.
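
In practice this means carrying the full group/version along with the kind. A hedged sketch (the helper name and target are mine, not the e2e code) of deriving a `scaleTargetRef` from a `schema.GroupVersionKind`:

```go
// Sketch only: pass the target as a full GroupVersionKind so the apiVersion in
// scaleTargetRef is populated rather than left blank.
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func scaleTargetRefFor(gvk schema.GroupVersionKind, name string) autoscalingv1.CrossVersionObjectReference {
	return autoscalingv1.CrossVersionObjectReference{
		APIVersion: gvk.GroupVersion().String(), // e.g. "apps/v1beta2", not just the kind string
		Kind:       gvk.Kind,
		Name:       name,
	}
}

func main() {
	gvk := schema.GroupVersionKind{Group: "apps", Version: "v1beta2", Kind: "Deployment"}
	ref := scaleTargetRefFor(gvk, "rs-consumer") // placeholder target name
	fmt.Printf("%+v\n", ref)
}
```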
2017-11-02 17:14:46 -04:00