Commit Graph

9314 Commits (ba09291ba785b0b91eb22f6cce82ae6392d0d8b3)

Author SHA1 Message Date
Jiaying Zhang 4a1a205109 Changes nvidia-gpu device plugin addon config settings:
- Runs as system critical pod
- Makes resource limits match its resource requests
- Modifies test/e2e/scheduling/nvidia-gpus.go to cope with the recent
change of running the device plugin as a system addon.
- The resource settings of the addon are based on the test results
from 8 nvidia-tesla-k80 gpus.
2017-11-20 17:32:53 -08:00
Jing Xu 75ef18c4d3 Add Pod-level local ephemeral storage metric in Summary API
This PR adds a pod-level ephemeral storage metric to the Summary API.
Pod-level ephemeral storage usage is the sum of usage from all containers and
local ephemeral volumes, including EmptyDir (if not backed by memory or
hugepages), ConfigMap, and downwardAPI.
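
To make the aggregation concrete, here is a minimal sketch of the summation described above; the types and field names are illustrative stand-ins, not the actual Summary API structs.

```go
package main

import "fmt"

// Illustrative stand-ins for per-container and per-volume usage figures.
type ContainerStats struct{ UsedBytes uint64 }

type VolumeStats struct {
	UsedBytes uint64
	Ephemeral bool // true for disk-backed EmptyDir, ConfigMap, downward API volumes
}

// podEphemeralStorageBytes sums container usage plus local ephemeral volume usage.
func podEphemeralStorageBytes(containers []ContainerStats, volumes []VolumeStats) uint64 {
	var total uint64
	for _, c := range containers {
		total += c.UsedBytes
	}
	for _, v := range volumes {
		if v.Ephemeral {
			total += v.UsedBytes
		}
	}
	return total
}

func main() {
	containers := []ContainerStats{{UsedBytes: 10 << 20}, {UsedBytes: 5 << 20}}
	volumes := []VolumeStats{{UsedBytes: 1 << 20, Ephemeral: true}}
	fmt.Println(podEphemeralStorageBytes(containers, volumes)) // 16777216 (16 MiB)
}
```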
2017-11-20 16:32:38 -08:00
David Zhu e5aec8645d Changed GetAllZones to only get zones with nodes that are currently
running (renamed to GetAllCurrentZones). Added E2E test to confirm this
behavior.

Added node informer to cloud-provider controller to keep track of zones
with k8s nodes in them.
2017-11-20 16:04:18 -08:00
Jun Xiang Tee 25469e9b44 convert testScaledRolloutDeployment e2e test to integration test 2017-11-20 15:36:27 -08:00
Kubernetes Submit Queue 2cbb07a439
Merge pull request #55871 from atlassian/unstructured-converter-no-mutation
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix potential unexpected object mutation that can lead to data races

**What this PR does / why we need it**:
In #51526 I introduced an optimization: do a deep copy instead of a to-and-from-JSON roundtrip to convert anything that implements `runtime.Unstructured`. I just discovered that the method used there, `UnstructuredContent()`, in both `Unstructured` and `UnstructuredList` may mutate the original object.
2008750398/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go (L87-L92)
7c10cbc642/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go (L58-L75)
This is problematic because previously (before #51526) there was no mutation, and because this is unexpected and may lead to data races - it is bad behaviour to mutate the original object when you just want a copy of it.
This PR fixes the issue.

Without the fix, the tests I've added fail because by the time the comparison is done the original object is no longer the same:
```
converter_test.go:154: Object changed, diff: 
object.Object[items]:
  a: []interface {}{}
  b: <nil>
converter_test.go:154: Object changed, diff: 
object.Object[items]:
  a: []interface {}{map[string]interface {}{"kind":"Pod"}}
  b: <nil>
```

However, the underlying issue is not fixed here - `UnstructuredContent()` is brittle and dangerous. The method name does not imply that it mutates data when you call it, and the godoc does not mention that either:
509df603b1/staging/src/k8s.io/apimachinery/pkg/runtime/interfaces.go (L233-L249)
Something needs to be done about it IMO.
Also, the `UnstructuredContent()` implementation in `UnstructuredList` does not implement the behaviour required by the godoc of `runtime.Unstructured`.
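
For readers unfamiliar with the failure mode, below is a self-contained, deliberately simplified sketch (not the apimachinery code) of how a content accessor that lazily initializes its receiver's map ends up mutating the object a caller only wanted to copy:

```go
package main

import "fmt"

// Simplified stand-in for an unstructured list object.
type UnstructuredList struct {
	Object map[string]interface{}
	Items  []map[string]interface{}
}

// Content lazily materializes "items" into u.Object, mutating the receiver --
// the unexpected side effect described above.
func (u *UnstructuredList) Content() map[string]interface{} {
	if u.Object == nil {
		u.Object = map[string]interface{}{}
	}
	items := make([]interface{}, 0, len(u.Items))
	for _, item := range u.Items {
		items = append(items, item)
	}
	u.Object["items"] = items // side effect on the original object
	return u.Object
}

func main() {
	list := &UnstructuredList{Items: []map[string]interface{}{{"kind": "Pod"}}}
	before := fmt.Sprintf("%v", list.Object)
	_ = list.Content() // caller only wanted the content, e.g. to copy it
	after := fmt.Sprintf("%v", list.Object)
	fmt.Println(before != after) // true: asking for the content changed the object
}
```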

**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
/assign @sttts
2017-11-20 08:58:37 -08:00
Kubernetes Submit Queue d4724d7e43
Merge pull request #55056 from porridge/typo-percentil
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix a typo.

**Release note**:
```release-note
NONE
```
2017-11-20 01:40:50 -08:00
Kubernetes Submit Queue dcdb423ef4
Merge pull request #55186 from bcreane/named-port-egress
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

NetworkPolicy e2e: named port egress test

**What this PR does / why we need it**:
Add an e2e NetworkPolicy test that ensures that an egress rule that specifies a named port properly applies to egress traffic.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52040

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-19 19:57:17 -08:00
Mikhail Mazurskiy 3e342077d5
Fix potential unexpected object mutation that can lead to data races 2017-11-19 08:54:25 +11:00
Kubernetes Submit Queue 3679b54b19
Merge pull request #55898 from dashpole/fix_flaky_allocatable
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix Flaky Allocatable Setup Tests

**What this PR does / why we need it**:
Fixes a flaky node e2e serial test.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55830

**Special notes for your reviewer**:
The test was flaking because we were reading the node status before the restarted kubelet had written it.
This PR fixes that by waiting until we see an updated node status (looking at the condition's heartbeat time).
This also fixes an incorrect error message.
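
A minimal sketch of that waiting logic, under assumed types (`getReadyCondition` is an assumed lookup callback, not the framework's actual helper):

```go
package e2enode

import (
	"errors"
	"time"
)

// NodeCondition is a pared-down stand-in for the API type.
type NodeCondition struct {
	Type              string
	LastHeartbeatTime time.Time
}

// waitForFreshNodeStatus polls until the Ready condition's heartbeat is newer
// than the kubelet restart, so the status we read was written by the
// restarted kubelet rather than the old one.
func waitForFreshNodeStatus(restartedAt time.Time, getReadyCondition func() (NodeCondition, error)) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if cond, err := getReadyCondition(); err == nil && cond.LastHeartbeatTime.After(restartedAt) {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New("timed out waiting for a node status written after the kubelet restart")
}
```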

**Release note**:
```release-note
NONE
```
2017-11-18 13:13:24 -08:00
Kubernetes Submit Queue 7d1085e122
Merge pull request #54837 from xiangpengzhao/conf-test
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use framework.ConformanceIt for node e2e conformance tests

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #54726 #53909

**Special notes for your reviewer**:
/cc @mml 

**Release note**:

```release-note
NONE
```
2017-11-18 13:13:17 -08:00
Kubernetes Submit Queue 87d45a54bd
Merge pull request #55940 from shyamjvs/reduce-spam-from-resource-gatherer
Automatic merge from submit-queue (batch tested with PRs 55233, 55927, 55903, 54867, 55940). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Control logs verbosity in resource gatherer

PR https://github.com/kubernetes/kubernetes/pull/53541 added some logging in resource gatherer which is a bit too verbose for normal purposes.
As a result, we're seeing a lot of spam in our large cluster performance tests (e.g - https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/8046/build-log.txt)

This PR makes the verbosity of those logs controllable through an option. It's off by default, but we turn it on for the GPU test to preserve behavior there.

/cc @jiayingz @mindprince
2017-11-18 12:26:18 -08:00
Kubernetes Submit Queue 941c6aa1db
Merge pull request #55835 from smarterclayton/table_printer_meta
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Table printers and server generation should always copy ListMeta

Tables should be a mapping from lists, so if the incoming object has list metadata, add it to the table. Paging over server-side tables was broken without this. Add tests on the generic creater and on the resttest compatibility.
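
A hedged sketch of that rule, with simplified stand-ins for the metav1 types: whenever a list is converted into a Table, the list's resourceVersion and continue token are carried over so server-side paging keeps working.

```go
package printers

// Simplified stand-ins for metav1.ListMeta and metav1.Table.
type ListMeta struct {
	ResourceVersion string
	Continue        string
}

type Table struct {
	ListMeta ListMeta
	Rows     []interface{}
}

// tableFromList always copies the list's metadata into the resulting table.
func tableFromList(list ListMeta, rows []interface{}) *Table {
	return &Table{
		ListMeta: ListMeta{
			ResourceVersion: list.ResourceVersion,
			Continue:        list.Continue,
		},
		Rows: rows,
	}
}
```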


@deads2k
2017-11-18 10:46:35 -08:00
Kubernetes Submit Queue ef3b27cbd4
Merge pull request #55642 from dashpole/disable_cadvisor_disk_for_cri
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Disable container disk metrics when using the CRI stats integration

Issue: https://github.com/kubernetes/kubernetes/issues/51798

As explained in the issue, runtimes which make use of the CRI Stats API still have the performance overhead of collecting those same stats through cAdvisor.
The CRI Stats API has metrics for CPU, Memory, and Disk.  This PR significantly reduces the added overhead due to collecting these stats in both cAdvisor and in the runtime.
This PR disables container disk metrics, which are very expensive to collect.

This PR does not disable node-level disk stats, as the "Raw" container handler does not currently respect ignoring DiskUsageMetrics.
This PR factors out the logic for determining whether or not to use the CRI stats provider into a helper function, as cAdvisor is instantiated before it is passed to the kubelet as a dependency.
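
A hedged sketch of such a helper; the parameter names and the remote-runtime check are illustrative assumptions rather than the kubelet's actual code.

```go
package kubeletstats

import "strings"

// UseCRIStatsProvider reports whether container stats should come from the
// CRI runtime rather than from cAdvisor.
func UseCRIStatsProvider(containerRuntime, runtimeEndpoint string) bool {
	// Built-in runtimes keep using cAdvisor; remote runtimes serve CRI stats.
	return containerRuntime == "remote" && strings.TrimSpace(runtimeEndpoint) != ""
}

// DisableContainerDiskMetrics reports whether cAdvisor should be told to skip
// per-container disk usage collection (node-level disk stats stay on).
func DisableContainerDiskMetrics(containerRuntime, runtimeEndpoint string) bool {
	return UseCRIStatsProvider(containerRuntime, runtimeEndpoint)
}
```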

cc @kubernetes/sig-node-pr-reviews @derekwaynecarr  
/kind feature
/sig node

/assign @Random-Liu @derekwaynecarr
2017-11-18 10:46:30 -08:00
David Ashpole 527611ee41 remove disk allocatable evictions 2017-11-18 10:34:59 -08:00
Kubernetes Submit Queue 2d972c19bf
Merge pull request #55737 from mindprince/update-nvidia-urls
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Update URLs for nvidia gpu device plugin and nvidia driver installer.

Device plugin is now an addon and its manifest is now in kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains device plugin.

This is needed after https://github.com/kubernetes/kubernetes/pull/54826 and https://github.com/GoogleCloudPlatform/container-engine-accelerators/pull/25

**Release note**:
```release-note
NONE
```

/sig scheduling
2017-11-18 09:36:05 -08:00
Kubernetes Submit Queue 2a711199db
Merge pull request #55705 from krzysztof-jastrzebski/e2e
Automatic merge from submit-queue (batch tested with PRs 54556, 55379, 55881, 55891, 55705). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Adds node auto-repair e2e tests.

This PR adds node auto-repair e2e tests.
2017-11-18 07:53:48 -08:00
Chao Xu 0b3ee54076 fix webhook e2e test cleanup 2017-11-17 21:02:47 -08:00
Chao Xu 6193360eb5 generated bazel 2017-11-17 21:02:47 -08:00
Chao Xu ea123f82aa Adding the mutating webhook 2017-11-17 21:02:47 -08:00
Kubernetes Submit Queue 2aaab817de
Merge pull request #55420 from cblecker/go1.9.2
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Upgrade to go1.9.2

**What this PR does / why we need it**:
Use go1.9.2, containing a number of bug fixes: https://github.com/golang/go/issues?q=milestone%3AGo1.9.2

**Release note**:
```release-note
Upgrade to go1.9.2
```
2017-11-17 20:24:42 -08:00
Christoph Blecker 82737e730c
Upgrade to go1.9.2 2017-11-17 16:27:17 -08:00
rohitjogvmw 79e1da68d2 Updating vSphere Cloud Provider (VCP) to support a k8s cluster spread across multiple ESXi clusters, datacenters or even vSphere vCenters
- vsphere.conf (cloud-config) is now needed only on the master node
- VCP uses the OS hostname and not the vSphere inventory name
- VCP is now resilient to VM inventory name change and VM migration
2017-11-17 14:49:32 -08:00
cheftako dac3c2e168 Admission request/response handling
AdmissionResponse allows a mutating webhook to send the apiserver a JSON patch
to mutate the object.
This reflects the imperative nature of AdmissionReview. It adds
AdmissionRequest and AdmissionResponse in place of status/spec.
The AdmissionResponse then allows the mutating webhook
to send back a JSON patch with the mutated version of the requested
object.
Fixed the integration test to clean up properly.
Switched test image to 1.8v5 to reflect API changes.
Make sure to cache the test framework client for the cleanup test code.
Switched to pointer for patch type.
Factored in @liggitt's feedback.
Factored in @lavalamp's feedback.
2017-11-17 14:22:55 -08:00
Kubernetes Submit Queue 0881a2281e
Merge pull request #55525 from miaoyq/fixes-55505
Automatic merge from submit-queue (batch tested with PRs 55254, 55525, 50108, 54674, 55263). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Correct clean up actions in e2e tests

**What this PR does / why we need it**:
Remove the duplicate "cleanup action" code in `test/e2e/e2e.go`, and use the cleanup code in `test/e2e/framework` instead.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55505 

**Special notes for your reviewer**:

**Release note**:

```release-note

```
2017-11-17 13:34:08 -08:00
David Zhu f780eefd39 Set up alternate mount point for SCSI/NVMe local SSD by UUID in /mnt/disks/by-uuid/, set up ability to have unformatted disk symlinked in /dev/disk/by-uuid/. Added tests. Preserved backwards compatibility. 2017-11-17 10:56:48 -08:00
Clayton Coleman 8db90f1ee6
API chunking tests should fail if limit is breached
Chunking is now beta and on by default. The kops job is still using
etcd2 which does not support chunking, so flag the test as skipped until
kops is updated to a supported etcd version.
2017-11-17 10:30:35 -05:00
Clayton Coleman d2a62fd422 Table printers and server generation should always copy ListMeta
Tables should be a mapping from lists, so if the incoming object has
list metadata, add it to the table. This allows paging over server-side tables.
Add tests on the generic creater and on the resttest compatibility.
2017-11-17 10:30:32 -05:00
Shyam Jeedigunta fce28995e1 Control logs verbosity in resource gatherer 2017-11-17 13:03:32 +01:00
Kubernetes Submit Queue 00fe2cfe6c
Merge pull request #54823 from mtaufen/structure-eviction-thresholds
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Lift embedded structure out of eviction-related KubeletConfiguration fields

- Changes the following KubeletConfiguration fields from `string` to
`map[string]string`:
  - `EvictionHard`
  - `EvictionSoft`
  - `EvictionSoftGracePeriod`
  - `EvictionMinimumReclaim`
- Adds flag parsing shims to maintain Kubelet's public flags API, while
enabling structured input in the file API.
- Also removes `kubeletconfig.ConfigurationMap`, which was an ad-hoc flag
parsing shim living in the kubeletconfig API group, and replaces it
with the `MapStringString` shim introduced in this PR. Flag parsing
shims belong in a common place, not in the kubeletconfig API.
I manually audited these to ensure that this wouldn't cause errors
parsing the command line for syntax that would have previously been
error free (`kubeletconfig.ConfigurationMap` was unique in that it
allowed keys to be provided on the CLI without values. I believe this was
done in `flags.ConfigurationMap` to facilitate the `--node-labels` flag,
which rightfully accepts value-free keys, and that this shim was then
just copied to `kubeletconfig`). Fortunately, the affected fields
(`ExperimentalQOSReserved`, `SystemReserved`, and `KubeReserved`) expect
non-empty strings in the values of the map, and as a result passing the
empty string is already an error. Thus requiring keys shouldn't break
anyone's scripts.
- Updates code and tests accordingly.

Regarding eviction operators, directionality is already implicit in the
signal type (for a given signal, the decision to evict will be made when
crossing the threshold from either above or below, never both). There is
no need to expose an operator, such as `<`, in the API. By changing
`EvictionHard` and `EvictionSoft` to `map[string]string`, this PR
simplifies the experience of working with these fields via the
`KubeletConfiguration` type. Again, flags stay the same.
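
As a concrete illustration of the flag-shim idea, here is a minimal sketch against the standard library's `flag` package (not the actual Kubernetes utility code): a map-valued field keeps its `key=value,key=value` command-line syntax while a config file can supply the same field as a plain map.

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// MapStringString parses "key=value,key=value" flag input into a map.
type MapStringString struct{ Map *map[string]string }

func (m *MapStringString) String() string {
	if m == nil || m.Map == nil {
		return ""
	}
	return fmt.Sprint(*m.Map)
}

func (m *MapStringString) Set(value string) error {
	for _, pair := range strings.Split(value, ",") {
		kv := strings.SplitN(pair, "=", 2)
		// Keys and values must be non-empty, matching the note above that
		// empty values are already an error for the affected fields.
		if len(kv) != 2 || kv[0] == "" || kv[1] == "" {
			return fmt.Errorf("malformed pair %q, expected key=value", pair)
		}
		(*m.Map)[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return nil
}

func main() {
	evictionHard := map[string]string{}
	flag.Var(&MapStringString{Map: &evictionHard}, "eviction-hard", "hard eviction thresholds")
	flag.CommandLine.Parse([]string{"--eviction-hard", "memory.available=100Mi,nodefs.available=10%"})
	fmt.Println(evictionHard["memory.available"], evictionHard["nodefs.available"]) // 100Mi 10%
}
```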

Other things:
- There is another flag parsing shim, `flags.ConfigurationMap`, from the
shared flag utility. The `NodeLabels` field still uses
`flags.ConfigurationMap`. This PR moves the allocation of the
`map[string]string` for the `NodeLabels` field from
`AddKubeletConfigFlags` to the defaulter for the external
`KubeletConfiguration` type. Flags are layered on top of an internal
object that has undergone conversion from a defaulted external object,
which means that previously the mere registration of flags would have
overwritten any previously-defined defaults for `NodeLabels` (fortunately
there were none).

Related: #53833 (lifting embedded structures out of string fields is part of getting this API to beta)

```release-note
The EvictionHard, EvictionSoft, EvictionSoftGracePeriod, EvictionMinimumReclaim, SystemReserved, and KubeReserved fields in the KubeletConfiguration object (kubeletconfig/v1alpha1) are now of type map[string]string, which facilitates writing JSON and YAML files.
```
2017-11-17 02:57:30 -08:00
xiangpengzhao 6318fcca85 Update BUILD file to include e2e_node tests 2017-11-17 17:28:29 +08:00
xiangpengzhao 025f946784 Update conformance testdata for e2e node conformance tests 2017-11-17 17:28:28 +08:00
xiangpengzhao 7fdea2b0cf Use framework.ConformanceIt for node e2e conformance tests 2017-11-17 17:28:20 +08:00
Kubernetes Submit Queue ebd3d68039
Merge pull request #55831 from Random-Liu/rename-log-dump-env
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`.

For https://github.com/kubernetes/features/issues/286.

Rename `SYSTEMD_SERVICES` to `LOG_DUMP_SYSTEMD_SERVICES`. test-infra disables log dump in our e2e framework, and uses a different log dump logic https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L480-L497. So the flags we added in https://github.com/kubernetes/kubernetes/pull/55288 will not work in test-infra.

Fortunately, test-infra uses the same script `cluster/log-dump/log-dump.sh`, so we can still configure systemd services by setting the environment variable globally.

The original environment variable name is too general to be set globally, so change it to a more specific name.

**Release note**:

```release-note
none
```
2017-11-17 00:18:25 -08:00
Kubernetes Submit Queue 8413f36aa3
Merge pull request #55392 from sttts/sttts-remove-policy-v1alpha1
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove unused pkg/apis/policy/v1alpha1
2017-11-17 00:18:17 -08:00
Michael Taufen 1085b6f730 Lift embedded structure out of eviction-related KubeletConfiguration fields
- Changes the following KubeletConfiguration fields from `string` to
`map[string]string`:
  - `EvictionHard`
  - `EvictionSoft`
  - `EvictionSoftGracePeriod`
  - `EvictionMinimumReclaim`
- Adds flag parsing shims to maintain Kubelet's public flags API, while
enabling structured input in the file API.
- Also removes `kubeletconfig.ConfigurationMap`, which was an ad-hoc flag
parsing shim living in the kubeletconfig API group, and replaces it
with the `MapStringString` shim introduced in this PR. Flag parsing
shims belong in a common place, not in the kubeletconfig API.
I manually audited these to ensure that this wouldn't cause errors
parsing the command line for syntax that would have previously been
error free (`kubeletconfig.ConfigurationMap` was unique in that it
allowed keys to be provided on the CLI without values. I believe this was
done in `flags.ConfigurationMap` to facilitate the `--node-labels` flag,
which rightfully accepts value-free keys, and that this shim was then
just copied to `kubeletconfig`). Fortunately, the affected fields
(`ExperimentalQOSReserved`, `SystemReserved`, and `KubeReserved`) expect
non-empty strings in the values of the map, and as a result passing the
empty string is already an error. Thus requiring keys shouldn't break
anyone's scripts.
- Updates code and tests accordingly.

Regarding eviction operators, directionality is already implicit in the
signal type (for a given signal, the decision to evict will be made when
crossing the threshold from either above or below, never both). There is
no need to expose an operator, such as `<`, in the API. By changing
`EvictionHard` and `EvictionSoft` to `map[string]string`, this PR
simplifies the experience of working with these fields via the
`KubeletConfiguration` type. Again, flags stay the same.

Other things:
- There is another flag parsing shim, `flags.ConfigurationMap`, from the
shared flag utility. The `NodeLabels` field still uses
`flags.ConfigurationMap`. This PR moves the allocation of the
`map[string]string` for the `NodeLabels` field from
`AddKubeletConfigFlags` to the defaulter for the external
`KubeletConfiguration` type. Flags are layered on top of an internal
object that has undergone conversion from a defaulted external object,
which means that previously the mere registration of flags would have
overwritten any previously-defined defaults for `NodeLabels` (fortunately
there were none).
2017-11-16 18:35:13 -08:00
Yanqiang Miao 16aa5820fb Correct clean up actions in e2e tests 2017-11-17 08:46:21 +08:00
David Ashpole 8f3e2f315e fix flaky allocatable test 2017-11-16 11:16:58 -08:00
Connor Doyle 80ac705ef3 Removed opaque integer resources. 2017-11-16 10:47:40 -08:00
Krzysztof Jastrzebski a5446bedf9 Adds node auto-repair e2e tests. 2017-11-16 18:57:25 +01:00
Mike Danese 0117006a54
Revert "Add options for mounting SCSI or NVMe local SSD though Block or Filesystem and do all of that with UUID" 2017-11-16 07:51:38 -08:00
Kubernetes Submit Queue ff5cea4b43
Merge pull request #55868 from shyamjvs/kubemark-resource-gatherer-fix
Automatic merge from submit-queue (batch tested with PRs 55868, 55393, 55152, 55849). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Set resource-gathering and probe-duration period for kubemark

Ref https://github.com/kubernetes/kubernetes/issues/55818#issuecomment-344888480

/cc @porridge 
fyi - @jiayingz
2017-11-16 06:32:16 -08:00
Nikita Komarov c77923d0fe LimitRange e2e test improved. 2017-11-16 16:46:41 +03:00
Kubernetes Submit Queue fbcb199fe5
Merge pull request #55865 from krzysztof-jastrzebski/e2e9
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Stop executing Pod Priority and Preemption e2e tests on GKE.
2017-11-16 04:38:26 -08:00
Kubernetes Submit Queue c2dd10e263
Merge pull request #51905 from jsafrane/mount-propagation-test
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add e2e test for mount propagation

**What this PR does / why we need it**:
This adds e2e test for mount propagation introduced by #46444.

@kubernetes/sig-node-pr-reviews 
/sig node

**Release note**:
```release-note
None
```
2017-11-16 03:57:30 -08:00
Kubernetes Submit Queue 7db195cc0f
Merge pull request #55697 from fisherxu/e2efix
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix failure to access service in e2e test

**What this PR does / why we need it**:
We should create the service before the deployments, as described in the issue.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [#55696](https://github.com/kubernetes/kubernetes/issues/55696)

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-16 03:57:22 -08:00
Kubernetes Submit Queue 779105673a
Merge pull request #55188 from mindprince/accelerator-monitoring
Automatic merge from submit-queue (batch tested with PRs 55798, 49579, 54862, 55188, 51990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add monitoring support for hardware accelerators

Currently only NVIDIA GPU monitoring is implemented.

Feature repo issue: https://github.com/kubernetes/features/issues/369
cAdvisor PR: https://github.com/google/cadvisor/pull/1762

/kind feature
/sig node
/sig instrumentation
/area hw-accelerators

**Release note**:
```release-note
Kubelet now exposes metrics for NVIDIA GPUs attached to the containers.
```
2017-11-16 03:09:21 -08:00
Kubernetes Submit Queue f9ce9d9da6
Merge pull request #55798 from shyamjvs/exclude-for-scale-tests-tag
Automatic merge from submit-queue (batch tested with PRs 55798, 49579, 54862, 55188, 51990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add special tag for disabling ESIPP and HPA-related tests on large clusters

As discussed offline, this would help improve accountability for tests needing some love from a scalability perspective.

/cc @porridge 
fyi - @MrHohn @MaciekPytel @mwielgus @crassirostris 

@kubernetes/sig-scalability-misc
2017-11-16 03:09:07 -08:00
Shyam Jeedigunta 1ae56bbe2b Set resource-gathering and probe-duration period for kubemark 2017-11-16 12:02:56 +01:00
Krzysztof Jastrzebski a8f8e16694 Stop executing Pod Priority and Preemption e2e tests on GKE. 2017-11-16 11:27:48 +01:00
Kubernetes Submit Queue ee2cf0bb5d
Merge pull request #55782 from x13n/addon-manager
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Bump addon manager version used to 6.5

**What this PR does / why we need it**:
Bump addon manager version to use #55466. This adds a leader-election-like mechanism to the addon manager.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

**Special notes for your reviewer**:
Release note copied from #55466. This is intended to be cherry-picked into the 1.7 and 1.8 branches.

**Release note**:

```release-note
Addon manager supports HA masters.
```
2017-11-16 00:55:58 -08:00
Kubernetes Submit Queue d73157ba97
Merge pull request #55444 from msau42/multi-e2e
Automatic merge from submit-queue (batch tested with PRs 55682, 55444, 55456, 55717, 55131). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add sig storage label to multizone static PV test

**What this PR does / why we need it**:
Adds the sig-storage tag to the e2e test so it shows up on our testgrid dashboard.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-15 23:06:10 -08:00
Kubernetes Submit Queue b3a1867529
Merge pull request #55764 from Random-Liu/wait-server-resources
Automatic merge from submit-queue (batch tested with PRs 55764, 55683, 55468, 54409, 55546). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Wait for server resources.

For https://github.com/kubernetes/kubernetes/issues/55768.

In the e2e test for containerd, I sometimes see the following failure (e.g. https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-cri-containerd-e2e-gci-gce/178):
```
Nov 15 02:40:31.291: Couldn't delete ns: "e2e-tests-container-probe-dcvlw": unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource (&discovery.ErrGroupDiscoveryFailed{Groups:map[schema.GroupVersion]error{schema.GroupVersion{Group:"metrics.k8s.io", Version:"v1beta1"}:(*errors.StatusError)(0xc420bfd170)}})
```
Usually, only the first few tests fail with this error. The error seems to be returned at this line https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/util.go#L1170.
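
A hedged sketch of the waiting approach (a generic polling loop; `listAllServerResources` stands in for a real discovery-client call and is an assumption, not the framework's API):

```go
package framework

import (
	"fmt"
	"time"
)

// waitForServerResources polls discovery until the complete resource list can
// be retrieved without a partial-failure error, so namespace cleanup does not
// trip over aggregated APIs that are not ready yet.
func waitForServerResources(listAllServerResources func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := listAllServerResources()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("server resources never became available: %v", err)
		}
		time.Sleep(5 * time.Second)
	}
}
```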

@cheftako @caesarxuchao Does this change make sense to you? Or should I wait for something else to become ready?
/cc @kubernetes/sig-api-machinery-pr-reviews 

**Release note**:

```release-note
none
```
2017-11-15 22:15:52 -08:00
Kubernetes Submit Queue c3ed0f2663
Merge pull request #53466 from davidz627/localSSDUUID
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add options for mounting SCSI or NVMe local SSD through Block or Filesystem and do all of that with UUID

Fixes: #51431

Mount SCSI local SSD by UUID in /mnt/disks/by-uuid/; also allow users to request and mount NVMe disks. Both types of disks will be accessible either through block or filesystem.

To see code in progress for NVMe and block support see working branch: https://github.com/davidz627/kubernetes/tree/localExt
2017-11-15 18:25:30 -08:00
Lantao Liu e504e5a316 Wait for server resources. 2017-11-16 01:38:35 +00:00
Lantao Liu 0085e2208d Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`. 2017-11-16 00:41:27 +00:00
Kubernetes Submit Queue ded83878c1
Merge pull request #55820 from shyamjvs/restore-resource-gatherer-pollperiod-default
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Restore default polling period of resource-gatherer

Fixes https://github.com/kubernetes/kubernetes/issues/55818

/cc @jiayingz @mindprince
2017-11-15 16:03:06 -08:00
Shyam Jeedigunta a350825612 Restore default polling period of resource-gatherer 2017-11-15 23:15:28 +01:00
Kubernetes Submit Queue cbdd18eee9
Merge pull request #55484 from bskiba/multizone-size-e2e
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support multizone clusters in GCE and GKE e2e tests

**What this PR does / why we need it**:
For multi-zone clusters we can't rely on the zone parameter for fetching information on Instance Groups. Instead we first fetch the zone the group is in and use it in subsequent calls.

Note that the current version of the code does not work for multi-zone clusters at all.

**Release note**:
```
NONE
```
2017-11-15 12:58:11 -08:00
Kubernetes Submit Queue a15fde49b4
Merge pull request #55639 from yguo0905/cloud-init
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Node e2e: add a cloud-init script to disable live-restore in node e2e test

This cloud-init config will be used in tests in https://github.com/kubernetes/test-infra.

**Release note**:

```
None
```

/assign @yujuhong 
/cc @abgworrall @dchen1107
2017-11-15 12:03:44 -08:00
Kubernetes Submit Queue 9058769dad
Merge pull request #51955 from danwinship/update-networkpolicy-storage
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Swap NetworkPolicy storage to networking.k8s.io/v1

Finishes(?) the NetworkPolicy v1 migration.
Fixes #50604

The integration test passes. I copied the test-update-storage-objects.sh change from #50327 and have no idea if it's right.

/cc @sttts @caesarxuchao @thockin

**Release note**:
```release-note
```
2017-11-15 12:03:40 -08:00
Kubernetes Submit Queue c339a54b53
Merge pull request #55659 from CaoShuFeng/duplicated_import
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

remove duplicated import

**Release note**:
```release-note
NONE
```
2017-11-15 09:30:40 -08:00
Kubernetes Submit Queue b623026d2a
Merge pull request #52421 from WIZARD-CXY/fixpredicate
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add hostip and protocol to the hostport predicates

**What this PR does / why we need it**:
This PR adds "hostIP and protocol" to scheduler hostport predicate procedure
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fix #51950 
**Special notes for your reviewer**:
- [x] basic implementation, need review
- [x] e2e test
- [x] update doc (will be done in a separate PR)

**Release note**:

```release-note
add hostIP and protocol to the original hostport predicates procedure in scheduler.
```
2017-11-15 09:30:36 -08:00
Shyam Jeedigunta d08a14819c Add special tag for disabling ESIPP and HPA-related tests on large clusters 2017-11-15 14:35:44 +01:00
Daniel Kłobuszewski c2ec85e064 Bump addon manager version used to 6.5 2017-11-15 11:34:46 +01:00
Marcin Owsiany 9b6590e7ae Improve messages around waiting for pods. 2017-11-15 11:29:52 +01:00
Kubernetes Submit Queue ebe8ea73fd
Merge pull request #54463 from saad-ali/volumeAttachmentAPI
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Introduce new `VolumeAttachment` API Object

**What this PR does / why we need it**:

Introduce a new `VolumeAttachment` API Object. This object will be used by the CSI volume plugin to enable external attachers (see design [here](https://github.com/kubernetes/community/pull/1258)). In the future, existing volume plugins can be refactored to use this object as well.
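
An illustrative sketch of the shape of such an object (simplified stand-ins, not the actual storage API types): the spec names the attacher, the volume source, and the target node, while the status is written back by the external attacher.

```go
package storage

// VolumeAttachmentSpec captures the desired attachment.
type VolumeAttachmentSpec struct {
	Attacher string // name of the (CSI) attacher responsible for this object
	NodeName string // node the volume should be attached to
	Source   VolumeAttachmentSource
}

// VolumeAttachmentSource points at the volume to attach.
type VolumeAttachmentSource struct {
	PersistentVolumeName *string
}

// VolumeAttachmentStatus is reported by the external attacher.
type VolumeAttachmentStatus struct {
	Attached           bool
	AttachmentMetadata map[string]string
	AttachError        *string
}

type VolumeAttachment struct {
	Spec   VolumeAttachmentSpec
	Status VolumeAttachmentStatus
}
```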

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  Part of issue https://github.com/kubernetes/features/issues/178

**Special notes for your reviewer**:
None

**Release note**:

```release-note
NONE
```
2017-11-14 22:05:27 -08:00
Yang Guo 7eb7cfe3ef Add a cloud-init script to disable live-restore 2017-11-14 21:40:13 -08:00
David Zhu 028258244c Set up alternate mount point for SCSI/NVMe local SSD by UUID in /mnt/disks/by-uuid/, set up ability to have unformatted disk symlinked in /dev/disk/by-uuid/. Added tests. Preserved backwards compatibility. 2017-11-14 17:14:41 -08:00
Saad Ali d96c105d71 Introduce storage v1alpha1 and VolumeAttachment
Introduce the v1alpha1 version to the Kubernetes storage API. And add a
new VolumeAttachment object to that version. This object will initially
be used only by the new CSI Volume Plugin. Eventually existing volume
plugins can be refactored to use it too.
2017-11-14 17:08:48 -08:00
Rohit Agarwal 3ac94a57eb Update URLs for nvidia gpu device plugin and nvidia driver installer.
Device plugin is now an addon and its manifest is now in
kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains
device plugin.
2017-11-14 15:31:22 -08:00
Dan Winship d2a3af9b58 Swap NetworkPolicy storage to networking.k8s.io/v1 2017-11-14 15:15:01 -05:00
Janet Kuo 6432422307 Webhook e2e test: fail open and fail closed 2017-11-14 12:11:46 -08:00
David Ashpole 220edbc6e3 disable container disk metrics when using the CRI stats integration 2017-11-14 11:43:08 -08:00
Kubernetes Submit Queue 48d062722b
Merge pull request #55605 from bskiba/e2e-fix
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Regional support in Cluster Autoscaler e2e tests.

**What this PR does / why we need it**:

When calling the GKE API and gcloud in Autoscaling e2e tests, take into account that clusters can be regional.
This currently uses MultiZonal as an indicator that the cluster is regional, which is suboptimal, but considering that our tests do not work with multizonal clusters at the moment, there is no regression. This should be changed once there is an indicator available that the cluster is regional.

**Release note**:
```
NONE
```
2017-11-14 05:13:03 -08:00
Dr. Stefan Schimanski 3ba9d1d0e0 Remove unused pkg/apis/policy/v1alpha1 2017-11-14 13:47:29 +01:00
fisherxu fe033a4714 fix failure to access service in e2e test 2017-11-14 19:21:59 +08:00
Cao Shufeng 86968e44d0 remove duplicated import 2017-11-14 17:18:17 +08:00
Jan Safranek 4e9068b135 Review fixes 2017-11-14 10:16:30 +01:00
Jan Safranek a59af81e5e Add e2e test for mount propagation 2017-11-14 10:16:30 +01:00
Kubernetes Submit Queue ea66c00522
Merge pull request #54509 from vmware/node_poweroff_test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E test to verify pod failover during node power-off

**What this PR does / why we need it**:

This PR adds a test to verify volume status after the node where the pod was provisioned is powered off and the pod fails over to a different node.

The test performs the following tasks:

1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass

**Which issue this PR fixes**:

Fixes https://github.com/vmware/kubernetes/issues/272

**Special notes for your reviewer**:

Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
  -get
                go get -u kubetest if old or not installed (default true)
  -old duration
                Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436     409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
 
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161     409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
 
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
 
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
 
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb 

**Release note**:

```release-note
NONE
```
2017-11-14 00:56:26 -08:00
Kubernetes Submit Queue c1cd70ad16
Merge pull request #55533 from janetkuo/hook-e2e-multi
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Webhook e2e test: PUT and PATCH operations

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Ref: https://github.com/kubernetes/features/issues/492

**Special notes for your reviewer**: ~depends on #55127~ (merged)
@kubernetes/sig-api-machinery-api-reviews 

**Release note**:

```release-note
NONE
```
2017-11-14 00:10:01 -08:00
Kubernetes Submit Queue 3479549a62
Merge pull request #55532 from ianchakeres/validate-greater-than-zero-pv-pvc
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Validate that PV capacity and PVC capacity requests are positive, greater than 0

**What this PR does / why we need it**:  Zero (0) capacity PVs cause related pods to fail, and zero (0) capacity PVCs create zero (0) capacity PVs.
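
As a rough illustration of the check (a minimal sketch, not the actual validation code added by this PR; it assumes the capacity arrives as a `resource.Quantity`):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validatePositiveStorage is an illustrative helper: it rejects zero or
// negative storage capacities, which is the behaviour this PR enforces for
// PV capacity and PVC capacity requests.
func validatePositiveStorage(capacity resource.Quantity) error {
	if capacity.Sign() <= 0 { // Sign() returns -1, 0, or +1
		return fmt.Errorf("storage capacity must be greater than 0, got %s", capacity.String())
	}
	return nil
}

func main() {
	fmt.Println(validatePositiveStorage(resource.MustParse("0")))   // rejected
	fmt.Println(validatePositiveStorage(resource.MustParse("5Gi"))) // accepted, prints <nil>
}
```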

**Which issue(s) this PR fixes** :
Fixes #55553

**Special notes for your reviewer**:

**Release note**:

```release-note
Validate positive capacity for PVs and PVCs.
```
2017-11-14 00:09:48 -08:00
Kubernetes Submit Queue 51c8e9294b
Merge pull request #55009 from bradtopol/addhosteventsemptyconform2
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add empty dir and host related conformance annotations

Signed-off-by: Brad Topol <btopol@us.ibm.com>

Add empty dir and host related conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds empty dir and host related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the empty dir and host based e2e conformance tests.

Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
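
For readers unfamiliar with what a "conformance annotation" looks like in practice, the sketch below shows one possible shape of an annotated e2e spec (the test name, description wording, and layout are hypothetical; the real annotations follow the format agreed by the Conformance Workgroup):

```go
package e2e

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("[sig-storage] EmptyDir volumes", func() {
	/*
	   Testname: volume-emptydir-mode (hypothetical)
	   Description: A Pod created with an 'emptyDir' Volume MUST mount that
	   volume with the requested permissions. (illustrative wording only)
	*/
	It("should support a default medium volume with mode 0777 [Conformance]", func() {
		// test body elided in this sketch
	})
})
```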



**Release note**:

```release-note
NONE
```
2017-11-14 00:09:45 -08:00
Shaomin Chen 3db4f2b843 E2E test to verify pod failover during node power-off 2017-11-13 21:52:54 -08:00
Kubernetes Submit Queue 710523ed7d
Merge pull request #53541 from jiayingz/e2e-stats
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of installer and device plugin containers

To support this, export certain functions and fields in framework/resource_usage_gatherer.go so that they can be used in any e2e test to track the resource usage of specified pods with a specified probe interval and duration.



**What this PR does / why we need it**:
We need to quantify the resource usage of the device plugin DaemonSet to make sure it can run reliably on nodes with GPUs.
We also want to measure GPU driver installer resource usage to track any unexpected resource consumption during driver installation.
For the latter part, see the related issue https://github.com/kubernetes/features/issues/368.
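
To make the intended usage concrete, here is a purely illustrative sketch; the type and method names below are invented and do not match the real framework/resource_usage_gatherer.go API:

```go
package main

import (
	"fmt"
	"time"
)

// usageSample is one CPU/memory observation for a named container.
type usageSample struct {
	container string
	cpuCores  float64
	memBytes  int64
}

// podUsageGatherer polls the selected pods at probeInterval for probeDuration.
type podUsageGatherer struct {
	probeInterval time.Duration
	probeDuration time.Duration
}

// gather fabricates one sample per container; a real gatherer would query the
// kubelet summary API once per probe interval instead.
func (g podUsageGatherer) gather(containers []string) []usageSample {
	samples := make([]usageSample, 0, len(containers))
	for _, c := range containers {
		samples = append(samples, usageSample{container: c, cpuCores: 0.0005, memBytes: 2 << 20})
	}
	return samples
}

func main() {
	g := podUsageGatherer{probeInterval: 10 * time.Second, probeDuration: 5 * time.Minute}
	for _, s := range g.gather([]string{"nvidia-device-plugin-6kqxp/nvidia-device-plugin"}) {
		fmt.Printf("%s cpu=%.4f cores mem=%d bytes\n", s.container, s.cpuCores, s.memBytes)
	}
}
```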

Example resource summary output:
Oct  6 12:35:07.289: INFO: Printing summary: ResourceUsageSummary
Oct  6 12:35:07.289: INFO: ResourceUsageSummary JSON
{
  "100": [
    {
      "Name": "nvidia-device-plugin-6kqxp/nvidia-device-plugin",
      "Cpu": 0.000507167,
      "Mem": 2134016
    },
    {
      "Name": "nvidia-device-plugin-6kqxp/nvidia-driver-installer",
      "Cpu": 1.915508718,
      "Mem": 663330816
    },
    {
      "Name": "nvidia-device-plugin-l28zc/nvidia-device-plugin",
      "Cpu": 0.000836256,
      "Mem": 2211840
    },
    {
      "Name": "nvidia-device-plugin-l28zc/nvidia-driver-installer",
      "Cpu": 1.916886293,
      "Mem": 691449856
    },
    {
      "Name": "nvidia-device-plugin-xb4vh/nvidia-device-plugin",
      "Cpu": 0.000515103,
      "Mem": 2265088
    },
    {
      "Name": "nvidia-device-plugin-xb4vh/nvidia-driver-installer",
      "Cpu": 1.909435982,
      "Mem": 832430080
    }
  ],
  "50": [
    {
...

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-11-13 21:51:16 -08:00
xiangpengzhao eddd9a208f Update conformance testdata for downward api test 2017-11-14 09:53:31 +08:00
xiangpengzhao 4ac61e1d12 Combine downward api e2e test cases. 2017-11-14 09:51:35 +08:00
Janet Kuo 7ffaa06ab3 Webhook e2e test: PUT and PATCH operations 2017-11-13 16:50:51 -08:00
Kubernetes Submit Queue cba5aa0590
Merge pull request #55127 from caesarxuchao/webhook-do-conversion
Automatic merge from submit-queue (batch tested with PRs 54005, 55127, 53850, 55486, 53440). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Validation webhook plugin converts objects to the external version before sending to webhooks

**What this PR does / why we need it**:


**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

https://github.com/kubernetes/features/issues/492

**Special notes for your reviewer**:

**Release note**:

```release-note
The apiserver now sends external versioned objects to the admission webhooks. Please update the webhooks to expect admissionReview.spec.object.raw to be serialized external versions of objects.
```
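
For context, the general conversion mechanism looks roughly like the sketch below (a minimal example using the public client-go scheme; the object here is already external, so the conversion is trivial, and this is not the admission plugin's actual code path):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	pod := &corev1.Pod{}
	pod.Name = "example"

	// Convert to the versioned (external) form the webhook will see in
	// admissionReview.spec.object.raw, then serialize it.
	versioned, err := scheme.Scheme.ConvertToVersion(pod, corev1.SchemeGroupVersion)
	if err != nil {
		panic(err)
	}
	raw, err := runtime.Encode(scheme.Codecs.LegacyCodec(corev1.SchemeGroupVersion), versioned)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
```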
2017-11-13 16:45:22 -08:00
Jiaying Zhang ae36f8ee95 Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, export certain functions and fields in
framework/resource_usage_gatherer.go so that they can be used in any
e2e test to track the resource usage of specified pods with a
specified probe interval and duration. 2017-11-13 16:24:41 -08:00
2017-11-13 16:24:41 -08:00
Kubernetes Submit Queue beefab8a8e
Merge pull request #54825 from bradtopol/adddownwarddockerconf
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add downward api and docker container conformance annotations

Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add downward api and docker container conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds downward api and docker container related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the downward api and docker container based e2e conformance tests.

Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:

```release-note
NONE
```
2017-11-13 14:47:08 -08:00
Kubernetes Submit Queue 6e2e5bac40
Merge pull request #54946 from bradtopol/adddnscrdcmprobeconform
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add dns, configmap, and custom resource definition conformance annotations.

Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add dns, configmap, and custom resource definition related conformance annotations

/sig testing
/area conformance
@sig-testing-pr-reviews

This PR adds dns, configmap, and custom resource definition related conformance annotations to the e2e test suite.

The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the dns, configmap, and custom resource definition based e2e conformance tests.
Special notes for your reviewer:

Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.




**Release note**:

```release-note
NONE
```
2017-11-13 14:47:05 -08:00
Chao Xu ab053a224d let validation webhook convert objects to the external version before sending them 2017-11-13 12:55:33 -08:00
Kubernetes Submit Queue 74ec8d0fe8
Merge pull request #55288 from Random-Liu/e2e-log-for-alternative-runtime
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support collecting logs for alternative container runtimes in e2e tests.

Fixes https://github.com/kubernetes/kubernetes/issues/55629.

Add support for collecting logs from alternative container runtimes in e2e tests.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```

```release-note
none
```

/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
2017-11-13 12:32:24 -08:00
Ian Chakeres 98e2c8cdee Validate that PV capacity and PVC capacity requests are greater than zero 2017-11-13 08:57:01 -08:00
Beata Skiba 3431411e79 Regional support in CA tests.
When calling the GKE API and gcloud, take into account
that clusters can be regional.
This currently uses MultiZonal as an indicator that the
cluster is regional, which is suboptimal, but since our
tests do not work with multizonal clusters at the moment,
there is no regression.
This should be changed once an indicator is available
that the cluster is regional.
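
Illustrative sketch only (not the test's actual code): choosing gcloud location flags depending on whether the cluster under test is treated as regional.

```go
package main

import "fmt"

// locationArgs returns the gcloud flag to use for a regional vs. zonal cluster.
func locationArgs(isRegional bool, location string) []string {
	if isRegional {
		return []string{"--region=" + location}
	}
	return []string{"--zone=" + location}
}

func main() {
	fmt.Println(locationArgs(true, "us-central1"))    // regional cluster
	fmt.Println(locationArgs(false, "us-central1-b")) // zonal cluster
}
```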
2017-11-13 16:06:41 +01:00
Kubernetes Submit Queue 52e712913d
Merge pull request #55478 from kawych/e2e
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Use HPA permissions to read custom metrics in Custom Metrics e2e test

**What this PR does / why we need it**:
This PR fixes the e2e test for Stackdriver Custom Metrics on GKE. With PR https://github.com/kubernetes/kubernetes/pull/55387, it will also be necessary for the analogous test on GCE.

**Release note**:
```release-note
NONE
```
2017-11-13 06:09:27 -08:00
Kubernetes Submit Queue fd3de96be6
Merge pull request #55594 from krzysztof-jastrzebski/e2e6
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix typo in e2e test name.
2017-11-13 05:20:34 -08:00
Kubernetes Submit Queue 41fe3ed5bc
Merge pull request #54405 from resouer/clean-docker-dep
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

[Part 1] Remove docker dep in kubelet startup

**What this PR does / why we need it**:

Remove the dependency on docker during kubelet startup.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

Part 1 of #54090 

**Special notes for your reviewer**:
Changes include:

1. Move docker client initialization into the dockershim package.
2. Pass a docker `ClientConfig` from kubelet to dockershim.
3. Pass the parameters needed by `FakeDockerClient` through `ClientConfig` to dockershim.

(TODO, second part) Make dockershim tolerate dockerd being down; otherwise it will still fail kubelet.

Please note that after this PR, kubelet will still fail if dockerd is down. This will be fixed in a subsequent PR by making dockershim tolerate dockerd failure (initializing the docker client in a separate goroutine) and by refactoring cgroup and log driver detection.
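
As a rough sketch of the `ClientConfig` idea (the field and function names below are illustrative assumptions, not the real dockershim types):

```go
package main

import (
	"fmt"
	"time"
)

// ClientConfig carries what dockershim needs to construct the docker client
// itself, instead of kubelet doing so at startup.
type ClientConfig struct {
	DockerEndpoint            string        // e.g. "unix:///var/run/docker.sock"
	RuntimeRequestTimeout     time.Duration // per-request timeout
	ImagePullProgressDeadline time.Duration // cancel pulls that stop progressing
	UseFakeClient             bool          // test-only knob for FakeDockerClient
}

// newDockerClient stands in for dockershim building its client from the config.
func newDockerClient(cfg *ClientConfig) (string, error) {
	if cfg.UseFakeClient {
		return "fake docker client", nil
	}
	// A real implementation would dial cfg.DockerEndpoint here.
	return fmt.Sprintf("docker client for %s", cfg.DockerEndpoint), nil
}

func main() {
	c, err := newDockerClient(&ClientConfig{
		DockerEndpoint:        "unix:///var/run/docker.sock",
		RuntimeRequestTimeout: 2 * time.Minute,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(c)
}
```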

**Release note**:

```release-note
Remove docker dependency during kubelet start up 
```
2017-11-13 03:59:53 -08:00
Krzysztof Jastrzebski ee5e6d85de Fix typo in e2e test name. 2017-11-13 10:06:35 +01:00