Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Version bump to etcd v3.2.11, grpc v1.7.5
Fix https://github.com/kubernetes/kubernetes/issues/56114: Update to etcd client 3.2.11
Version bumps:
- etcd from 3.1.10 to 3.2.11
- grpc from 1.3.0 to 1.7.5
- grpc-gateway from v1.1.0-25-g84398b9 to v1.3.0
TODO:
- [x] Apply etcd [3.2 client upgrade guide](https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_2.md)
- [x] Apply grpc API changes in 1.6.0 and 1.7.0 [release notes](https://github.com/grpc/grpc-go/releases)
- [x] bbolt was pulled in transitively, why? We have tests that embed etcd, so we must vendor the etcd server and all its dependencies.
- [x] Upgrade to containerd v1.0.0? Currently kubernetes depends on containerd v1.0.0-beta.2-159-g27d450a0 which depends on grpc v1.3.0, but containerd v1.0.0 depends on grpc 1.7.2. Not needed. The containerd grpc upgrade required [no code changes](ce3e32680d).
- [x] Fix all failing tests
- [x] Ensure we can safely upgrade grpc to 1.7.5 given that docker and cAdvisor still depend on grpc 1.3.0 (both in the versions we vendor and on master for both projects). Should we hold off on this change until we have a docker release that uses grpc 1.7.x?
- [x] Wait for grpc 1.7.5 to be released (it will include https://github.com/grpc/grpc-go/pull/1747). Once released, bump grpc version in this PR and remove workarounds in `hack/godep-save.sh`.
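For orientation, a minimal sketch of the etcd v3 client API this bump vendors (endpoints and key are placeholders, not taken from this PR):
```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// The v3 client speaks gRPC (v1.7.x after this bump) to the etcd cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Get(ctx, "/registry/", clientv3.WithPrefix())
	if err != nil {
		panic(err)
	}
	fmt.Println("keys under /registry/:", resp.Count)
}
```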
Transitive dependencies on grpc:
- docker depends on grpc, but according to the package dependency graph (`go list -f '{{ .Deps }}'`) there are no dependencies from kubernetes to grpc via docker packages.
- containerd v1.0.0 depends on grpc 1.7.2, we should upgrade to containerd v1.0.0 soon, this can be done in a separate PR
- cadvisor depends on grpc 1.3.0 on master, it should upgrade it to grpc 1.7.5, this can be done in a separate PR
**Release note**:
```release-note
Upgrade to etcd client 3.2.11 and grpc 1.7.5 to improve HA etcd cluster stability.
```
The linux headers take significant disk space and are not necessary to
run kubernetes on a GKE node. A user logging on to a node can trivially
install the kernel headers should they need to by running "apt-get install
linux-headers-$(uname -r)".
Signed-off-by: Chris Glass <chris.glass@canonical.com>
The minimal Ubuntu image used on GKE nodes provides the vim editor as
part of the system packages, as "vim.tiny". People logging on to the nodes have a vim
environment available despite the "vim" package not being installed.
Signed-off-by: Chris Glass <chris.glass@canonical.com>
Automatic merge from submit-queue (batch tested with PRs 55475, 57155, 57260, 57222). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix e2e local test
**What this PR does / why we need it**:
Fixes an issue in the local e2e_node test: `Can't start e2e service "kubelet"`
**Which issue(s) this PR fixes**:
Fixes https://github.com/kubernetes/kubernetes/issues/54622
**Special notes for your reviewer**:
**Release note**:
```release-note
```
/sig node
This e2e test exercises the CSI volume plugin in kubernetes with
a CSI hostPath driver. It is also set up so that more drivers can
be tested in the future.
Automatic merge from submit-queue (batch tested with PRs 56108, 56811, 57335, 57331, 54530). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E autoscaling test scenario "should add node to the particular mig" fix
Resize the targeted MIG to one to fix flakes when its initial size is zero. Scale-up from zero doesn't make sense in this scenario, and the test assumes there will always be at least one node.
Automatic merge from submit-queue (batch tested with PRs 56108, 56811, 57335, 57331, 54530). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Stackdriver Logging e2e tests
This PR fixes the problems in the SD Logging tests:
- Because of a Docker problem, trimming works unpredictably on some OSes, so instead of assuming an exact trimming mechanism, just assume that trimming happens
- Increases the throughput of reading logs through Pub/Sub by increasing the parallelism level
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
e2e_node: small fixes to setup_host.sh for Ubuntu Trusty
**What this PR does / why we need it**:
Two small fixes for how nsenter is built from source and installed on Ubuntu Trusty:
1. Use mktemp for creating the build directory instead of a hard-coded name
2. Use the current (2.31) util-linux instead of a 3+ year old version
**Which issue(s) this PR fixes**:
Fixes #56106
**Special notes for your reviewer**:
See https://github.com/kubernetes/kubernetes/issues/56106 for some thoughts on other ways to address this. My patch for util-linux 2.31 may just be a band-aid?
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54379, 56593, 56685, 54174, 57309). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use k8s.gcr.io vanity domain for container images
Related issue: https://github.com/kubernetes/release/issues/281
```release-note
Use "k8s.gcr.io" for container images rather than "gcr.io/google_containers". This is just a redirect, for now, so should not impact anyone materially.
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
The e2e_node test environment setup script is hard-coded to pull down a
quite old version of util-linux in order to build nsenter on trusty,
which sadly is well known not to include an nsenter executable.
While "just a test", it's unfortunate to be building from really old
util-linux sources when newer are available.
Signed-off-by: Tim Pepper <tpepper@vmware.com>
There's a bit of a hack in place to ensure nsenter is present on Ubuntu
trusty, which doesn't otherwise include it. This downloads util-linux
to a hard-coded directory in /tmp, which is a bad practice. Even though
"this is just a test case" it should properly use mktemp.
Signed-off-by: Tim Pepper <tpepper@vmware.com>
Automatic merge from submit-queue (batch tested with PRs 57127, 57011, 56754, 56601, 56483). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove hacks added for mesos
**What this PR does / why we need it**:
Since Mesos is no longer in the main repository and since we have
things like dynamic kubelet configuration in progress, we should
drop these undocumented, untested, private hooks.
cmd/kubelet/app/server.go::CreateAPIServerClientConfig
CreateAPIServerClientConfig::getRuntime
pkg/kubelet/kubelet_pods.go::getPhase
Also remove stuff from the Dependencies struct that was specific to
the Mesos integration (ContainerRuntimeOptions and Options).
Also remove stale references in test/e2e and the test owners file.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Drop hacks used for Mesos integration that was already removed from main kubernetes repository
```
Automatic merge from submit-queue (batch tested with PRs 56386, 57204, 55692, 57107, 57177). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check ns setup error during e2e
If namespace creation fails during e2e setup, don't panic
/assign @tallclair
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56676, 57050, 54881, 56822, 57113). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test for when a webhook does not return.
**What this PR does / why we need it**: Adds an e2e test to simulate a webhook that never returns, making sure this does not simply hang the API server.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Ref https://github.com/kubernetes/features/issues/492
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56828, 55184, 56849, 57081, 56654). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use rbac client with explicit version
**What this PR does / why we need it**:
The RBAC client without an explicit version has been deprecated; change callers to the explicitly versioned client.
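The change is mechanical at each call site; a sketch of the pattern (the wrapper function is made up for illustration):
```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listClusterRoles migrates from the deprecated versionless accessor
// (clientset.Rbac()) to the explicitly versioned one (clientset.RbacV1()).
func listClusterRoles(clientset kubernetes.Interface) error {
	_, err := clientset.RbacV1().ClusterRoles().List(metav1.ListOptions{})
	return err
}
```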
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Partially fixes #55993
**Special notes for your reviewer**:
/cc @caesarxuchao @sttts
This is the cleanup for the last client. Hope there is a detailed plan to remove these deprecated clients.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56858, 57040, 56979, 57051, 57052). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test for custom metrics with Prometheus and Stackdriver
**What this PR does / why we need it**:
Add e2e test for custom metrics in a setup with a container exposing metrics in Prometheus format and prometheus-to-sd sidecar container scraping the metrics and writing them to Stackdriver.
Also some test refactoring for adding new tests more easily (plan to add scale down tests and a test with scaling on two metrics).
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56894, 56940, 55223, 56554, 54985). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Convert scheduler_perf tests to use subtest.
Combine four separate tests into a table-driven test that uses subtests
to logically organize the tests.
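A sketch of the table-driven shape (scenario names and fields are illustrative, not the actual scheduler_perf cases):
```go
func TestSchedulePods(t *testing.T) {
	cases := []struct {
		name  string
		nodes int
		pods  int
	}{
		{name: "100Nodes_1000Pods", nodes: 100, pods: 1000},
		{name: "1000Nodes_1000Pods", nodes: 1000, pods: 1000},
	}
	for _, tc := range cases {
		tc := tc // capture range variable for the closure
		t.Run(tc.name, func(t *testing.T) {
			// set up tc.nodes nodes, schedule tc.pods pods, assert throughput
		})
	}
}
```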
**What this PR does / why we need it**:
Simplifies integration test code in test/integration/scheduler_perf.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig scheduling
Automatic merge from submit-queue (batch tested with PRs 56250, 56809, 56812, 56792, 56724). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Hit ILB endpoint in e2e test
**What this PR does / why we need it**:
During the ILB e2e networking test, a pod is created and curls the ILB's IP address, validating that the load balancer correctly routes traffic to the K8s service. The test will also hit the external LB endpoint after switching from ILB to ELB, validating there are no GCE conversion issues going from ILB to ELB.
This does not validate that the load balancer is accessible from outside the cluster (but on the same network).
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56250, 56809, 56812, 56792, 56724). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Extend node e2e test suite with containerized Kubelet
How to run it?
```sh
make test-e2e-node \
TEST_ARGS='--kubelet-containerized=true --hyperkube-image=hyperkube-amd64:1.9 --kubelet-flags="<FLAGS>"' \
FOCUS="Conformance"
```
Automatic merge from submit-queue (batch tested with PRs 56410, 56707, 56661, 54998, 56722). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix typo in test comment.
**What this PR does / why we need it**:
Found a typo while reading scheduler tests.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig scheduling
Automatic merge from submit-queue (batch tested with PRs 56410, 56707, 56661, 54998, 56722). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Move some kubelet constants to a common place
**What this PR does / why we need it**:
For more context, see: https://github.com/kubernetes/kubernetes/issues/56516
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56516
[thanks @ixdy for verifying this!]
**Special notes for your reviewer**:
@ixdy how can I verify #56516 against this locally?
/cc @ixdy @mtaufen
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Log actual return code, not the default value.
**What this PR does / why we need it**:
Previously the code would log 127 no matter what, which is quite
useless.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56413, 56322, 56490, 56460, 56487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use policy client with explicit version
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** :
Partially fixes #55993
**Special notes for your reviewer**:
/cc @caesarxuchao @sttts
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 57211, 56150, 56368, 56271, 55957). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Process cluster-scoped owners correctly
Rework of https://github.com/kubernetes/kubernetes/pull/54943. Fixes #54940.
Uses correct scope info from the restmapper at the point of object lookup.
```release-note
Fixed a garbage collection race condition where objects with ownerRefs pointing to cluster-scoped objects could be deleted incorrectly.
```
Automatic merge from submit-queue (batch tested with PRs 56217, 56268, 56263, 56328, 56200). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use batch client with explicit version
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** :
Partially fixes #55993
**Special notes for your reviewer**:
/cc @caesarxuchao @sttts
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55900, 55995, 55913, 55467, 55376). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use extensions client with explicit version
**What this PR does / why we need it**:
The extensions client without an explicit version has been deprecated; change callers to the explicitly versioned client.
**Which issue(s) this PR fixes**:
Partially fixes #55993
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55954, 56037, 55866, 55984, 54994). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
kubelet summary api test updates
**What this PR does / why we need it**
Fixes https://github.com/kubernetes/kubernetes/issues/55985
Improve the accuracy of the test as follows:
- ensure memory bound checks for unconstrained group are limited by actual node capacity
- grow the fs capacity bounds so we can run on larger drives (i.e. my dev laptop)
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55954, 56037, 55866, 55984, 54994). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Create sig-autoscaling-maintainers alias
Create an alias group for sig-autoscaling.
This will be used by test/e2e/autoscaling/OWNERS.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54604, 55781, 55806, 55935, 55991). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use core client with explicit version
**What this PR does / why we need it**:
The core client without an explicit version has been deprecated; change callers to the explicitly versioned client.
**Which issue(s) this PR fixes**:
Partially fixes #55993
**Special notes for your reviewer**:
/cc @kubernetes/sig-api-machinery-pr-reviews
/cc @caesarxuchao @k82cn @sttts @kevin-wangzefeng
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix conformance testdata OWNERS file.
**What this PR does / why we need it**:
Fixes OWNERS file parse error.
**Release note**:
```release-note
none
```
/cc @mml
We shouldn't upgrade etcd without first warning the user that some etcd
version transitions can't be undone. We don't know what version the user
currently has, so we require either an explicit version and image, or an
interactive acknowledgement of this caveat.
This is modeled after the STORAGE_MEDIA_TYPE prompt just above.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make sure node is ready before calling getLocalNode to fix test failure.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/56518
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix GKE system spec for OS images with kernel version 4.10+
Two changes are required for validating images with kernel version 4.10+.
1. Update the spec in gke.yaml to allow 4.10+ kernel.
2. Update the GKE environment docker validation to allow `CONFIG_DEVPTS_MULTIPLE_INSTANCES` to be missing in kernels >= 4.8, because this option was removed in 4.8.
**Release note**:
```
None
```
/assign @dchen1107
Automatic merge from submit-queue (batch tested with PRs 56785, 56709). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Enable SD custom metric autoscaling e2e on GKE
This test should now be able to run on GKE, so enabling it there.
Automatic merge from submit-queue (batch tested with PRs 52748, 56623). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add brackets around IPv6 addrs in e2e test IP:port endpoints
There are several locations in the e2e tests where endpoints of the
form IPv6:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.
Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.
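In Go, `net.JoinHostPort` produces exactly this form and is presumably the right tool wherever e2e code builds such endpoints:
```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Brackets are added only when the host is an IPv6 literal.
	fmt.Println(net.JoinHostPort("10.0.0.1", "8080"))     // 10.0.0.1:8080
	fmt.Println(net.JoinHostPort("fd00:1234::2", "8080")) // [fd00:1234::2]:8080
}
```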
fixes #52746
**What this PR does / why we need it**:
This PR adds brackets around IPv6 addresses when they appear as part of an IPv6-addr:port endpoint
in the e2e tests. This is needed because any connection that attempts to use an IPv6-addr:port
endpoint without brackets surrounding the IPv6-addr will fail.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52746
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55360, 56444, 56687, 56791, 56802). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add DisabledForLargeClusters tag to audit tests.
Remove this tag once functionality from feature request #53455 is implemented.
Fixes #56683.
```release-note
NONE
```
Since Mesos is no longer in your main repository and since we have
things like dynamic kubelet configuration in progress, we should
drop these undocumented, untested, private hooks.
cmd/kubelet/app/server.go::CreateAPIServerClientConfig
CreateAPIServerClientConfig::getRuntime
pkg/kubelet/kubelet_pods.go::getPhase
Also remove stuff from Dependencies struct that were specific to
the Mesos integration (ContainerRuntimeOptions and Options)
Also remove stale references in test/e2e and and test owners file
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Update Dashboard addon to version 1.8.0 and align /ui redirect with it
**What this PR does / why we need it**: In Dashboard 1.8.0 we have introduced a couple of changes (security, settings, new resources etc.) and fixed a lot of bugs. You can check release notes at https://github.com/kubernetes/dashboard/releases/tag/v1.8.0.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Updated Dashboard add-on to version 1.8.0.
- The Dashboard add-on now deploys with https enabled
- The Dashboard can be accessed via kubectl proxy at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
- The /ui redirect is deprecated and will be removed in 1.10
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix CreateVolume func: use search mode instead
**What this PR does / why we need it**:
This is a small fallback for the CreateVolume func: use search mode for the Dedicated kind, as @rootfs suggested.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52396
**Special notes for your reviewer**:
I referenced the v1.6 implementation of the same CreateVolume func:
https://github.com/kubernetes/kubernetes/blob/release-1.6/pkg/cloudprovider/providers/azure/azure_storage.go#L213-L247
**Release note**:
```
fix azure storage account exhaustion issue by using azure disk mount
```
/sig azure
@rootfs @feiskyer @karataliu
Automatic merge from submit-queue (batch tested with PRs 56497, 56500, 55018, 56544, 56425). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
e2e: eviction test redirect dd stderr
**What this PR does / why we need it**: Redirects `dd` stderr to /dev/null
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56234
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix network value for stats summary for multiple network interfaces
This PR is part of [Heapster #1788](https://github.com/kubernetes/heapster/pull/1788).
The original reason is that when there is more than one network interface other than `lo`, `docker0`, and `veth*` (instead of just a single `eth0`), the reported network values are only partial and incorrect. For now, the summary stats API only reports values for the `eth0` interface.
The original issues about this can be found in [Heapster #1058](https://github.com/kubernetes/heapster/issues/1058) and [Cadvisor #1593](https://github.com/google/cadvisor/issues/1593).
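A sketch of the aggregation idea (types and the interface filter are illustrative, not the exact kubelet code):
```go
import "strings"

type ifaceStats struct{ RxBytes, TxBytes uint64 }

// aggregateNetStats sums rx/tx across all interfaces except loopback and
// container-side virtual devices, instead of reporting eth0 alone.
func aggregateNetStats(ifaces map[string]ifaceStats) (rx, tx uint64) {
	for name, s := range ifaces {
		if name == "lo" || strings.HasPrefix(name, "docker") || strings.HasPrefix(name, "veth") {
			continue
		}
		rx += s.RxBytes
		tx += s.TxBytes
	}
	return rx, tx
}
```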
```release-note
Fix stats summary network value when multiple network interfaces are available.
```
/cc @DirectXMan12 @piosz @xiangpengzhao @vishh @timstclair
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add [sig-node] to some unowned e2e_node tests
Follow the SIGDescribe pattern used in test/e2e/foo tests
ref #49161
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56356, 56435). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix AppArmor upgrade test
https://github.com/kubernetes/kubernetes/pull/52395 introduced the `unconfined` AppArmor profile, but this feature should not be tested in the upgrade test since it doesn't exist prior to 1.9 (so the test always fails when checking it prior to the upgrade).
Fixes #56422
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 56094, 52910, 55953, 56405, 56415). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Include ServerName in tls transport cache key
Fixes #56385
```release-note
Fixes server name verification of aggregated API servers and webhook admission endpoints
```
The mounttest container should keep running until the test decides to
give up (i.e., times out) and kills it. It should not exit prematurely by
itself. Bump the `--retry-timeout` to a much higher value.
Add a smoke test that checks for the existence of some representative
paths from the apiextensions-server and the kube-aggregator server,
both part of the delegation chain in kube-apiserver.
Automatic merge from submit-queue (batch tested with PRs 55952, 49112, 55450, 56178, 56151). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add PodDisruptionBudget support in pod preemption
**What this PR does / why we need it**:
This PR adds the logic to make scheduler preemption aware of PodDisruptionBudget. Preemption tries to avoid preempting pods whose PDBs are violated by preemption. If preemption does not find any other pods to preempt, it will preempt pods despite violating their PDBs.
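A self-contained sketch of the victim-selection order (names are illustrative, not the scheduler's actual API):
```go
type pod struct {
	name        string
	violatesPDB bool // precomputed: would evicting this pod violate its PDB?
}

// pickVictims prefers pods whose eviction keeps every PodDisruptionBudget
// intact; PDB-violating pods are preempted only as a last resort.
// enoughRoom stands in for the scheduler's fit check.
func pickVictims(candidates []pod, enoughRoom func([]pod) bool) []pod {
	var violating, nonViolating []pod
	for _, p := range candidates {
		if p.violatesPDB {
			violating = append(violating, p)
		} else {
			nonViolating = append(nonViolating, p)
		}
	}
	victims := nonViolating
	if !enoughRoom(victims) {
		victims = append(victims, violating...) // last resort: violate PDBs
	}
	return victims
}
```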
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #53913
**Special notes for your reviewer**:
**Release note**:
```release-note
Add PodDisruptionBudget support during pod preemption
```
ref/ #47604
/sig scheduling
Automatic merge from submit-queue (batch tested with PRs 55952, 49112, 55450, 56178, 56151). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
New API group for Events.
Fix kubernetes/features#383
cc @shyamjvs
```release-note
Add events.k8s.io api group with v1beta1 API containing redesigned Event type.
```
Automatic merge from submit-queue (batch tested with PRs 52767, 55065, 55148, 56228, 56221). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add Feature tag to CPU Manager node e2e test.
**What this PR does / why we need it**: Adds a `Feature` tag to the CPU manager node e2e tests.
CC @ConnorDoyle
Automatic merge from submit-queue (batch tested with PRs 52767, 55065, 55148, 56228, 56221). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
change DefaultGarbageCollectionPolicy to DeleteDependents for workloads controllers
**What this PR does / why we need it**:
As part of the apps/v1 GA effort (kubernetes/features#353) for v1.9. For core controllers, like `Deployment`, `DaemonSet`, `ReplicaSet`, and `StatefulSet`, changing the `DefaultGarbageCollectionPolicy` from `OrphanDependents` to `DeleteDependents` will make these objects consistent with the default behavior for all new objects.
For legacy API versions, the `DefaultGarbageCollectionPolicy` remains `OrphanDependents`.
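A simplified stand-in for the per-endpoint default (the real registry strategies consult the request's group/version from context):
```go
import "k8s.io/apiserver/pkg/registry/rest"

// defaultGCPolicy sketches the behavior change: deletions requested through
// apps/v1 default to DeleteDependents, legacy endpoints keep orphaning.
func defaultGCPolicy(requestGroupVersion string) rest.GarbageCollectionPolicy {
	if requestGroupVersion == "apps/v1" {
		return rest.DeleteDependents
	}
	return rest.OrphanDependents
}
```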
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #55027
**Special notes for your reviewer**:
/cc @enisoc @caesarxuchao @kow3ns
/assign @kubernetes/sig-apps-api-reviews
**Release note**:
```release-note
The default garbage collection policy for Deployment, DaemonSet, StatefulSet, and ReplicaSet has changed from OrphanDependents to DeleteDependents when the deletion is requested through an `apps/v1` endpoint. Clients using older endpoints will be unaffected. This change is only at the REST API level and is independent of the default behavior of particular clients (e.g. this does not affect the default for the kubectl `--cascade` flag).
If you upgrade your client-go libs and use the `AppsV1()` interface, please note that the default garbage collection behavior is changed.
```
Automatic merge from submit-queue (batch tested with PRs 52767, 55065, 55148, 56228, 56221). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
LimitRange e2e test improved.
**What this PR does / why we need it**: Improves the e2e test for LimitRange API by testing Update, Delete and Watch features.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Combine downward api e2e test cases.
**What this PR does / why we need it**:
This removes one pod create/delete cycle; hopefully it saves some test time.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55545, 55548, 55815, 56136, 56185). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Extend apiserver testserver
**What this PR does / why we need it**:
Extend the test pkg of api-server a little bit so that it can be used by others
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
We wanted to use this test pkg while creating a new integration test fixture. The idea of starting the api-server without actually starting a proper etcd (integration etcd) is very useful; it would be great to extend this for other use cases such as `insecure` mode too.
It would also be useful to have the working `tmpdir` returned so that we can store additional certs (such as service account public and private keys and a kubeconfig for this apiserver) if needed; these will be cleaned up automatically by the `teardownFn` when the api server terminates.
**Release note**:
```release-note
NONE
```
ref: https://github.com/kubernetes/kubernetes/pull/50144
Automatic merge from submit-queue (batch tested with PRs 51321, 55969, 55039, 56183, 55976). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use local `diskNames` slice to fix issue 56041
Fixes #56041
```release-note
NONE
```
@msau42
Automatic merge from submit-queue (batch tested with PRs 51321, 55969, 55039, 56183, 55976). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Topology aware volume scheduler and PV controller changes
**What this PR does / why we need it**:
Scheduler and PV controller changes to support volume topology aware scheduling, as specified in kubernetes/community#1168
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #54435
**Special notes for your reviewer**:
* I've split the PR into logical commits to make it easier to review
* I plan to address the remaining TODOs next release, unless you think they need to be done now
**Release note**:
```release-note
Adds alpha support for volume scheduling, which allows the scheduler to make PersistentVolume binding decisions while respecting the Pod's scheduling requirements. Dynamic provisioning is not supported with this feature yet.
Action required for existing users of the LocalPersistentVolumes alpha feature:
* The VolumeScheduling feature gate also has to be enabled on kube-scheduler and kube-controller-manager.
* The NoVolumeNodeConflict predicate has been removed. For non-default schedulers, update your scheduler policy.
* The CheckVolumeBinding predicate has to be enabled in non-default schedulers.
```
@kubernetes/sig-storage-pr-reviews @kubernetes/sig-scheduling-pr-reviews
This PR adds the pod-level metrics for CPU and memory stats. cAdvisor
can get all pod cgroup information, so we can derive these pod-level CPU and
memory stats from the corresponding pod cgroup.
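A sketch of the resulting Summary API shape (heavily abbreviated stand-ins for the kubelet stats v1alpha1 types):
```go
type CPUStats struct{ UsageNanoCores uint64 }
type MemoryStats struct{ WorkingSetBytes uint64 }

type ContainerStats struct {
	Name   string
	CPU    *CPUStats
	Memory *MemoryStats
}

// PodStats now carries pod-level CPU and memory read from the pod's own
// cgroup, alongside the existing per-container stats.
type PodStats struct {
	Containers []ContainerStats
	CPU        *CPUStats    // new: measured at the pod cgroup
	Memory     *MemoryStats // new: measured at the pod cgroup
}
```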
abstract out etcd server creation
test/integration/framework: cleanup master_utils.go
kube-apiserver: move StartTestServer tests into test/integration/master
Fix the failing scale test
kube-apiserver's TestServer now returns a struct instead of individual values
Automatic merge from submit-queue (batch tested with PRs 55103, 56036, 56186). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Removed opaque integer resources (deprecated in v1.8)
**What this PR does / why we need it**:
* Remove opaque integer resources (OIR) support from the code base. This feature was deprecated in v1.8 and replaced by Extended Resources (ER).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55102
**Release note**:
```release-note
Remove opaque integer resources (OIR) support (deprecated in v1.8.)
```
Automatic merge from submit-queue (batch tested with PRs 51494, 56097, 56072, 56175). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Kubelet flags take precedence over config from files/ConfigMaps
Changes the Kubelet configuration flag precedence order so that flags
take precedence over config from files/ConfigMaps.
See:
https://docs.google.com/document/d/18-MsChpTkrMGCSqAQN9QGgWuuFoK90SznBbwVkfZryo/
Also modifies e2e node test suite to transform all relevant Kubelet flags into
a config file before starting tests when the KubeletConfigFile feature gate is
true, and turns on the KubeletConfigFile gate for all e2e node tests.
This allows the alpha dynamic Kubelet config feature to continue to
work in tests after the precedence change.
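A sketch of the new precedence order (types and the flagIsSet hook are illustrative):
```go
type kubeletConfig struct{ MaxPods int32 }

// effectiveConfig starts from the file/ConfigMap-backed config, then
// overlays every flag the user explicitly set on the command line.
func effectiveConfig(fromFile, fromFlags kubeletConfig, flagIsSet func(string) bool) kubeletConfig {
	cfg := fromFile
	if flagIsSet("max-pods") {
		cfg.MaxPods = fromFlags.MaxPods // CLI flags win over file config
	}
	// ...repeat for each flag that was explicitly passed
	return cfg
}
```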
fixes #56171
Related: https://github.com/kubernetes/features/issues/281
```release-note
CLI flags passed to the Kubelet now take precedence over Kubelet config files and dynamic Kubelet config. This helps ensure backwards compatible behavior across Kubelet binary updates.
```
Automatic merge from submit-queue (batch tested with PRs 55340, 55329, 56168, 56170, 56105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds device plugin allocation latency metric.
For #53497
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 55812, 55752, 55447, 55848, 50984). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add Pod-level local ephemeral storage metric in Summary API
This PR adds pod-level ephemeral storage metric into Summary API.
Pod-level ephemeral storage usage is the sum of all containers and local
ephemeral volumes, including EmptyDir (if not backed by memory or
hugepages), configMap, and downwardAPI.
Address issue #55978
**Release note**:
```release-note
Add pod-level local ephemeral storage metric in Summary API. Pod-level ephemeral storage reports the total filesystem usage for the containers and emptyDir volumes in the measured Pod.
```
Automatic merge from submit-queue (batch tested with PRs 56128, 56004, 56083, 55833, 56042). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Graduate the admission and admissionregistration (webhook part) API to v1beta1
ref: kubernetes/features#492
Most changes are mechanical. Please take a look at the commit message to see if the commit is worth reviewing.
```release-note
Action required:
The `admission/v1alpha1` API has graduated to `v1beta1`. Please delete your existing webhooks before upgrading the cluster, and update your admission webhooks to use the latest API, because the API has backwards incompatible changes.
The webhook registration related part of the `admissionregistration` API has graduated to `v1beta1`. Please delete your existing configurations before upgrading the cluster, and update your configuration file to use the latest API.
```
Automatic merge from submit-queue (batch tested with PRs 56128, 56004, 56083, 55833, 56042). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't call f.BeforeEach() again in ingress suite
**What this PR does / why we need it**: Calling f.BeforeEach() explicitly in ingress suite is causing test panics. See #56089.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes (hopefully) #56089
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Changes the Kubelet configuration flag precedence order so that flags
take precedence over config from files/ConfigMaps.
See issue #56171 for more details.
Also modifies e2e node test suite to transform all relevant Kubelet
flags into a config file before starting tests when the
KubeletConfigFile feature gate is true, and turns on the
KubeletConfigFile gate for all e2e node tests. This allows the alpha
dynamic Kubelet config feature to continue to work in tests after
the precedence change.
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Improve messages around waiting for pods.
**What this PR does / why we need it**:
This is a step towards solving #55785
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add support to take nominated pods into account during scheduling to avoid starvation of higher priority pods
**What this PR does / why we need it**:
When a pod preempts lower priority pods, the preemptor gets a "nominated node name" annotation. We call such a pod a nominated pod. This PR adds the logic to take such nominated pods into account when scheduling other pods on the same node that the nominated pod is expected to run. This is needed to avoid starvation of preemptor pods. Otherwise, lower priority pods may fill up the space freed after preemption before the preemptor gets a chance to get scheduled.
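A self-contained sketch of the idea (the fit check is a stand-in for the real predicate evaluation):
```go
type podInfo struct {
	name     string
	priority int32
}

// fitsWithNominated judges whether pod p fits on a node while counting pods
// nominated to that node with priority >= p's as if they were already
// running, so p cannot occupy the space freed for a preemptor.
func fitsWithNominated(p podInfo, nominated []podInfo, fits func(assumed []podInfo) bool) bool {
	var assumed []podInfo
	for _, np := range nominated {
		if np.priority >= p.priority {
			assumed = append(assumed, np)
		}
	}
	return fits(assumed)
}
```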
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #54501
**Special notes for your reviewer**: This PR is built on top of #55109 and includes all the changes there as well.
**Release note**:
```release-note
Add support to take nominated pods into account during scheduling to avoid starvation of higher priority pods.
```
/sig scheduling
ref/ #47604
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Take disk requests into account during evictions
fixes #54314
This PR is part of the local storage feature, and it makes the eviction manager take disk requests into account during disk evictions.
This uses the same eviction strategy as we do for memory.
Disk requests are only considered when the LocalStorageCapacityIsolation feature gate is enabled. This is enforced by adding a check for the feature gate in getRequests().
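Roughly what that gate check looks like (summing only container requests here; the real helper covers more cases):
```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
)

// podDiskRequest returns the pod's ephemeral-storage request, or zero when
// the LocalStorageCapacityIsolation gate is off, preserving old behavior.
func podDiskRequest(pod *v1.Pod) resource.Quantity {
	if !utilfeature.DefaultFeatureGate.Enabled(features.LocalStorageCapacityIsolation) {
		return resource.Quantity{}
	}
	total := resource.Quantity{}
	for _, c := range pod.Spec.Containers {
		total.Add(*c.Resources.Requests.StorageEphemeral())
	}
	return total
}
```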
I have added unit testing to ensure that previous behavior is preserved when the feature gate is disabled.
Most of the changes are testing. Reviewers should focus on changes in **eviction/helpers.go**
/sig node
/assign @jingxu97 @vishh
Automatic merge from submit-queue (batch tested with PRs 56021, 55843, 55088, 56117, 55859). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Changes nvidia-gpu device plugin addon config settings:
- Runs as system critical pod
- Makes resource limits match its resource requests
- Modifies test/e2e/scheduling/nvidia-gpus.go to cope with the recent
change of running the device plugin as a system addon.
- The resource settings of the addon are based on the test results
from 8 nvidia-tesla-k80 gpus.
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 56021, 55843, 55088, 56117, 55859). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Extends deviceplugin to gracefully handle full device plugin lifecycle.
**What this PR does / why we need it**:
- Instead of using the cm.capacity field to communicate device plugin resource capacity,
this PR changes to use an explicit cm.GetDevicePluginResourceCapacity() function
that returns the device plugin resource capacity as well as any inactive device plugin
resources (see the sketch after this list).
Kubelet's syncNodeStatus calls this function during its periodic run to update node status
capacity and allocatable. After this call, the device plugin can remove the inactive device
plugin resources from its allDevices field, as the update has already been pushed to the API server.
- Extends device plugin checkpoint data to record registered resources
so that we can finish resource removing even upon kubelet restarts.
- Passes sourcesReady from kubelet to device plugin to avoid removing
inactive pods during the grace period of a kubelet restart.
- Extends gpu_device_plugin e2e_node test to verify that scheduled pods
can continue to run even after device plugin deletion and kubelet
restarts.
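A self-contained sketch of the bookkeeping from the first bullet (field and method names are illustrative, not the container manager's real API):
```go
// devicePluginState reports capacity for active plugin resources plus the
// names of inactive ones, so the node-status sync can zero them out before
// the tracker forgets them.
type devicePluginState struct {
	active   map[string]int64 // resource name -> device count, e.g. "nvidia.com/gpu": 8
	inactive []string         // resources whose plugin has gone away
}

func (s *devicePluginState) GetDevicePluginResourceCapacity() (map[string]int64, []string) {
	capacity := make(map[string]int64, len(s.active))
	for name, count := range s.active {
		capacity[name] = count
	}
	inactive := s.inactive
	s.inactive = nil // removal is finalized once reported to the status sync
	return capacity, inactive
}
```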
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Together with https://github.com/kubernetes/kubernetes/pull/54488, fixes https://github.com/kubernetes/kubernetes/issues/53395
**Special notes for your reviewer**:
**Release note**:
```release-note
Extends deviceplugin to gracefully handle full device plugin lifecycle.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fixes issue where PVCs using the `standard` StorageClass create PDs in the wrong zone in multi-zone GKE clusters
Fixes #50115
Changed GetAllZones to only get zones with nodes that are currently running (renamed to GetAllCurrentZones). Added E2E test to confirm this behavior.
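A sketch of the GetAllCurrentZones idea (fed by a node informer; the label key is the zone label of that era):
```go
import v1 "k8s.io/api/core/v1"

// zonesWithNodes derives the zone set from the nodes that actually exist,
// rather than from every zone in the GCE project.
func zonesWithNodes(nodes []*v1.Node) map[string]bool {
	zones := map[string]bool{}
	for _, n := range nodes {
		if z := n.Labels["failure-domain.beta.kubernetes.io/zone"]; z != "" {
			zones[z] = true
		}
	}
	return zones
}
```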
Automatic merge from submit-queue (batch tested with PRs 55112, 56029, 55740, 56095, 55845). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Updating vsphere cloud provider to support k8s cluster spread across multiple vCenters
**What this PR does / why we need it**:
vSphere cloud provider in Kubernetes 1.8 was designed to work only if all the nodes of the cluster are in one single datacenter folder. This is a hard restriction that prevents the cluster from spanning different folders/datacenters/vCenters. Users have use-cases to span the cluster across datacenters/vCenters.
**Which issue(s) this PR fixes**
Fixes https://github.com/vmware/kubernetes/issues/255
**Special notes for your reviewer**:
This is a change purely in vsphere cloud provider and no changes in kubernetes core are needed.
**Release note**:
```release-note
With this change:
- Users should be able to create a k8s cluster that spans across multiple ESXi clusters, datacenters, or even vCenters.
- vSphere cloud provider (VCP) uses the OS hostname and not the vSphere inventory VM name.
That means VCP can now handle cases where the user changes the VM inventory name.
- VCP can handle cases where the VM migrates to another ESXi cluster, datacenter, or vCenter.
The only requirement is shared storage; VCP needs shared storage on all Node VMs.
```
Internally tested and reviewed the code.
@tthole, @shaominchen, @abrarshivani
Automatic merge from submit-queue (batch tested with PRs 54824, 55911, 55730, 55979, 55961). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add options for mounting SCSI or NVMe local SSD through Block or Filesystem and do all of that with UUID
Fixes: #51431
Fixed version of: #53466
Mount SCSI local SSD by UUID in /mnt/disks/by-uuid/, also allows for users to request and mount NVMe disks. Both types of disks will be accessible either through block or file-system.
I have confirmed that it is no longer crashing when nodes are initialized on GKE.
- Runs as system critical pod
- Makes resource limits match its resource requests
- Modifies test/e2e/scheduling/nvidia-gpus.go to cope with the recent
change of running the device plugin as a system addon.
- The resource settings of the addon are based on the test results
from 8 nvidia-tesla-k80 gpus.
This PR adds pod-level ephemeral storage metric into Summary API.
Pod-level ephemeral storage usage is the sum of all containers and local
ephemeral volumes, including EmptyDir (if not backed by memory or
hugepages), configMap, and downwardAPI.
Changed GetAllZones to only get zones with nodes that are currently
running (renamed to GetAllCurrentZones). Added E2E test to confirm this
behavior.
Added node informer to cloud-provider controller to keep track of zones
with k8s nodes in them.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix potential unexpected object mutation that can lead to data races
**What this PR does / why we need it**:
In #51526 I introduced an optimization - do a deep copy instead of a to-and-from-JSON roundtrip to convert anything that implements `runtime.Unstructured`. I just discovered that the method used there, `UnstructuredContent()`, in both `Unstructured` and `UnstructuredList`, may mutate the original object.
2008750398/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured.go (L87-L92)
7c10cbc642/staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/unstructured_list.go (L58-L75)
This is problematic because previously (before #51526) there was no mutation, and because this is unexpected and may lead to data races - it is bad behaviour to mutate the original object when you just want a copy of it.
This PR fixes the issue.
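The defensive pattern, in a sketch (the actual fix lives in the converter; this only shows the idea):
```go
import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// safeContent deep-copies before calling UnstructuredContent(), since the
// accessor can mutate its receiver (e.g. materializing the "items" field
// on a list).
func safeContent(u *unstructured.Unstructured) map[string]interface{} {
	return u.DeepCopy().UnstructuredContent()
}
```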
Without the fix, the tests I've added fail because by the time the comparison is done the original object is no longer the same:
```
converter_test.go:154: Object changed, diff:
object.Object[items]:
a: []interface {}{}
b: <nil>
converter_test.go:154: Object changed, diff:
object.Object[items]:
a: []interface {}{map[string]interface {}{"kind":"Pod"}}
b: <nil>
```
However the underlying issue is not fixed here - `UnstructuredContent()` is brittle and dangerous. The method name does not suggest that it mutates data when you call it. And the godoc does not mention that either:
509df603b1/staging/src/k8s.io/apimachinery/pkg/runtime/interfaces.go (L233-L249)
Something needs to be done about it IMO.
Also, the `UnstructuredContent()` implementation in `UnstructuredList` does not implement the behaviour required by the godoc in `runtime.Unstructured`.
**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
/assign @sttts
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
NetworkPolicy e2e: named port egress test
**What this PR does / why we need it**:
Add an e2e NetworkPolicy test that ensures that an egress rule that specifies a named port properly applies to egress traffic.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52040
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Flaky Allocatable Setup Tests
**What this PR does / why we need it**:
Fixes a flaky node e2e serial test.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55830
**Special notes for your reviewer**:
The test was flaking because we were reading the node status before the restarted kubelet had written it.
This is fixed by waiting until we see an updated node status (looking at the condition's heartbeat time).
This also fixes an incorrect error message.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54837, 55970, 55912, 55898, 52977). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use framework.ConformanceIt for node e2e conformance tests
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref #54726 #53909
**Special notes for your reviewer**:
/cc @mml
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Table printers and server generation should always copy ListMeta
Tables should be a mapping from lists, so if the incoming object has ListMeta fields, add them to the table. Paging over server-side tables was broken without this. Add tests on the generic creater and on the resttest compatibility.
@deads2k
Automatic merge from submit-queue (batch tested with PRs 55642, 55897, 55835, 55496, 55313). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Disable container disk metrics when using the CRI stats integration
Issue: https://github.com/kubernetes/kubernetes/issues/51798
As explained in the issue, runtimes which make use of the CRI Stats API still have the performance overhead of collecting those same stats through cAdvisor.
The CRI Stats API has metrics for CPU, Memory, and Disk. This PR significantly reduces the added overhead due to collecting these stats in both cAdvisor and in the runtime.
This PR disables container disk metrics, which are very expensive to collect.
This PR does not disable node-level disk stats, as the "Raw" container handler does not currently respect ignoring DiskUsageMetrics.
This PR factors out the logic for determining whether or not to use the CRI stats provider into a helper function, as cAdvisor is instantiated before it is passed to the kubelet as a dependency.
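A hypothetical sketch of such a helper (name and logic are illustrative, not the PR's actual code):

```go
package example

// usesCRIStats reports whether the kubelet should rely on the CRI stats
// provider rather than cAdvisor: here, only for remote runtimes reached
// over a CRI endpoint, with built-in runtimes staying on cAdvisor.
func usesCRIStats(containerRuntime, remoteRuntimeEndpoint string) bool {
	return containerRuntime == "remote" && remoteRuntimeEndpoint != ""
}
```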
cc @kubernetes/sig-node-pr-reviews @derekwaynecarr
/kind feature
/sig node
/assign @Random-Liu @derekwaynecarr
Automatic merge from submit-queue (batch tested with PRs 54556, 55379, 55881, 55891, 55705). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds node auto-repair e2e tests.
This PR adds node auto-repair e2e tests.
- vsphere.conf (cloud-config) is now needed only on master node
- VCP uses OS hostname and not vSphere inventory name
- VCP is now resilient to VM inventory name change and VM migration
AdmissionResponse allows a mutating webhook to send the apiserver a JSON patch to mutate the object.
This reflects the imperative nature of AdmissionReview. It adds AdmissionRequest and AdmissionResponse in place of status/spec. The AdmissionResponse then allows the mutating webhook to send back a JSON patch with the mutated version of the requested object.
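A hedged sketch of the response shape this describes (the patch content is illustrative):

```go
package example

import "k8s.io/api/admission/v1beta1"

// mutateResponse allows the request and returns a JSON patch; note the
// pointer patch type mentioned below.
func mutateResponse() *v1beta1.AdmissionResponse {
	pt := v1beta1.PatchTypeJSONPatch
	return &v1beta1.AdmissionResponse{
		Allowed:   true,
		Patch:     []byte(`[{"op":"add","path":"/metadata/labels/mutated","value":"true"}]`),
		PatchType: &pt,
	}
}
```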
Fixed the integration test to clean up properly.
Switched test image to 1.8v5 to reflect API changes.
Made sure to cache the test framework client for the cleanup test code.
Switched to pointer for patch type.
Factored in @liggitt's feedback.
Factored in @lavalamp's feedback.
Automatic merge from submit-queue (batch tested with PRs 55254, 55525, 50108, 54674, 55263). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Correct clean up actions in e2e tests
**What this PR does / why we need it**:
Remove the duplicate "cleanup action" code in `test/e2e/e2e.go`, and use the clean up code in test/e2e/framework instead.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #55505
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Chunking is now beta and on by default. The kops job is still using etcd2, which does not support chunking, so flag the test as skipped until kops is updated to a supported etcd version.
Tables should be a mapping from lists, so if the incoming object has ListMeta fields, add them to the table. Allows paging over server-side tables. Add tests on the generic creater and on the resttest compatibility.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Lift embedded structure out of eviction-related KubeletConfiguration fields
- Changes the following KubeletConfiguration fields from `string` to
`map[string]string`:
- `EvictionHard`
- `EvictionSoft`
- `EvictionSoftGracePeriod`
- `EvictionMinimumReclaim`
- Adds flag parsing shims to maintain Kubelet's public flags API, while
enabling structured input in the file API.
- Also removes `kubeletconfig.ConfigurationMap`, which was an ad-hoc flag
parsing shim living in the kubeletconfig API group, and replaces it
with the `MapStringString` shim introduced in this PR. Flag parsing
shims belong in a common place, not in the kubeletconfig API.
I manually audited these to ensure that this wouldn't cause errors
parsing the command line for syntax that would have previously been
error free (`kubeletconfig.ConfigurationMap` was unique in that it
allowed keys to be provided on the CLI without values. I believe this was
done in `flags.ConfigurationMap` to facilitate the `--node-labels` flag,
which rightfully accepts value-free keys, and that this shim was then
just copied to `kubeletconfig`). Fortunately, the affected fields
(`ExperimentalQOSReserved`, `SystemReserved`, and `KubeReserved`) expect
non-empty strings in the values of the map, and as a result passing the
empty string is already an error. Thus requiring keys shouldn't break
anyone's scripts.
- Updates code and tests accordingly.
Regarding eviction operators, directionality is already implicit in the
signal type (for a given signal, the decision to evict will be made when
crossing the threshold from either above or below, never both). There is
no need to expose an operator, such as `<`, in the API. By changing
`EvictionHard` and `EvictionSoft` to `map[string]string`, this PR
simplifies the experience of working with these fields via the
`KubeletConfiguration` type. Again, flags stay the same.
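Illustratively, the same hard-eviction settings in the old flag-string form and the new structured form (values are examples only):

```go
package example

// Old: one comma-separated flag string, with the operator spelled out.
const evictionHardFlag = "memory.available<100Mi,nodefs.available<10%"

// New: a map in the file API; directionality is implied by the signal,
// so no operator appears.
var evictionHard = map[string]string{
	"memory.available": "100Mi",
	"nodefs.available": "10%",
}
```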
Other things:
- There is another flag parsing shim, `flags.ConfigurationMap`, from the
shared flag utility. The `NodeLabels` field still uses
`flags.ConfigurationMap`. This PR moves the allocation of the
`map[string]string` for the `NodeLabels` field from
`AddKubeletConfigFlags` to the defaulter for the external
`KubeletConfiguration` type. Flags are layered on top of an internal
object that has undergone conversion from a defaulted external object,
which means that previously the mere registration of flags would have
overwritten any previously-defined defaults for `NodeLabels` (fortunately
there were none).
Related: #53833 (lifting embedded structures out of string fields is part of getting this API to beta)
```release-note
The EvictionHard, EvictionSoft, EvictionSoftGracePeriod, EvictionMinimumReclaim, SystemReserved, and KubeReserved fields in the KubeletConfiguration object (kubeletconfig/v1alpha1) are now of type map[string]string, which facilitates writing JSON and YAML files.
```
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add e2e test for mount propagation
**What this PR does / why we need it**:
This adds e2e test for mount propagation introduced by #46444.
@kubernetes/sig-node-pr-reviews
/sig node
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 55697, 55631, 51905, 55647, 55826). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix "failed to access service" in e2e test
**What this PR does / why we need it**:
We should create the service before the deployments, as described in the issue.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes [#55696](https://github.com/kubernetes/kubernetes/issues/55696)
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55798, 49579, 54862, 55188, 51990). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add special tag for disabling ESIPP and HPA-related tests on large clusters
As discussed offline, this would help improve accountability for tests needing some love from a scalability perspective.
/cc @porridge
fyi - @MrHohn @MaciekPytel @mwielgus @crassirostris
@kubernetes/sig-scalability-misc
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Bump addon manager version used to 6.5
**What this PR does / why we need it**:
Bump the addon manager version to pick up #55466, which adds a leader-election-like mechanism to the addon manager.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
Release note copied from #55466. This is intended to be cherry-picked into the 1.7 and 1.8 branches.
**Release note**:
```release-note
Addon manager supports HA masters.
```
Automatic merge from submit-queue (batch tested with PRs 55682, 55444, 55456, 55717, 55131). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add sig storage label to multizone static PV test
**What this PR does / why we need it**:
Adds the sig-storage tag to the e2e test so it shows up on our testgrid dashboard.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55764, 55683, 55468, 54409, 55546). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Wait for server resources.
For https://github.com/kubernetes/kubernetes/issues/55768.
In the e2e test for containerd, I sometimes see the following failure (e.g. https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-cri-containerd-e2e-gci-gce/178):
```
Nov 15 02:40:31.291: Couldn't delete ns: "e2e-tests-container-probe-dcvlw": unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server could not find the requested resource (&discovery.ErrGroupDiscoveryFailed{Groups:map[schema.GroupVersion]error{schema.GroupVersion{Group:"metrics.k8s.io", Version:"v1beta1"}:(*errors.StatusError)(0xc420bfd170)}})
```
Usually, only the first few tests fail with this error. The error seems to be returned at this line https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/util.go#L1170.
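A rough sketch of such a wait (helper name and timeouts are illustrative); the key point is tolerating partial discovery failures from aggregated groups:

```go
package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/kubernetes"
)

// waitForServerResources polls discovery until aggregated API groups (such
// as metrics.k8s.io) stop producing partial-discovery errors.
func waitForServerResources(c kubernetes.Interface) error {
	return wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		if _, err := c.Discovery().ServerPreferredResources(); err != nil {
			if discovery.IsGroupDiscoveryFailedError(err) {
				return false, nil // an aggregated group is not ready yet; retry
			}
			return false, err
		}
		return true, nil
	})
}
```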
@cheftako @caesarxuchao Does this change make sense to you? Or should I wait for something else to become ready?
/cc @kubernetes/sig-api-machinery-pr-reviews
**Release note**:
```release-note
none
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add options for mounting SCSI or NVMe local SSDs through Block or Filesystem, and do all of that with UUID
Fixes: #51431
Mounts SCSI local SSDs by UUID in /mnt/disks/by-uuid/, and also allows users to request and mount NVMe disks. Both types of disks will be accessible either as block devices or through a filesystem.
To see code in progress for NVMe and block support see working branch: https://github.com/davidz627/kubernetes/tree/localExt
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support multizone clusters in GCE and GKE e2e tests
**What this PR does / why we need it**:
For multi-zone clusters we can't rely on the zone parameter when fetching information on Instance Groups. Instead, we first fetch the zone the group is in and use it in subsequent calls.
Note that the current version of the code does not work for multi-zone clusters at all.
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Node e2e: add a cloud-init script to disable live-restore in node e2e test
This cloud-init config will be used in tests in https://github.com/kubernetes/test-infra.
**Release note**:
```
None
```
/assign @yujuhong
/cc @abgworrall @dchen1107
Automatic merge from submit-queue (batch tested with PRs 55648, 55274, 54982, 51955, 55639). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Swap NetworkPolicy storage to networking.k8s.io/v1
Finishes(?) the NetworkPolicy v1 migration.
Fixes #50604
The integration test passes. I copied the test-update-storage-objects.sh change from #50327 and have no idea if it's right.
/cc @sttts @caesarxuchao @thockin
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
remove duplicated import
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53780, 55663, 55321, 52421, 55659). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
add hostIP and protocol to the hostport predicates
**What this PR does / why we need it**:
This PR adds "hostIP and protocol" to scheduler hostport predicate procedure
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51950
**Special notes for your reviewer**:
- [x] basic implementation, need review
- [x] e2e test
- [x] update doc (will be done in a separate PR)
**Release note**:
```release-note
Add hostIP and protocol to the original hostport predicate procedure in the scheduler.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Introduce new `VolumeAttachment` API Object
**What this PR does / why we need it**:
Introduce a new `VolumeAttachment` API Object. This object will be used by the CSI volume plugin to enable external attachers (see the design [here](https://github.com/kubernetes/community/pull/1258)). In the future, existing volume plugins can be refactored to use this object as well.
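For orientation, a simplified sketch of the object's shape (the served types live in k8s.io/api/storage/v1alpha1; this omits fields):

```go
package example

// VolumeAttachmentSpec captures the attach request.
type VolumeAttachmentSpec struct {
	Attacher string                 // name of the driver that must attach the volume
	Source   VolumeAttachmentSource // what to attach, e.g. a PersistentVolume
	NodeName string                 // node the volume should be attached to
}

type VolumeAttachmentSource struct {
	PersistentVolumeName *string
}

// VolumeAttachmentStatus is filled in by the (external) attacher.
type VolumeAttachmentStatus struct {
	Attached bool
}
```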
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: Part of issue https://github.com/kubernetes/features/issues/178
**Special notes for your reviewer**:
None
**Release note**:
```release-note
NONE
```
Introduce the v1alpha1 version of the Kubernetes storage API, and add a new VolumeAttachment object to that version. This object will initially be used only by the new CSI volume plugin. Eventually, existing volume plugins can be refactored to use it too.
The device plugin is now an addon and its manifest now lives in kubernetes/kubernetes. The manifest on GoogleCloudPlatform/container-engine-accelerators no longer contains the device plugin.
There are several locations in the e2e tests where endpoints of the
form IP:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.
Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.
fixes #52746
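A small Go illustration of the bracketing rule (net.JoinHostPort adds the brackets for IPv6 literals automatically; curl's -g/--globoff keeps it from interpreting them as globbing syntax):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	fmt.Println(net.JoinHostPort("10.0.0.1", "8080"))   // 10.0.0.1:8080
	fmt.Println(net.JoinHostPort("fd00::1234", "8080")) // [fd00::1234]:8080
}
```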
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Regional support in Cluster Autoscaler e2e tests.
**What this PR does / why we need it**:
When calling the GKE API and gcloud in autoscaling e2e tests, take into account that clusters can be regional.
This currently uses MultiZonal as an indicator that the cluster is regional, which is suboptimal, but considering that our tests do not work with multizonal clusters at the moment, there is no regression. This should be changed once there is an indicator available that the cluster is regional.
**Release note**:
```
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E test to verify pod failover during node power-off
**What this PR does / why we need it**:
This PR adds a test to verify volume status after the node where the pod was provisioned is powered off and the pod fails over to a different node.
Test performs following tasks:
1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass
**Which issue this PR fixes**:
Fixes https://github.com/vmware/kubernetes/issues/272
**Special notes for your reviewer**:
Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57: The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58: The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436 409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161 409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume status after node power off
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Webhook e2e test: PUT and PATCH operations
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Ref: https://github.com/kubernetes/features/issues/492
**Special notes for your reviewer**: ~depends on #55127~ (merged)
@kubernetes/sig-api-machinery-api-reviews
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Validate that PV capacity and PVC capacity requests are positive, greater than 0
**What this PR does / why we need it**: Zero (0) capacity PVs cause related pods to fail, and zero (0) capacity PVCs create zero (0) capacity PVs.
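A minimal sketch of the validation added (helper name is illustrative):

```go
package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// validatePositiveCapacity rejects zero or negative storage capacities,
// the failure mode described above for PVs and PVCs.
func validatePositiveCapacity(q resource.Quantity) error {
	if q.Sign() <= 0 {
		return fmt.Errorf("capacity %s must be greater than 0", q.String())
	}
	return nil
}
```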
**Which issue(s) this PR fixes** :
Fixes #55553
**Special notes for your reviewer**:
**Release note**:
```release-note
Validate positive capacity for PVs and PVCs.
```
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add empty dir and host related conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add empty dir and host related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds empty dir and host related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the empty dir and host based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, this change exports certain functions and fields in
framework/resource_usage_gatherer.go so that they can be used in any
e2e test to track the resource usage of specified pods with the specified
probe interval and duration.
**What this PR does / why we need it**:
We need to quantify the resource usage of the device plugin DaemonSet to make sure it can run reliably on nodes with GPUs.
We also want to measure gpu driver installer resource usage to track any unexpected resource consumption during driver installation.
For the latter part, see the related issue https://github.com/kubernetes/features/issues/368.
Example resource summary output:
Oct 6 12:35:07.289: INFO: Printing summary: ResourceUsageSummary
Oct 6 12:35:07.289: INFO: ResourceUsageSummary JSON
{
"100": [
{
"Name": "nvidia-device-plugin-6kqxp/nvidia-device-plugin",
"Cpu": 0.000507167,
"Mem": 2134016
},
{
"Name": "nvidia-device-plugin-6kqxp/nvidia-driver-installer",
"Cpu": 1.915508718,
"Mem": 663330816
},
{
"Name": "nvidia-device-plugin-l28zc/nvidia-device-plugin",
"Cpu": 0.000836256,
"Mem": 2211840
},
{
"Name": "nvidia-device-plugin-l28zc/nvidia-driver-installer",
"Cpu": 1.916886293,
"Mem": 691449856
},
{
"Name": "nvidia-device-plugin-xb4vh/nvidia-device-plugin",
"Cpu": 0.000515103,
"Mem": 2265088
},
{
"Name": "nvidia-device-plugin-xb4vh/nvidia-driver-installer",
"Cpu": 1.909435982,
"Mem": 832430080
}
],
"50": [
{
...
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 54005, 55127, 53850, 55486, 53440). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Validation webhook plugin converts objects to the external version before sending to webhooks
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
https://github.com/kubernetes/features/issues/492
**Special notes for your reviewer**:
**Release note**:
```release-note
The apiserver now sends external versioned objects to the admission webhooks. Please update the webhooks to expect admissionReview.spec.object.raw to be serialized external versions of objects.
```
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add downward api and docker container conformance annotations
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add downward api and docker container conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds downward api and docker container related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the downward api and docker container based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 54826, 53576, 55591, 54946, 54825). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add dns, configmap, and custom resource definition conformance
annotations.
Signed-off-by: Brad Topol <btopol@us.ibm.com>
Add dns, configmap, and custom resource definition related conformance annotations
/sig testing
/area conformance
@sig-testing-pr-reviews
This PR adds dns, configmap, and custom resource definition related conformance annotations to the e2e test suite.
The PR fixes a portion of #53822. It focuses on adding conformance annotations as defined by the Kubernetes Conformance Workgroup for a subset of the dns, configmap, and custom resource definition based e2e conformance tests.
Special notes for your reviewer:
Please see https://docs.google.com/spreadsheets/d/1WWSOqFaG35VmmPOYbwetapj1VPOVMqjZfR9ih5To5gk/edit#gid=62929400
for the list of SIG Arch approved test names and descriptions that I am using.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Support collecting logs for alternative container runtimes in e2e tests.
Fixes https://github.com/kubernetes/kubernetes/issues/55629.
Adds support for collecting logs for alternative container runtimes in e2e tests.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```
```release-note
none
```
/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
When calling the GKE API and gcloud, take into account that clusters can be regional.
This currently uses MultiZonal as an indicator that the cluster is regional, which is suboptimal, but considering that our tests do not work with multizonal clusters at the moment, there is no regression.
This should be changed once there is an indicator available that the cluster is regional.
Automatic merge from submit-queue (batch tested with PRs 55594, 47849, 54692, 55478, 54133). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use HPA permissions to read custom metrics in Custom Metrics e2e test
**What this PR does / why we need it**:
This PR fixes the e2e test for Stackdriver Custom Metrics on GKE. With PR https://github.com/kubernetes/kubernetes/pull/55387 it will also be necessary for the analogous test on GCE.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
[Part 1] Remove docker dep in kubelet startup
**What this PR does / why we need it**:
Remove the dependency on docker during kubelet startup.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part 1 of #54090
**Special notes for your reviewer**:
Changes include:
1. Move docker client initialization into dockershim pkg.
2. Pass a docker `ClientConfig` from kubelet to dockershim
3. Pass parameters needed by `FakeDockerClient` through `ClientConfig` to dockershim
(TODO, second part) Make dockershim tolerate dockerd being down; otherwise it will still fail kubelet.
Please note that after this PR, kubelet will still fail if dockerd is down; this will be fixed in a subsequent PR by making dockershim tolerate dockerd failure (initializing the docker client in a separate goroutine) and refactoring cgroup and log driver detection.
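A hypothetical sketch of the handoff described above (field names are illustrative, not the PR's exact struct):

```go
package example

import "time"

// ClientConfig carries the connection parameters the kubelet hands to
// dockershim; the docker client itself is now constructed inside dockershim.
type ClientConfig struct {
	DockerEndpoint            string
	RequestTimeout            time.Duration
	ImagePullProgressDeadline time.Duration
}
```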
**Release note**:
```release-note
Remove docker dependency during kubelet start up
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check for duplicate NodePorts with protocols when updating services
**What this PR does / why we need it**:
As the title says.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #48579, fixes #54898, fixes #55327
**Special notes for your reviewer**:
/assign @freehan
/cc @cblecker
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52461, 55503). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
A few improvements to the conformance regression tests
- Set OWNERS files to disallow parent approvers (doesn't work yet, but should be live next week).
- Document how to fix a failing test.
- Add a better error message.
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add performance test phase timing export.
**What this PR does / why we need it**:
First step towards allowing us to get a quick overview of test length via perf-dash.k8s.io.
**Release note**:
```release-note
NONE
```
@kubernetes/sig-scalability-feature-requests
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
missing the format string
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
missing the format string
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53047, 54861, 55413, 55395, 55308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Switch internal scale type to autoscaling, enable apps/v1 scale subresources
xref #49504
* Switch workload internal scale type to autoscaling.Scale (internal-only change)
* Enable scale subresources for apps/v1 deployments, replicasets, statefulsets
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix influxdb e2e test failure.
In scalability testing, influxdb was recently disabled, but we were still trying to execute the corresponding test, so it failed all the time. Skip the test if influxdb is disabled.
Fixes #54636
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55394, 55412). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler
This PR adds e2e tests for Pod Priority and Preemption in Cluster Autoscaler (the notion of an expendable pod is sketched after this list):
- shouldn't scale up when expendable pod is created
- should scale up when non expendable pod is created
- shouldn't scale up when expendable pod is preempted
- should scale down when expendable pod is running
- shouldn't scale down when non expendable pod is running
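Here, an "expendable" pod is one whose priority falls below the autoscaler's configurable cutoff (exposed as a flag along the lines of `--expendable-pods-priority-cutoff`, if I recall correctly). A minimal sketch of the filtering rule only, not the autoscaler's actual code:
```go
package main

import "fmt"

// pod is a minimal stand-in for a scheduled pod with a priority value
// (in core/v1, PodSpec.Priority is an *int32).
type pod struct {
	name     string
	priority int32
}

// filterExpendable drops pods whose priority is below the cutoff: the
// autoscaler ignores them when deciding to scale up, and treats nodes
// running only such pods as candidates for scale-down.
func filterExpendable(pods []pod, cutoff int32) (important []pod) {
	for _, p := range pods {
		if p.priority >= cutoff {
			important = append(important, p)
		}
	}
	return important
}

func main() {
	pods := []pod{{"critical", 1000}, {"expendable", -100}}
	fmt.Println(filterExpendable(pods, -10)) // only "critical" remains
}
```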
Automatic merge from submit-queue (batch tested with PRs 55265, 54092, 55353, 53733, 55385). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E Performance test to print latency numbers for vsphere volume lifecycle operations
**What this PR does / why we need it**:
This PR introduces a test that prints latency numbers for volume lifecycle operations (a timing sketch follows the list below).
The operations that are evaluated are:
1. Create n number of PVCs
2. Create pods with these PVCs and ensure pods are in ready state
3. Delete pods
4. Delete the PVCs
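The averaged latency numbers at the bottom of the log are conceptually just timers around each of the four phases above. A sketch with hypothetical stand-ins for the e2e helpers:
```go
package main

import (
	"fmt"
	"time"
)

// timeOperation times one lifecycle phase; the fn arguments below are
// stand-ins for the e2e helpers that do the real work.
func timeOperation(name string, fn func()) time.Duration {
	start := time.Now()
	fn()
	d := time.Since(start)
	fmt.Printf("%s: %d microseconds\n", name, d.Microseconds())
	return d
}

func main() {
	fakeWork := func(d time.Duration) func() { return func() { time.Sleep(d) } }

	timeOperation("Creating 12 PVCs and waiting for bound phase", fakeWork(30*time.Millisecond))
	timeOperation("Creating 4 Pods", fakeWork(90*time.Millisecond))
	timeOperation("Deleting 4 Pods and waiting for detach", fakeWork(150*time.Millisecond))
	timeOperation("Deleting 12 PVCs", fakeWork(1*time.Millisecond))
}
```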
**Which issue this PR fixes** : fixes vmware#292
**Special notes for your reviewer**:
1. This PR has some duplicate code changes from existing open PRs that add e2e tests. If those PRs are merged first, I'll rebase this PR to avoid redundant changes.
2. Following are the test logs with a total of 12 volumes, 4 volumes per pod, and 3 test iterations.
<details>
Test logs:
```
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build041717622/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:11:29 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:11:29 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:11:29 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:11:29 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:11:29 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:11:29 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:11:29 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 280.313212ms
2017/10/16 15:11:29 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:11:30 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 156.135002ms
2017/10/16 15:11:30 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:11:30.867: INFO: Overriding default scale value of zero to 1
Oct 16 15:11:30.867: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:11:30.981146 6068 e2e.go:383] Starting e2e run "f687717b-b2be-11e7-b207-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508191890 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:11:31.007: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:11:31.018: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:11:31.061: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:11:31.155: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:11:31.155: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:11:31.163: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:11:31.163: INFO: Dumping network health container logs from all nodes...
Oct 16 15:11:31.177: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:11:31.181: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:11:31.183: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:11:31.708: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-5rrtp to have phase Bound
Oct 16 15:11:31.718: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:33.730: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:35.737: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:37.747: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:39.753: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:41.763: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:43.774: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:45.814: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:47.839: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:49.852: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:51.869: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:53.877: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:55.888: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:57.896: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:11:59.904: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:01.916: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:03.941: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:05.947: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:07.957: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:09.985: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:12.002: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:14.009: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:16.017: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:18.026: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:20.034: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:22.096: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:24.116: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:26.124: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:28.134: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:30.147: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:32.153: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:34.162: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:36.177: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:38.185: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:40.193: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:42.203: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:44.210: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:46.217: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:48.227: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:50.236: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:52.242: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:54.258: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:56.268: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:12:58.290: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:00.304: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:02.321: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:04.330: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:06.338: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:08.345: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:10.351: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:12.367: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:14.384: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:16.394: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:18.410: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:20.421: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:22.430: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:24.439: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:26.448: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:28.465: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:30.473: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:32.482: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:34.490: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:36.500: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:38.510: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:40.517: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
Oct 16 15:13:42.527: INFO: PersistentVolumeClaim pvc-5rrtp found but phase is Pending instead of Bound.
^C2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:176: Killing ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance(-5981) after receiving signal
2017/10/16 15:13:43 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 2m13.976765704s
2017/10/16 15:13:43 main.go:260: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance: signal: killed]
2017/10/16 15:13:43 e2e.go:79: err: exit status 1
exit status 1
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$
pshahzeb-m01:kubernetes_2 pshahzeb$ make
+++ [1016 15:14:25] Building the toolchain targets:
k8s.io/kubernetes/hack/cmd/teststale
k8s.io/kubernetes/vendor/github.com/jteeuwen/go-bindata/go-bindata
+++ [1016 15:14:25] Generating bindata:
test/e2e/generated/gobindata_util.go
~/k8s/kubernetes_2 ~/k8s/kubernetes_2/test/e2e/generated
~/k8s/kubernetes_2/test/e2e/generated
+++ [1016 15:14:26] Building go targets for darwin/amd64:
cmd/kube-proxy
cmd/kube-apiserver
cmd/kube-controller-manager
cmd/cloud-controller-manager
cmd/kubelet
cmd/kubeadm
cmd/hyperkube
vendor/k8s.io/kube-aggregator
vendor/k8s.io/apiextensions-apiserver
plugin/cmd/kube-scheduler
cmd/kubectl
federation/cmd/kubefed
cmd/gendocs
cmd/genkubedocs
cmd/genman
cmd/genyaml
cmd/genswaggertypedocs
cmd/linkcheck
federation/cmd/genfeddocs
vendor/github.com/onsi/ginkgo/ginkgo
test/e2e/e2e.test
cmd/kubemark
vendor/github.com/onsi/ginkgo/ginkgo
cmd/gke-certificates-controller
pshahzeb-m01:kubernetes_2 pshahzeb$ go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=vcp-performance'
flag provided but not defined: -check-version-skew
Usage of /var/folders/97/lnlv1n317xl2ty8hdn7zptxr00b37m/T/go-build763038738/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/16 15:16:03 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/16 15:16:03 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/16 15:16:03 e2e.go:57: The separator is required to use --get or --old flags
2017/10/16 15:16:03 e2e.go:58: The -- flag separator also suppresses this message
2017/10/16 15:16:03 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=vcp-performance...
2017/10/16 15:16:03 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/16 15:16:03 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 163.149145ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17390+60c9e59ad2b417-dirty", GitCommit:"60c9e59ad2b4179a4b6e89343cfeb9eb73a9d6b7", GitTreeState:"dirty", BuildDate:"2017-10-13T18:35:56Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1181+77b83e446b4e65", GitCommit:"77b83e446b4e655a71c315ad3f3890dc2a220ccf", GitTreeState:"clean", BuildDate:"2017-10-16T07:07:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
2017/10/16 15:16:03 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 168.158343ms
2017/10/16 15:16:03 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance
Conformance test: not doing test setup.
Oct 16 15:16:04.325: INFO: Overriding default scale value of zero to 1
Oct 16 15:16:04.325: INFO: Overriding default milliseconds value of zero to 5000
I1016 15:16:04.425919 8714 e2e.go:383] Starting e2e run "9984ec93-b2bf-11e7-810d-784f435ee632" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508192163 - Will randomize all specs
Will run 1 of 706 specs
Oct 16 15:16:04.443: INFO: >>> kubeConfig: /tmp/kube199.json
Oct 16 15:16:04.453: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 16 15:16:04.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 16 15:16:04.598: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 16 15:16:04.598: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 16 15:16:04.607: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 16 15:16:04.607: INFO: Dumping network health container logs from all nodes...
Oct 16 15:16:04.626: INFO: Client version: v1.6.0-alpha.0.17391+4a39b17440feee-dirty
Oct 16 15:16:04.631: INFO: Server version: v1.9.0-alpha.1.1181+77b83e446b4e65
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] vcp-performance
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 16 15:16:04.632: INFO: >>> kubeConfig: /tmp/kube199.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:68
[It] vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
STEP: Creating Storage Class : sc-default
STEP: Creating Storage Class : sc-vsan
STEP: Creating Storage Class : sc-spbm
STEP: Creating Storage Class : sc-user-specified-ds
STEP: Creating 12 PVCs
Oct 16 15:16:05.313: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-l9tg4 to have phase Bound
Oct 16 15:16:05.359: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:07.381: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:09.389: INFO: PersistentVolumeClaim pvc-l9tg4 found but phase is Pending instead of Bound.
Oct 16 15:16:11.404: INFO: PersistentVolumeClaim pvc-l9tg4 found and phase=Bound (6.090428509s)
Oct 16 15:16:11.462: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-j9m85 to have phase Bound
Oct 16 15:16:11.476: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:13.489: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:15.502: INFO: PersistentVolumeClaim pvc-j9m85 found but phase is Pending instead of Bound.
Oct 16 15:16:17.509: INFO: PersistentVolumeClaim pvc-j9m85 found and phase=Bound (6.046381507s)
Oct 16 15:16:17.543: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mc77p to have phase Bound
Oct 16 15:16:17.558: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:19.592: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:21.598: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:23.609: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:25.618: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:27.655: INFO: PersistentVolumeClaim pvc-mc77p found but phase is Pending instead of Bound.
Oct 16 15:16:29.699: INFO: PersistentVolumeClaim pvc-mc77p found and phase=Bound (12.155659079s)
Oct 16 15:16:29.801: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-2j86v to have phase Bound
Oct 16 15:16:29.815: INFO: PersistentVolumeClaim pvc-2j86v found and phase=Bound (14.767532ms)
Oct 16 15:16:29.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-q7rsq to have phase Bound
Oct 16 15:16:29.882: INFO: PersistentVolumeClaim pvc-q7rsq found but phase is Pending instead of Bound.
Oct 16 15:16:31.896: INFO: PersistentVolumeClaim pvc-q7rsq found and phase=Bound (2.048751822s)
Oct 16 15:16:31.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qsh8l to have phase Bound
Oct 16 15:16:31.943: INFO: PersistentVolumeClaim pvc-qsh8l found and phase=Bound (14.944175ms)
Oct 16 15:16:31.975: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-52pcj to have phase Bound
Oct 16 15:16:31.993: INFO: PersistentVolumeClaim pvc-52pcj found and phase=Bound (17.704673ms)
Oct 16 15:16:32.021: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-v5x89 to have phase Bound
Oct 16 15:16:32.043: INFO: PersistentVolumeClaim pvc-v5x89 found and phase=Bound (21.44398ms)
Oct 16 15:16:32.073: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-f9pnm to have phase Bound
Oct 16 15:16:32.096: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:34.163: INFO: PersistentVolumeClaim pvc-f9pnm found but phase is Pending instead of Bound.
Oct 16 15:16:36.174: INFO: PersistentVolumeClaim pvc-f9pnm found and phase=Bound (4.100911147s)
Oct 16 15:16:36.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-m5fqt to have phase Bound
Oct 16 15:16:36.239: INFO: PersistentVolumeClaim pvc-m5fqt found and phase=Bound (14.819033ms)
Oct 16 15:16:36.284: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mbsvx to have phase Bound
Oct 16 15:16:36.302: INFO: PersistentVolumeClaim pvc-mbsvx found and phase=Bound (18.02845ms)
Oct 16 15:16:36.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-s4sr2 to have phase Bound
Oct 16 15:16:36.352: INFO: PersistentVolumeClaim pvc-s4sr2 found and phase=Bound (17.921955ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:17:57.069: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:57.397: INFO: stderr: ""
Oct 16 15:17:57.397: INFO: stdout: ""
Oct 16 15:17:57.527: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:57.836: INFO: stderr: ""
Oct 16 15:17:57.836: INFO: stdout: ""
Oct 16 15:17:57.981: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-hrfpv --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:58.290: INFO: stderr: ""
Oct 16 15:17:58.290: INFO: stdout: ""
Oct 16 15:17:58.421: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:17:58.755: INFO: stderr: ""
Oct 16 15:17:58.755: INFO: stdout: ""
Oct 16 15:17:58.884: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:17:59.188: INFO: stderr: ""
Oct 16 15:17:59.188: INFO: stdout: ""
Oct 16 15:17:59.287: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vkgvj --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:17:59.602: INFO: stderr: ""
Oct 16 15:17:59.602: INFO: stdout: ""
Oct 16 15:17:59.721: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:00.101: INFO: stderr: ""
Oct 16 15:18:00.101: INFO: stdout: ""
Oct 16 15:18:00.265: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:00.611: INFO: stderr: ""
Oct 16 15:18:00.611: INFO: stdout: ""
Oct 16 15:18:00.720: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-wvnrg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:01.092: INFO: stderr: ""
Oct 16 15:18:01.092: INFO: stdout: ""
Oct 16 15:18:01.212: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:18:01.589: INFO: stderr: ""
Oct 16 15:18:01.589: INFO: stdout: ""
Oct 16 15:18:01.694: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:18:02.023: INFO: stderr: ""
Oct 16 15:18:02.023: INFO: stdout: ""
Oct 16 15:18:02.502: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vdb6s --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:18:02.805: INFO: stderr: ""
Oct 16 15:18:02.805: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:18:02.807: INFO: Deleting pod "pvc-tester-hrfpv" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:02.842: INFO: Wait up to 5m0s for pod "pvc-tester-hrfpv" to be fully deleted
Oct 16 15:18:42.875: INFO: Deleting pod "pvc-tester-vkgvj" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:18:42.913: INFO: Wait up to 5m0s for pod "pvc-tester-vkgvj" to be fully deleted
Oct 16 15:19:24.937: INFO: Deleting pod "pvc-tester-wvnrg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:24.971: INFO: Wait up to 5m0s for pod "pvc-tester-wvnrg" to be fully deleted
Oct 16 15:19:56.990: INFO: Deleting pod "pvc-tester-vdb6s" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:19:57.025: INFO: Wait up to 5m0s for pod "pvc-tester-vdb6s" to be fully deleted
Oct 16 15:20:41.866: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a1d277f-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a21e539-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a287a26-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99f9f244-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fe7a20-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-99fff232-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a033865-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0813e3-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0a963e-b2bf-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a0f575d-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a12e997-b2bf-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-9a17cfa2-b2bf-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:20:41.872: INFO: Deleting PersistentVolumeClaim "pvc-l9tg4"
Oct 16 15:20:41.919: INFO: Deleting PersistentVolumeClaim "pvc-j9m85"
Oct 16 15:20:41.975: INFO: Deleting PersistentVolumeClaim "pvc-mc77p"
Oct 16 15:20:42.027: INFO: Deleting PersistentVolumeClaim "pvc-2j86v"
Oct 16 15:20:42.082: INFO: Deleting PersistentVolumeClaim "pvc-q7rsq"
Oct 16 15:20:42.147: INFO: Deleting PersistentVolumeClaim "pvc-qsh8l"
Oct 16 15:20:42.224: INFO: Deleting PersistentVolumeClaim "pvc-52pcj"
Oct 16 15:20:42.259: INFO: Deleting PersistentVolumeClaim "pvc-v5x89"
Oct 16 15:20:42.316: INFO: Deleting PersistentVolumeClaim "pvc-f9pnm"
Oct 16 15:20:42.369: INFO: Deleting PersistentVolumeClaim "pvc-m5fqt"
Oct 16 15:20:42.409: INFO: Deleting PersistentVolumeClaim "pvc-mbsvx"
Oct 16 15:20:42.448: INFO: Deleting PersistentVolumeClaim "pvc-s4sr2"
STEP: Creating 12 PVCs
Oct 16 15:20:42.807: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85px8 to have phase Bound
Oct 16 15:20:42.832: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:44.845: INFO: PersistentVolumeClaim pvc-85px8 found but phase is Pending instead of Bound.
Oct 16 15:20:46.943: INFO: PersistentVolumeClaim pvc-85px8 found and phase=Bound (4.13527333s)
Oct 16 15:20:47.032: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-npbn8 to have phase Bound
Oct 16 15:20:47.048: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:49.086: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:51.097: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:53.108: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:55.128: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:57.148: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:20:59.160: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:01.172: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:03.185: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:05.194: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:07.223: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:09.239: INFO: PersistentVolumeClaim pvc-npbn8 found but phase is Pending instead of Bound.
Oct 16 15:21:11.261: INFO: PersistentVolumeClaim pvc-npbn8 found and phase=Bound (24.228554172s)
Oct 16 15:21:11.285: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ts6b8 to have phase Bound
Oct 16 15:21:11.298: INFO: PersistentVolumeClaim pvc-ts6b8 found and phase=Bound (12.795195ms)
Oct 16 15:21:11.325: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-hqb5d to have phase Bound
Oct 16 15:21:11.336: INFO: PersistentVolumeClaim pvc-hqb5d found and phase=Bound (11.085933ms)
Oct 16 15:21:11.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pzlmw to have phase Bound
Oct 16 15:21:11.374: INFO: PersistentVolumeClaim pvc-pzlmw found and phase=Bound (14.757981ms)
Oct 16 15:21:11.400: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4mljw to have phase Bound
Oct 16 15:21:11.426: INFO: PersistentVolumeClaim pvc-4mljw found and phase=Bound (25.6641ms)
Oct 16 15:21:11.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-mz5br to have phase Bound
Oct 16 15:21:11.462: INFO: PersistentVolumeClaim pvc-mz5br found and phase=Bound (11.515099ms)
Oct 16 15:21:11.492: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-7fk8x to have phase Bound
Oct 16 15:21:11.505: INFO: PersistentVolumeClaim pvc-7fk8x found and phase=Bound (13.387584ms)
Oct 16 15:21:11.530: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-cb2dp to have phase Bound
Oct 16 15:21:11.550: INFO: PersistentVolumeClaim pvc-cb2dp found and phase=Bound (19.152805ms)
Oct 16 15:21:11.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-85sqf to have phase Bound
Oct 16 15:21:11.599: INFO: PersistentVolumeClaim pvc-85sqf found and phase=Bound (14.406407ms)
Oct 16 15:21:11.632: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8zdmg to have phase Bound
Oct 16 15:21:11.651: INFO: PersistentVolumeClaim pvc-8zdmg found and phase=Bound (18.063182ms)
Oct 16 15:21:11.683: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-nntqr to have phase Bound
Oct 16 15:21:11.694: INFO: PersistentVolumeClaim pvc-nntqr found and phase=Bound (10.97945ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:23:16.187: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:16.646: INFO: stderr: ""
Oct 16 15:23:16.646: INFO: stdout: ""
Oct 16 15:23:16.755: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:17.090: INFO: stderr: ""
Oct 16 15:23:17.090: INFO: stdout: ""
Oct 16 15:23:17.184: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-dpsht --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:17.509: INFO: stderr: ""
Oct 16 15:23:17.510: INFO: stdout: ""
Oct 16 15:23:17.606: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:17.910: INFO: stderr: ""
Oct 16 15:23:17.910: INFO: stdout: ""
Oct 16 15:23:18.007: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:18.324: INFO: stderr: ""
Oct 16 15:23:18.324: INFO: stdout: ""
Oct 16 15:23:18.417: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-kt8wp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:18.718: INFO: stderr: ""
Oct 16 15:23:18.719: INFO: stdout: ""
Oct 16 15:23:18.818: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:19.137: INFO: stderr: ""
Oct 16 15:23:19.137: INFO: stdout: ""
Oct 16 15:23:19.244: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:19.556: INFO: stderr: ""
Oct 16 15:23:19.556: INFO: stdout: ""
Oct 16 15:23:19.638: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-lckz2 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:19.961: INFO: stderr: ""
Oct 16 15:23:19.961: INFO: stdout: ""
Oct 16 15:23:20.060: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:23:20.365: INFO: stderr: ""
Oct 16 15:23:20.365: INFO: stdout: ""
Oct 16 15:23:20.464: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:23:20.837: INFO: stderr: ""
Oct 16 15:23:20.838: INFO: stdout: ""
Oct 16 15:23:20.948: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-vrjxc --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:23:21.258: INFO: stderr: ""
Oct 16 15:23:21.258: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:23:21.258: INFO: Deleting pod "pvc-tester-dpsht" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:23:21.299: INFO: Wait up to 5m0s for pod "pvc-tester-dpsht" to be fully deleted
Oct 16 15:24:03.361: INFO: Deleting pod "pvc-tester-kt8wp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:03.397: INFO: Wait up to 5m0s for pod "pvc-tester-kt8wp" to be fully deleted
Oct 16 15:24:45.415: INFO: Deleting pod "pvc-tester-lckz2" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:24:45.452: INFO: Wait up to 5m0s for pod "pvc-tester-lckz2" to be fully deleted
Oct 16 15:25:23.476: INFO: Deleting pod "pvc-tester-vrjxc" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:25:23.510: INFO: Wait up to 5m0s for pod "pvc-tester-vrjxc" to be fully deleted
Oct 16 15:26:07.784: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f7e96b8-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f825cec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8627c5-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f89ca32-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f8cd95e-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f900995-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6a76ec-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6d2d17-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f6f2a1a-b2c0-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f72bfae-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f760aab-b2c0-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-3f791671-b2c0-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:26:07.784: INFO: Deleting PersistentVolumeClaim "pvc-85px8"
Oct 16 15:26:07.854: INFO: Deleting PersistentVolumeClaim "pvc-npbn8"
Oct 16 15:26:07.900: INFO: Deleting PersistentVolumeClaim "pvc-ts6b8"
Oct 16 15:26:07.954: INFO: Deleting PersistentVolumeClaim "pvc-hqb5d"
Oct 16 15:26:08.003: INFO: Deleting PersistentVolumeClaim "pvc-pzlmw"
Oct 16 15:26:08.044: INFO: Deleting PersistentVolumeClaim "pvc-4mljw"
Oct 16 15:26:08.090: INFO: Deleting PersistentVolumeClaim "pvc-mz5br"
Oct 16 15:26:08.130: INFO: Deleting PersistentVolumeClaim "pvc-7fk8x"
Oct 16 15:26:08.183: INFO: Deleting PersistentVolumeClaim "pvc-cb2dp"
Oct 16 15:26:08.230: INFO: Deleting PersistentVolumeClaim "pvc-85sqf"
Oct 16 15:26:08.282: INFO: Deleting PersistentVolumeClaim "pvc-8zdmg"
Oct 16 15:26:08.337: INFO: Deleting PersistentVolumeClaim "pvc-nntqr"
STEP: Creating 12 PVCs
Oct 16 15:26:08.691: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jwmql to have phase Bound
Oct 16 15:26:08.716: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:10.732: INFO: PersistentVolumeClaim pvc-jwmql found but phase is Pending instead of Bound.
Oct 16 15:26:12.754: INFO: PersistentVolumeClaim pvc-jwmql found and phase=Bound (4.062803231s)
Oct 16 15:26:12.789: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jhrg7 to have phase Bound
Oct 16 15:26:12.801: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:14.817: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:16.834: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:18.854: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:20.871: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:22.888: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:24.901: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:26.918: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:28.929: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:30.941: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:32.958: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:34.976: INFO: PersistentVolumeClaim pvc-jhrg7 found but phase is Pending instead of Bound.
Oct 16 15:26:37.013: INFO: PersistentVolumeClaim pvc-jhrg7 found and phase=Bound (24.222741938s)
Oct 16 15:26:37.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-lvvkl to have phase Bound
Oct 16 15:26:37.055: INFO: PersistentVolumeClaim pvc-lvvkl found and phase=Bound (12.935683ms)
Oct 16 15:26:37.078: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-bgkkc to have phase Bound
Oct 16 15:26:37.088: INFO: PersistentVolumeClaim pvc-bgkkc found and phase=Bound (9.861689ms)
Oct 16 15:26:37.109: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-qt2lv to have phase Bound
Oct 16 15:26:37.126: INFO: PersistentVolumeClaim pvc-qt2lv found and phase=Bound (17.393667ms)
Oct 16 15:26:37.147: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-pgs9s to have phase Bound
Oct 16 15:26:37.158: INFO: PersistentVolumeClaim pvc-pgs9s found but phase is Pending instead of Bound.
Oct 16 15:26:39.171: INFO: PersistentVolumeClaim pvc-pgs9s found and phase=Bound (2.023756794s)
Oct 16 15:26:39.217: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-8h942 to have phase Bound
Oct 16 15:26:39.249: INFO: PersistentVolumeClaim pvc-8h942 found and phase=Bound (32.347782ms)
Oct 16 15:26:39.282: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-phtvg to have phase Bound
Oct 16 15:26:39.296: INFO: PersistentVolumeClaim pvc-phtvg found and phase=Bound (13.940285ms)
Oct 16 15:26:39.321: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-ldv2f to have phase Bound
Oct 16 15:26:39.333: INFO: PersistentVolumeClaim pvc-ldv2f found and phase=Bound (11.888903ms)
Oct 16 15:26:39.360: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-4v9hf to have phase Bound
Oct 16 15:26:39.375: INFO: PersistentVolumeClaim pvc-4v9hf found and phase=Bound (14.230796ms)
Oct 16 15:26:39.403: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-jkfg5 to have phase Bound
Oct 16 15:26:39.419: INFO: PersistentVolumeClaim pvc-jkfg5 found and phase=Bound (15.47811ms)
Oct 16 15:26:39.449: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-87dwp to have phase Bound
Oct 16 15:26:39.463: INFO: PersistentVolumeClaim pvc-87dwp found and phase=Bound (13.680898ms)
STEP: Creating pod to attach PVs to the node
Oct 16 15:28:08.033: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:08.507: INFO: stderr: ""
Oct 16 15:28:08.507: INFO: stdout: ""
Oct 16 15:28:08.609: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:08.917: INFO: stderr: ""
Oct 16 15:28:08.917: INFO: stdout: ""
Oct 16 15:28:09.019: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-n68rp --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:09.342: INFO: stderr: ""
Oct 16 15:28:09.342: INFO: stdout: ""
Oct 16 15:28:09.432: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:09.760: INFO: stderr: ""
Oct 16 15:28:09.760: INFO: stdout: ""
Oct 16 15:28:09.847: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:10.164: INFO: stderr: ""
Oct 16 15:28:10.164: INFO: stdout: ""
Oct 16 15:28:10.259: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-qm7w8 --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:10.576: INFO: stderr: ""
Oct 16 15:28:10.576: INFO: stdout: ""
Oct 16 15:28:10.681: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:11.000: INFO: stderr: ""
Oct 16 15:28:11.000: INFO: stdout: ""
Oct 16 15:28:11.086: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:11.383: INFO: stderr: ""
Oct 16 15:28:11.383: INFO: stdout: ""
Oct 16 15:28:11.486: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-jslwg --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:11.782: INFO: stderr: ""
Oct 16 15:28:11.782: INFO: stdout: ""
Oct 16 15:28:11.888: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume1/emptyFile.txt'
Oct 16 15:28:12.207: INFO: stderr: ""
Oct 16 15:28:12.207: INFO: stdout: ""
Oct 16 15:28:12.315: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume2/emptyFile.txt'
Oct 16 15:28:12.634: INFO: stderr: ""
Oct 16 15:28:12.634: INFO: stdout: ""
Oct 16 15:28:12.778: INFO: Running '/Users/pshahzeb/k8s/kubernetes_2/_output/bin/kubectl --server=https://10.192.55.64 --kubeconfig=/tmp/kube199.json exec pvc-tester-mcqqq --namespace=e2e-tests-vcp-performance-lfrbk -- /bin/touch /mnt/volume3/emptyFile.txt'
Oct 16 15:28:13.113: INFO: stderr: ""
Oct 16 15:28:13.113: INFO: stdout: ""
STEP: Deleting pods
Oct 16 15:28:13.113: INFO: Deleting pod "pvc-tester-n68rp" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:13.157: INFO: Wait up to 5m0s for pod "pvc-tester-n68rp" to be fully deleted
Oct 16 15:28:53.195: INFO: Deleting pod "pvc-tester-qm7w8" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:28:53.224: INFO: Wait up to 5m0s for pod "pvc-tester-qm7w8" to be fully deleted
Oct 16 15:29:35.246: INFO: Deleting pod "pvc-tester-jslwg" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:29:35.279: INFO: Wait up to 5m0s for pod "pvc-tester-jslwg" to be fully deleted
Oct 16 15:30:07.312: INFO: Deleting pod "pvc-tester-mcqqq" in namespace "e2e-tests-vcp-performance-lfrbk"
Oct 16 15:30:07.357: INFO: Wait up to 5m0s for pod "pvc-tester-mcqqq" to be fully deleted
Oct 16 15:31:03.595: INFO: Volume are successfully detached from all the nodes: map[kubernetes-node1:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01aaa147-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ae1953-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b03dec-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node2:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b2ea3b-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b76412-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01b8de3d-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node3:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01bd6a83-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c1b249-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c53dd9-b2c1-11e7-aeb5-0050569c38f9.vmdk] kubernetes-node4:[[vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01c941ba-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01caec5e-b2c1-11e7-aeb5-0050569c38f9.vmdk [vsanDatastore] 1874c359-f300-a0cc-fd7e-02002a623c85/kubernetes-dynamic-pvc-01ce2be9-b2c1-11e7-aeb5-0050569c38f9.vmdk]]
STEP: Deleting the PVCs
Oct 16 15:31:03.595: INFO: Deleting PersistentVolumeClaim "pvc-jwmql"
Oct 16 15:31:03.641: INFO: Deleting PersistentVolumeClaim "pvc-jhrg7"
Oct 16 15:31:03.681: INFO: Deleting PersistentVolumeClaim "pvc-lvvkl"
Oct 16 15:31:03.724: INFO: Deleting PersistentVolumeClaim "pvc-bgkkc"
Oct 16 15:31:03.771: INFO: Deleting PersistentVolumeClaim "pvc-qt2lv"
Oct 16 15:31:03.833: INFO: Deleting PersistentVolumeClaim "pvc-pgs9s"
Oct 16 15:31:03.887: INFO: Deleting PersistentVolumeClaim "pvc-8h942"
Oct 16 15:31:04.047: INFO: Deleting PersistentVolumeClaim "pvc-phtvg"
Oct 16 15:31:04.089: INFO: Deleting PersistentVolumeClaim "pvc-ldv2f"
Oct 16 15:31:04.153: INFO: Deleting PersistentVolumeClaim "pvc-4v9hf"
Oct 16 15:31:04.211: INFO: Deleting PersistentVolumeClaim "pvc-jkfg5"
Oct 16 15:31:04.263: INFO: Deleting PersistentVolumeClaim "pvc-87dwp"
Oct 16 15:31:04.317: INFO: Average latency for below operations
Oct 16 15:31:04.317: INFO: Creating 12 PVCs and waiting for bound phase: 30576919 microseconds
Oct 16 15:31:04.317: INFO: Creating 4 Pod: 97668230 microseconds
Oct 16 15:31:04.317: INFO: Deleting 4 Pod and waiting for disk to be detached: 154930158 microseconds
Oct 16 15:31:04.317: INFO: Deleting 12 PVCs: 660074 microseconds
[AfterEach] [sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 16 15:31:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-vcp-performance-lfrbk" for this suite.
Oct 16 15:31:19.156: INFO: namespace: e2e-tests-vcp-performance-lfrbk, resource: bindings, ignored listing per whitelist
Oct 16 15:31:19.297: INFO: namespace e2e-tests-vcp-performance-lfrbk deletion completed in 14.690943637s
• [SLOW TEST:914.654 seconds]
[sig-storage] vcp-performance
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
vcp performance tests
/Users/pshahzeb/k8s/kubernetes_2/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_perf.go:99
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 16 15:31:19.305: INFO: Running AfterSuite actions on all node
Oct 16 15:31:19.305: INFO: Running AfterSuite actions on node 1
Ran 1 of 706 Specs in 914.851 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 705 Skipped PASS
Ginkgo ran 1 suite in 15m15.380170791s
Test Suite Passed
2017/10/16 15:31:19 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=vcp-performance' finished in 15m15.901302911s
2017/10/16 15:31:19 e2e.go:81: Done
```
</details>
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Deduplicate RC/RS controller code.
The code was already 99% similar between RC and RS. This is a wild idea to try to deduplicate the two controllers in a type-safe manner without adding tons of boilerplate, and without using code generation.
They are still separate resources and separate worker pools. This is a refactor that isn't intended to change any behavior.
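One way to see how this can be type-safe without code generation: a ReplicationController converts losslessly into a ReplicaSet (a map selector becomes MatchLabels), so a single sync implementation can operate on the ReplicaSet shape. A rough conversion sketch; the PR's actual adapter plumbing is more involved:
```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rcToRS converts a ReplicationController into the equivalent ReplicaSet:
// the map selector becomes a MatchLabels selector and the pod template is
// carried over, so one sync loop can serve both resources.
func rcToRS(rc *corev1.ReplicationController) *appsv1.ReplicaSet {
	rs := &appsv1.ReplicaSet{ObjectMeta: rc.ObjectMeta}
	rs.Spec.Replicas = rc.Spec.Replicas
	rs.Spec.Selector = &metav1.LabelSelector{MatchLabels: rc.Spec.Selector}
	if rc.Spec.Template != nil {
		rs.Spec.Template = *rc.Spec.Template
	}
	return rs
}

func main() {
	replicas := int32(3)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"app": "demo"},
			Template: &corev1.PodTemplateSpec{},
		},
	}
	rs := rcToRS(rc)
	fmt.Println(rs.Name, *rs.Spec.Replicas, rs.Spec.Selector.MatchLabels)
}
```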
```release-note
ReplicationController now shares its underlying controller implementation with ReplicaSet to reduce the maintenance burden going forward. However, they are still separate resources and there should be no externally visible effects from this change.
```
ref #49429
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
don't check in mounter binary
```release-note
GCI mounter is moved from the manifests tarball to the server tarball.
```
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add ephemeral storage e2e tests
Add e2e tests of limitrange/quota/downward_api for local ephemeral storage
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463
**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
**Release note**:
```release-note
Add limitrange/resourcequota/downward_api e2e tests for local ephemeral storage
```
/assign @jingxu97
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
E2E scale test for vSphere Cloud Provider Volume lifecycle operations
This PR adds an E2E test for vSphere Cloud Provider which will create/attach/detach/delete volumes at scale, with multiple threads, based on user-configurable values for the number of volumes, volumes per pod, and number of threads. (Since this is a scale test, the number of threads should be low; threads are only used to speed up the operation.)
The test performs the following tasks.
1. Create Storage Classes of 4 categories (Default, SC with Non Default Datastore, SC with SPBM Policy, SC with VSAN Storage Capabilities).
2. Read VCP_SCALE_VOLUME_COUNT from the system environment.
3. Launch VCP_SCALE_INSTANCES goroutines for creating VCP_SCALE_VOLUME_COUNT volumes. Each goroutine is responsible for creating/attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes (see the sketch after this entry).
4. Read VCP_SCALE_VOLUMES_PER_POD from the system environment. Each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the goroutines have completed, delete all the pods and volumes.
Which issue this PR fixes
fixes vmware#291
```release-note
None
```
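A minimal sketch of the fan-out in step 3 (function and parameter names are invented for illustration; the real test also records per-phase latencies):
```go
package storage

import "sync"

// createVolumesInParallel splits the total volume count across the
// configured number of goroutines, each creating (and attaching) its
// share. provision is a hypothetical worker.
func createVolumesInParallel(volumeCount, instances int, provision func(n int)) {
	var wg sync.WaitGroup
	perInstance := volumeCount / instances
	for i := 0; i < instances; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			provision(perInstance)
		}()
	}
	wg.Wait()
}
```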
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add shyamjvs to test/OWNERS
I've been reviewing quite a few PRs recently and have reviewed many in the past. I also have >80 commits in this code path (`git log test | grep "shyamjvs@google.com"`) touching various parts, including e2e/framework, utils, perftype, kubemark, and e2e fixes from other SIGs (mostly in regard to scalability).
/cc @gmarek @spiffxp @krzyzacy @kubernetes/sig-testing-misc
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adding e2e test to verify volume attach status after master kubelet restart
**What this PR does / why we need it**:
This PR adds a test to verify that a volume remains attached after the kubelet is restarted on the master node.
**Which issue this PR fixes**:
fixes vmware#274
**Special notes for your reviewer**:
This test does not run as part of the existing sig-storage test grid. It has been tested internally at VMware.
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
-get
go get -u kubetest if old or not installed (default true)
-old duration
Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56: Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57: The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58: The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022 29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs
Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command: curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout: ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s
• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
verify volume remains attached after master kubelet restart
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1
Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS
```
Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **deprecated** group clients are still in use; replace them with versioned group clients.
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
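The direction of the change, sketched with the pod client (the same substitution applies to the other groups; the `List` signature matches the client-go vintage vendored here):
```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listPods calls the versioned group client (CoreV1) instead of the
// deprecated unversioned accessor (client.Core()).
func listPods(client kubernetes.Interface, ns string) error {
	// Before (deprecated): client.Core().Pods(ns).List(metav1.ListOptions{})
	_, err := client.CoreV1().Pods(ns).List(metav1.ListOptions{})
	return err
}
```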
Automatic merge from submit-queue (batch tested with PRs 54493, 52501, 55172, 54780, 54819). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add integration test for deployment rolling update, rollback, rollover
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
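A condensed sketch of the tolerant behavior (simplified from the real helper; logging and error-inspection details may differ):
```go
package garbagecollector

import (
	"github.com/golang/glog"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// getDiscoveredResources logs a partial discovery failure and keeps
// whatever was discovered, so the periodic resync can retry the broken
// groups later instead of crashing the controller manager.
func getDiscoveredResources(d discovery.DiscoveryInterface) []*metav1.APIResourceList {
	resources, err := d.ServerPreferredResources()
	if err != nil {
		// Tolerate the failure instead of aborting; the partial result is
		// still usable for garbage collection.
		glog.Errorf("failed to discover some resource groups: %v", err)
	}
	return resources
}
```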
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Removes 'rwx' permissions for global users
- the tests make an assumption that the permissions on the /tmp dir have not
been altered
Signed-off-by: Brenda Chan <brchan@pivotal.io>
**What this PR does / why we need it**:
This PR modifies a conformance test that checks the file permissions when the `/tmp` dir is mounted.
The current tests assume that the permissions on the `/tmp` dir on the host system have not been altered. We removed the check that global users need `rwx`, so the tests now only check for `dtrwxrwx`
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: N/A
**Special notes for your reviewer**: N/A
**Release note**:
```release-note
NONE
```
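The relaxed assertion amounts to a prefix check along these lines (a sketch under the assumption that the test compares mode strings; names are invented):
```go
package e2e

import "strings"

// tmpModeAcceptable requires the directory bit, sticky bit, and rwx for
// user and group, but makes no claim about the world bits, which hosts
// may have tightened. The layout mirrors the "dtrwxrwx" convention
// quoted above.
func tmpModeAcceptable(mode string) bool {
	return strings.HasPrefix(mode, "dtrwxrwx")
}
```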
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Refactor kube-scheduler config API, command, and server setup
Refactor the kube-scheduler configuration API, command setup, and server setup according to the guidelines established in #32215 and using the kube-proxy refactor (#34727) as a model of a well factored component adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing modality derived from bools and string emptiness checks with an explicit AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
Fixes https://github.com/kubernetes/kubernetes/issues/52428.
@kubernetes/api-reviewers
@kubernetes/sig-cluster-lifecycle-pr-reviews
@kubernetes/sig-scheduling-pr-reviews
/cc @ncdc @timothysc @bsalamat
```release-note
The kube-scheduler command now supports a `--config` flag which is the location of a file containing a serialized scheduler configuration. Most other kube-scheduler flags are now deprecated.
```
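For the first bullet, a condensed sketch of what an explicit source hierarchy looks like (field names follow the componentconfig types this refactor describes; consult the real API for the authoritative definition):
```go
package config

// Exactly one branch is set at each level, replacing mode inference from
// bools and empty-string checks.
type SchedulerAlgorithmSource struct {
	Policy   *SchedulerPolicySource // policy-based source (file or ConfigMap)
	Provider *string                // name of a built-in algorithm provider
}

type SchedulerPolicySource struct {
	File      *SchedulerPolicyFileSource
	ConfigMap *SchedulerPolicyConfigMapSource
}

type SchedulerPolicyFileSource struct {
	Path string
}

type SchedulerPolicyConfigMapSource struct {
	Namespace string
	Name      string
}
```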
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
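The shape of the fix, sketched (type and constructor names are abbreviated stand-ins for the real controllers):
```go
package bootstrap

import (
	clientset "k8s.io/client-go/kubernetes"

	"k8s.io/kubernetes/pkg/util/metrics"
)

// Signer stands in for the real BootstrapSigner type in this sketch.
type Signer struct{}

// NewSigner surfaces the rate-limiter metric registration error to the
// caller instead of dropping it, so a controller with broken metrics
// never starts. Construction details are elided.
func NewSigner(cl clientset.Interface) (*Signer, error) {
	if cl.CoreV1().RESTClient().GetRateLimiter() != nil {
		if err := metrics.RegisterMetricAndTrackRateLimiterUsage(
			"bootstrap_signer", cl.CoreV1().RESTClient().GetRateLimiter()); err != nil {
			return nil, err // previously ignored
		}
	}
	return &Signer{}, nil
}
```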
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
This PR adds e2e tests for Node Autoprovisioning: …
- should create new node if there is no node for node selector
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
delete if-else branch
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The if-else branch is redundant.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Migrate pod relevant e2e tests to sig node
**What this PR does / why we need it**:
Migrate pod relevant e2e tests to sig-node.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Ref Umbrella issue #49161
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Refactor the kube-scheduler configuration API, command setup, and server
setup according to the guidelines established in #32215 and using the
kube-proxy refactor (#34727) as a model of a well factored component
adhering to said guidelines.
* Config API: clarify meaning and use of algorithm source by replacing
modality derived from bools and string emptiness checks with an explicit
AlgorithmSource type hierarchy.
* Config API: consolidate client connection config with common structs.
* Config API: split and simplify healthz/metrics server configuration.
* Config API: clarify leader election configuration.
* Config API: improve defaulting.
* CLI: deprecate all flags except `--config`.
* CLI: port all flags to new config API.
* CLI: refactor to match kube-proxy Cobra command style.
* Server: refactor away configurator.go to clarify application wiring.
* Server: refactor to more clearly separate wiring/setup from running.
Fixes #52428.
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Remove unused constant
**What this PR does / why we need it**:
This constant is never used, so we can remove it safely.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55061, 55157, 55231). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Adds e2e tests for Node Autoprovisioning:
Adds e2e tests for Node Autoprovisioning:
- shouldn't add new node group if not needed
- shouldn't scale up if cores limit too low, should scale up after limit is changed
Automatic merge from submit-queue (batch tested with PRs 55114, 52976, 54871, 55122, 55140). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Don't share nodePort service in session affinity tests
**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/54524, https://github.com/kubernetes/kubernetes/issues/54571.
Spent some time digging into it today; found this test is flaky mostly because it sends out service requests before kube-proxy reacts to the service session affinity update, hence multiple endpoints respond instead of one. It is more flaky in alpha CIs, probably due to different test sequences.
This PR creates a separate service with `sessionAffinity=ClientIP` so there isn't a race between the test starting and kube-proxy reacting. On the other hand, it also seems inappropriate to tweak the `config.NodePortService`, which is shared by other networking tests.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # (will mark them fixed later).
**Special notes for your reviewer**:
/assign @m1093782566 @bowei
cc @spiffxp
**Release note**:
```release-note
NONE
```
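A sketch of the dedicated service (name, selector, and ports are placeholders, not the test's actual values):
```go
package network

import (
	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityNodePortService builds its own NodePort service with ClientIP
// affinity, leaving the shared config.NodePortService untouched.
func affinityNodePortService(ns string) *v1.Service {
	return &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport", Namespace: ns},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,
			SessionAffinity: v1.ServiceAffinityClientIP,
			Selector:        map[string]string{"app": "netexec"},
			Ports: []v1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
}
```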
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
e2e-node: the value of bestEffortCgroup is wrong
Signed-off-by: yanxuean <yan.xuean@zte.com.cn>
**What this PR does / why we need it**:
The value of bestEffortCgroup is wrong in e2e-node. The test case is actually invalid.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs
The HPA e2es failed to actually set `apiVersion` on the created HPAs, which previously was ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).
Additionally, the `apps` resources have their own version of scale. Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.
Together, these broke some of the HPA e2es.
Fixes #54574
```release-note
NONE
```
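What the e2es were missing, sketched with the autoscaling/v1 types (names here are placeholders):
```go
package autoscaling

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleHPA sets a valid apiVersion on scaleTargetRef, since the
// polymorphic scale client resolves the target by group-version.
func simpleHPA(ns string, minReplicas, maxReplicas int32) *autoscalingv1.HorizontalPodAutoscaler {
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "test-hpa", Namespace: ns},
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				APIVersion: "apps/v1beta2", // previously empty and silently ignored
				Kind:       "Deployment",
				Name:       "test-deployment",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: maxReplicas,
		},
	}
}
```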
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
move KubeProxyConfiguration out of componentconfig API group
**What this PR does / why we need it**:
move KubeProxyConfiguration out of componentconfig API group
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53577
**Special notes for your reviewer**:
/cc @thockin @ncdc
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 55034, 55068). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Clarify what each "version" means.
Some folks were getting confused by this output.
Fixes #54821
```release-note
NONE
```
/area conformance
/sig architecture
/assign @timothysc @WilliamDenniss
This commit fixes an issue where, in clusters that have FQDNs as node names,
one of the scheduling predicates tests fails because it tries to run a
pod with a name that violates DNS-1123 rules. As an example, one such pod name
could look like "filler-pod-kube-node-0.kubelet.mesos".
Signed-off-by: Paulo Pires <pjpires@gmail.com>
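A minimal sketch of the naming idea (hashing is an assumption here; the commit may use a different scheme, such as a random suffix):
```go
package scheduling

import (
	"crypto/sha256"
	"fmt"
)

// fillerPodName derives the generated pod name from a hash of the node
// name instead of embedding the node name verbatim, so an FQDN like
// "kube-node-0.kubelet.mesos" cannot leak invalid characters into the
// name. Truncated to keep the name short.
func fillerPodName(nodeName string) string {
	return fmt.Sprintf("filler-pod-%x", sha256.Sum256([]byte(nodeName)))[:40]
}
```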