Commit Graph

1305 Commits (b0dd166fa300e5e2181b344d150b7706abb932aa)

Author SHA1 Message Date
Mikkel Oscar Lyderik Larsen 3836857229
e2e: Only create PSP if RBAC is enabled
Using PSP in e2e tests depends on RBAC being enabled in the cluster,
so PSP should only be used when RBAC is.
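For illustration, a minimal Go sketch of the kind of gate this implies (an assumed approach, not the framework's actual code): check whether the API server serves the RBAC group before creating the PSP fixtures.

```go
package e2eutil

import "k8s.io/client-go/discovery"

// rbacEnabled reports whether the rbac.authorization.k8s.io API group is
// served by the cluster; PSP setup can be skipped when it is not.
func rbacEnabled(dc discovery.DiscoveryInterface) bool {
	groups, err := dc.ServerGroups()
	if err != nil {
		return false
	}
	for _, g := range groups.Groups {
		if g.Name == "rbac.authorization.k8s.io" {
			return true
		}
	}
	return false
}
```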
2017-11-27 01:11:38 +01:00
supereagle 032416c75d use core client with explicit version
fix more usage of deprecated core client
2017-11-25 08:14:10 +08:00
supereagle 9c02d7e38c use extensions client with explicit version 2017-11-22 21:18:14 +08:00
Kubernetes Submit Queue 61792ef482
Merge pull request #55786 from porridge/debug-e2e-pod-wait
Automatic merge from submit-queue (batch tested with PRs 54316, 53400, 55933, 55786, 55794). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Improve messages around waiting for pods.

**What this PR does / why we need it**:

This is a step towards solving #55785

**Release note**:
```release-note
NONE
```
2017-11-21 15:04:31 -08:00
Kubernetes Submit Queue d4724d7e43
Merge pull request #55056 from porridge/typo-percentil
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix a typo.

**Release note**:
```release-note
NONE
```
2017-11-20 01:40:50 -08:00
andyzhangx 310168c1d2 fix CreateVolume: search mode for Dedicated kind 2017-11-19 11:16:50 +00:00
Kubernetes Submit Queue 87d45a54bd
Merge pull request #55940 from shyamjvs/reduce-spam-from-resource-gatherer
Automatic merge from submit-queue (batch tested with PRs 55233, 55927, 55903, 54867, 55940). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Control logs verbosity in resource gatherer

PR https://github.com/kubernetes/kubernetes/pull/53541 added some logging in resource gatherer which is a bit too verbose for normal purposes.
As a result, we're seeing a lot of spam in our large cluster performance tests (e.g - https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-scalability/8046/build-log.txt)

This PR makes the verbosity of those logs controllable through an option. It's off by default, but we turn it on for the GPU test to preserve the existing behavior there.
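A minimal sketch of such an option, assuming a hypothetical `PrintVerboseLogs` field rather than the gatherer's actual one:

```go
package gatherer

import "log"

// Options gates the per-probe logging added in #53541 behind a flag that
// is off by default. The field name here is assumed for illustration.
type Options struct {
	PrintVerboseLogs bool
}

// logf emits a probe log line only when verbose logging was requested,
// e.g. by the GPU test that wants the old behavior.
func (o *Options) logf(format string, args ...interface{}) {
	if o.PrintVerboseLogs {
		log.Printf(format, args...)
	}
}
```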

/cc @jiayingz @mindprince
2017-11-18 12:26:18 -08:00
Kubernetes Submit Queue 2d972c19bf
Merge pull request #55737 from mindprince/update-nvidia-urls
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update URLs for nvidia gpu device plugin and nvidia driver installer.

Device plugin is now an addon and its manifest is in kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains the device plugin.

This is needed after https://github.com/kubernetes/kubernetes/pull/54826 and https://github.com/GoogleCloudPlatform/container-engine-accelerators/pull/25

**Release note**:
```release-note
NONE
```

/sig scheduling
2017-11-18 09:36:05 -08:00
Shyam Jeedigunta fce28995e1 Control logs verbosity in resource gatherer 2017-11-17 13:03:32 +01:00
Kubernetes Submit Queue ebd3d68039
Merge pull request #55831 from Random-Liu/rename-log-dump-env
Automatic merge from submit-queue (batch tested with PRs 55392, 55491, 51914, 55831, 55836). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`.

For https://github.com/kubernetes/features/issues/286.

Rename `SYSTEMD_SERVICES` to `LOG_DUMP_SYSTEMD_SERVICES`. test-infra disables log dump in our e2e framework and uses different log dump logic (https://github.com/kubernetes/test-infra/blob/master/kubetest/e2e.go#L480-L497), so the flags we added in https://github.com/kubernetes/kubernetes/pull/55288 will not work in test-infra.

Fortunately, test-infra is using the same script `cluster/log-dump/log-dump.sh`, so we can still configure systemd services by setting the environment variable globally.

The original environment variable name is too general to set globally, so change it to a more specific name.
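A minimal Go sketch of reading the renamed variable (the default value here is illustrative; the real default lives in `cluster/log-dump/log-dump.sh`, which is a shell script):

```go
package logdump

import (
	"os"
	"strings"
)

// systemdServices returns the services whose journald logs should be
// dumped, taken from the globally settable environment variable.
func systemdServices() []string {
	v := os.Getenv("LOG_DUMP_SYSTEMD_SERVICES")
	if v == "" {
		v = "docker" // assumed default for illustration
	}
	return strings.Split(v, ",")
}
```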

**Release note**:

```release-note
none
```
2017-11-17 00:18:25 -08:00
Shyam Jeedigunta 1ae56bbe2b Set resource-gathering and probe-duration period for kubemark 2017-11-16 12:02:56 +01:00
Lantao Liu e504e5a316 Wait for server resources. 2017-11-16 01:38:35 +00:00
Lantao Liu 0085e2208d Rename log-dump env to `LOG_DUMP_SYSTEMD_SERVICES`. 2017-11-16 00:41:27 +00:00
Kubernetes Submit Queue ded83878c1
Merge pull request #55820 from shyamjvs/restore-resource-gatherer-pollperiod-default
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Restore default polling period of resource-gatherer

Fixes https://github.com/kubernetes/kubernetes/issues/55818

/cc @jiayingz @mindprince
2017-11-15 16:03:06 -08:00
Shyam Jeedigunta a350825612 Restore default polling period of resource-gatherer 2017-11-15 23:15:28 +01:00
Kubernetes Submit Queue cbdd18eee9
Merge pull request #55484 from bskiba/multizone-size-e2e
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Support multizone clusters in GCE and GKE e2e tests

**What this PR does / why we need it**:
For multi-zone clusters we can't rely on the zone parameter when fetching information on Instance Groups. Instead, we first fetch the zone the group is in and use it in subsequent calls.

Note that the current version of the code does not work for multi-zone clusters at all.
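A minimal sketch of the lookup order (the map below stands in for the cloud provider call that reports which zone an instance group lives in; in the real test this is a GCE API call, not a static map):

```go
package main

import "fmt"

// groupZones stands in for the provider's group-to-zone lookup.
var groupZones = map[string]string{
	"ig-a": "us-central1-a",
	"ig-b": "us-central1-b",
}

// zoneForGroup resolves the group's own zone first instead of relying on
// the cluster-wide zone parameter, which is wrong for multi-zone clusters.
func zoneForGroup(group string) (string, error) {
	z, ok := groupZones[group]
	if !ok {
		return "", fmt.Errorf("instance group %q not found in any zone", group)
	}
	return z, nil
}

func main() {
	zone, _ := zoneForGroup("ig-b")
	fmt.Println("querying instances in zone", zone)
}
```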

**Release note**:
```
NONE
```
2017-11-15 12:58:11 -08:00
Marcin Owsiany 9b6590e7ae Improve messages around waiting for pods. 2017-11-15 11:29:52 +01:00
Rohit Agarwal 3ac94a57eb Update URLs for nvidia gpu device plugin and nvidia driver installer.
Device plugin is now an addon and its manifest is in
kubernetes/kubernetes. The manifest on
GoogleCloudPlatform/container-engine-accelerators no longer contains
the device plugin.
2017-11-14 15:31:22 -08:00
Dane LeBlanc 2827b7ffb7 Add brackets around IPv6 addrs in e2e test IP:port endpoints
There are several locations in the e2e tests where endpoints of the
form IP:port use IPv6 addresses directly, without surrounding brackets.
Brackets are required around IPv6 addresses in this case, in order to
distinguish the colons in the IPv6 address from the colon immediately
preceding the port.

Also, wherever the curl command might be used with an IPv6 address
surrounded in brackets, the "-g" argument is added to the curl
command line arguments so that the brackets can be interpreted
correctly.

fixes #52746
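For illustration, Go's standard library performs exactly this bracketing, which is one way such endpoint strings can be built uniformly:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort adds brackets around IPv6 literals so the address
	// colons cannot be confused with the port separator.
	fmt.Println(net.JoinHostPort("10.0.0.1", "8080"))   // 10.0.0.1:8080
	fmt.Println(net.JoinHostPort("fd00::1234", "8080")) // [fd00::1234]:8080
}
```

With curl, the `-g` (`--globoff`) flag keeps the brackets from being parsed as glob ranges, which is why it is added alongside the bracketed addresses.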
2017-11-14 10:55:09 -05:00
Kubernetes Submit Queue ea66c00522
Merge pull request #54509 from vmware/node_poweroff_test
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

E2E test to verify pod failover during node power-off

**What this PR does / why we need it**:

This PR adds a test to verify volume status after the node where the pod was provisioned is powered off and the pod fails over to a different node.

The test performs the following tasks:

1. Create a StorageClass
2. Create a PVC with the StorageClass
3. Create a Deployment with 1 replica, using the PVC
4. Verify the pod got provisioned on a node
5. Verify the volume is attached to the node
6. Power off the node where pod got provisioned
7. Verify the pod got provisioned on a different node
8. Verify the volume is attached to the new node
9. Verify the volume is detached from the previous node
10. Power on the previous node
11. Delete the Deployment
12. Delete the PVC
13. Delete the StorageClass

**Which issue this PR fixes**:

Fixes https://github.com/vmware/kubernetes/issues/272

**Special notes for your reviewer**:

Test logs:
```
# go run hack/e2e.go --check-version-skew=false --v --test --test_args='--ginkgo.focus=Node\sPoweroff'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build212295472/command-line-arguments/_obj/exe/e2e:
  -get
                go get -u kubetest if old or not installed (default true)
  -old duration
                Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/24 11:48:28 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/24 11:48:28 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/24 11:48:28 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/24 11:48:28 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/24 11:48:28 e2e.go:77: Calling kubetest --check-version-skew=false --v --test --test_args=--ginkgo.focus=Node\sPoweroff...
2017/10/24 11:48:28 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/24 11:48:28 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 350.700421ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1627+54fc02df4a3a2a", GitCommit:"54fc02df4a3a2a12e14fb72d84a1aaa658ba6689", GitTreeState:"clean", BuildDate:"2017-10-24T18:33:37Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.0-alpha.1.1437+ba66fcb63de9e9", GitCommit:"ba66fcb63de9e9b72e2ccf8b823df33a22df0522", GitTreeState:"clean", BuildDate:"2017-10-20T07:16:05Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
2017/10/24 11:48:28 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 315.334518ms
2017/10/24 11:48:28 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff
Conformance test: not doing test setup.
Oct 24 11:48:30.391: INFO: Overriding default scale value of zero to 1
Oct 24 11:48:30.391: INFO: Overriding default milliseconds value of zero to 5000
I1024 11:48:30.637436     409 e2e.go:378] Starting e2e run "ed9fdfc7-b8eb-11e7-a595-0050569c26b8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1508870909 - Will randomize all specs
Will run 1 of 717 specs
 
Oct 24 11:48:30.678: INFO: >>> kubeConfig: /root/.kube/config
Oct 24 11:48:30.685: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 24 11:48:30.719: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 24 11:48:30.857: INFO: 17 / 17 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 24 11:48:30.857: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
Oct 24 11:48:30.863: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 24 11:48:30.863: INFO: Dumping network health container logs from all nodes...
Oct 24 11:48:30.877: INFO: Client version: v1.9.0-alpha.1.1627+54fc02df4a3a2a
Oct 24 11:48:30.879: INFO: Server version: v1.9.0-alpha.1.1437+ba66fcb63de9e9
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 24 11:48:30.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:64
Oct 24 11:48:30.984: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
STEP: Creating a Storage Class
STEP: Creating PVC using the Storage Class
STEP: Waiting for PVC to be in bound phase
Oct 24 11:48:31.141: INFO: Waiting up to 5m0s for PersistentVolumeClaim pvc-zxz56 to have phase Bound
Oct 24 11:48:31.150: INFO: PersistentVolumeClaim pvc-zxz56 found but phase is Pending instead of Bound.
Oct 24 11:48:33.155: INFO: PersistentVolumeClaim pvc-zxz56 found and phase=Bound (2.013403698s)
STEP: Creating a Deployment
I1024 11:48:33.180161     409 deployment_util.go:254] Waiting deployment "deployment-ef6b820e-b8eb-11e7-a595-0050569c26b8" to complete
Oct 24 11:48:33.192: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1beta1.DeploymentCondition(nil), CollisionCount:(*int32)(nil)}
Oct 24 11:48:35.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:37.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:39.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:41.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:43.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:45.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:47.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:49.198: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:51.196: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
Oct 24 11:48:53.197: INFO: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63644467713, nsec:0, loc:(*time.Location)(0x5db10c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}, CollisionCount:(*int32)(nil)}
STEP: Get pod from the deployement
STEP: Verify disk is attached to the node: kubernetes-node5
STEP: Power off the node: kubernetes-node5
Oct 24 11:49:07.337: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:17.336: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:27.340: INFO: Waiting for pod to be failed over from "kubernetes-node5"
Oct 24 11:49:37.340: INFO: The pod has been failed over from "kubernetes-node5" to "kubernetes-node7"
STEP: Waiting for disk to be attached to the new node: kubernetes-node7
Oct 24 11:49:47.534: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully attached to "kubernetes-node7".
STEP: Waiting for disk to be detached from the previous node: kubernetes-node5
Oct 24 11:49:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:07.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:17.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:27.733: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:47.723: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:50:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:07.710: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:17.719: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:27.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:37.717: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:47.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:51:57.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:07.724: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:17.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:27.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:37.716: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:47.709: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:52:57.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:07.715: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:17.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:27.714: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:37.713: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:47.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:53:57.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:07.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:17.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:27.712: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:37.707: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:47.698: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:54:57.705: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:07.711: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:17.699: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:27.702: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:37.704: INFO: Waiting for Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" to detach from "kubernetes-node5".
Oct 24 11:55:47.703: INFO: Volume "[vsanDatastore] 165fee59-08d0-b42e-e5c4-020047ab7bb1/kubernetes-dynamic-pvc-ee347cf2-b8eb-11e7-8558-005056a2ed7b.vmdk" has successfully detached from "kubernetes-node5".
STEP: Power on the previous node: kubernetes-node5
Oct 24 11:55:49.168: INFO: Deleting PersistentVolumeClaim "pvc-zxz56"
[AfterEach] [sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 24 11:55:49.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-node-poweroff-l245b" for this suite.
Oct 24 11:55:57.630: INFO: namespace: e2e-tests-node-poweroff-l245b, resource: bindings, ignored listing per whitelist
Oct 24 11:55:57.643: INFO: namespace e2e-tests-node-poweroff-l245b deletion completed in 8.379395732s
 
• [SLOW TEST:446.758 seconds]
[sig-storage] Node Poweroff [Feature:vsphere] [Slow] [Disruptive]
/root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume status after node power off
  /root/divyenp/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_node_poweroff.go:149
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 24 11:55:57.647: INFO: Running AfterSuite actions on all node
Oct 24 11:55:57.647: INFO: Running AfterSuite actions on node 1
 
Ran 1 of 717 Specs in 446.969 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 716 Skipped PASS
 
Ginkgo ran 1 suite in 7m27.797177022s
Test Suite Passed
2017/10/24 11:55:57 util.go:156: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Node\sPoweroff' finished in 7m28.760818768s
2017/10/24 11:55:57 e2e.go:81: Done
```
VMware Reviewers: @divyenpatel @pshahzeb 

**Release note**:

```release-note
NONE
```
2017-11-14 00:56:26 -08:00
Shaomin Chen 3db4f2b843 E2E test to verify pod failover during node power-off 2017-11-13 21:52:54 -08:00
Jiaying Zhang ae36f8ee95 Extend test/e2e/scheduling/nvidia-gpus.go to track resource usage of
installer and device plugin containers.
To support this, exports certain functions and fields in
framework/resource_usage_gatherer.go so that it can be used in any
e2e test to track any specified pod resource usage with the specified
probe interval and duration.
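A self-contained sketch of the probe loop this exposes, with a stand-in `collect` instead of the gatherer's real CPU/memory probe:

```go
package main

import (
	"fmt"
	"time"
)

// pollUsage probes resource usage every probeInterval until probeDuration
// has elapsed; collect stands in for the real per-pod probe.
func pollUsage(probeInterval, probeDuration time.Duration, collect func()) {
	ticker := time.NewTicker(probeInterval)
	defer ticker.Stop()
	deadline := time.After(probeDuration)
	for {
		select {
		case <-ticker.C:
			collect()
		case <-deadline:
			return
		}
	}
}

func main() {
	pollUsage(200*time.Millisecond, time.Second, func() { fmt.Println("probe") })
}
```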
2017-11-13 16:24:41 -08:00
Kubernetes Submit Queue 74ec8d0fe8
Merge pull request #55288 from Random-Liu/e2e-log-for-alternative-runtime
Automatic merge from submit-queue (batch tested with PRs 55283, 55461, 55288, 53970, 55487). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Support collecting log for alternative container runtime in e2e test.

Fixes https://github.com/kubernetes/kubernetes/issues/55629.

Add support to collect logs for alternative container runtime in e2e.
Example for `cri-containerd`:
```
$ go run hack/e2e.go -- --test -v --test_args="--report-dir=$PWD --container-runtime-services=cri-containerd,containerd,cri-containerd-installation"
```

```release-note
none
```

/cc @kubernetes/sig-node-pr-reviews @kubernetes/sig-testing-pr-reviews
2017-11-13 12:32:24 -08:00
Kubernetes Submit Queue fe599c7dcf
Merge pull request #54992 from porridge/perf-timing
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add performance test phase timing export.

**What this PR does / why we need it**:

First step towards allowing us to get a quick overview of test length
via perf-dash.k8s.io.

**Release note**:
```release-note
NONE
```

@kubernetes/sig-scalability-feature-requests
2017-11-11 01:39:30 -08:00
Lantao Liu 32c4295bcf Support collecting log for alternative container runtime in e2e test. 2017-11-10 18:46:48 +00:00
Beata Skiba def49db058 Support multizone clusters in GCE and GKE e2e tests 2017-11-10 15:29:15 +01:00
Marian Lobur ba313796f1 Fix influxdb e2e test failure.
In scalability testing influxdb was recently disabled, but we still
try to execute the corresponding test; as a result it fails all the time.
Skip the test if influxdb is disabled.
2017-11-10 09:16:45 +01:00
Marcin Owsiany 96089e0b79 Add performance test phase timing export. 2017-11-09 17:12:15 +01:00
Dr. Stefan Schimanski bec617f3cc Update generated files 2017-11-09 12:14:08 +01:00
Dr. Stefan Schimanski 012b085ac8 pkg/apis/core: mechanical import fixes in dependencies 2017-11-09 12:14:08 +01:00
Kubernetes Submit Queue b616dff2e6
Merge pull request #52523 from NickrenREN/ephemeral-storage-e2e
Automatic merge from submit-queue (batch tested with PRs 54773, 52523, 47497, 55356, 49429). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add ephemeral storage e2e tests

Add e2e tests of limitrange/quota/downward_api for local ephemeral storage

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #52463

**Special notes for your reviewer**:
Add e2e tests of limitrange/quota/downwardapi for local ephemeral storage
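For illustration, the kind of LimitRange object such a test exercises (the 500Mi cap is an arbitrary example value):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A per-container cap on local ephemeral storage, as a LimitRange.
	lr := v1.LimitRange{
		Spec: v1.LimitRangeSpec{
			Limits: []v1.LimitRangeItem{{
				Type: v1.LimitTypeContainer,
				Max: v1.ResourceList{
					v1.ResourceEphemeralStorage: resource.MustParse("500Mi"),
				},
			}},
		},
	}
	max := lr.Spec.Limits[0].Max[v1.ResourceEphemeralStorage]
	fmt.Println("ephemeral-storage max:", max.String())
}
```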

**Release note**:
```release-note
Add limitrange/resourcequota/downward_api  e2e tests for local ephemeral storage
```

/assign @jingxu97
2017-11-08 22:11:49 -08:00
Kubernetes Submit Queue a8b355cf92
Merge pull request #55352 from shyamjvs/remove-cluster-ip-range-hack
Automatic merge from submit-queue (batch tested with PRs 55092, 55348, 55095, 55277, 55352). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove hack for CLUSTER_IP_RANGE in e2e framework no longer needed

As discussed in https://github.com/kubernetes/test-infra/pull/5386#discussion_r149523537, we no longer need it as we're using the flag to pass the value.

/cc @krzyzacy
2017-11-08 21:18:30 -08:00
Kubernetes Submit Queue 1c9d6f53af
Merge pull request #54018 from vmware/vSphereScaleTest
Automatic merge from submit-queue (batch tested with PRs 55301, 55319, 54018, 55322, 55125). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

E2E scale test for vSphere Cloud Provider Volume lifecycle operations

This PR adds an E2E test for vSphere Cloud Provider which will create/attach/detach/delete the volumes at scale with multiple threads, based on user-configurable values for the number of volumes, volumes per pod and number of threads. (Since this is a scale test, the number of threads is kept low; it is only used to speed up the operation.)

The test performs the following tasks:

1. Create Storage Classes of 4 categories (Default, SC with Non-Default Datastore, SC with SPBM Policy, SC with VSAN Storage Capabilities).
2. Read VCP_SCALE_VOLUME_COUNT from the system environment.
3. Launch VCP_SCALE_INSTANCES go routines for creating VCP_SCALE_VOLUME_COUNT volumes. Each go routine is responsible for creating/attaching VCP_SCALE_VOLUME_COUNT/VCP_SCALE_INSTANCES volumes (see the sketch after this list).
4. Read VCP_SCALE_VOLUMES_PER_POD from the system environment. Each pod will have VCP_SCALE_VOLUMES_PER_POD volumes attached to it.
5. Once all the go routines have completed, we delete all the pods and volumes.
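A minimal sketch of the fan-out in step 3, with values hard-coded where the test reads VCP_SCALE_VOLUME_COUNT and VCP_SCALE_INSTANCES from the environment:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	volumeCount, instances := 12, 3 // stand-ins for the env-driven values
	perRoutine := volumeCount / instances

	var wg sync.WaitGroup
	for i := 0; i < instances; i++ {
		wg.Add(1)
		go func(worker int) {
			defer wg.Done()
			for v := 0; v < perRoutine; v++ {
				fmt.Printf("worker %d: create/attach volume %d\n", worker, v)
			}
		}(i)
	}
	wg.Wait() // only after all routines finish are pods/volumes deleted (step 5)
}
```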

Which issue this PR fixes:
fixes vmware#291

```release-note
None
```
2017-11-08 20:23:28 -08:00
Kubernetes Submit Queue 8fee4d1d9b
Merge pull request #53747 from vmware/bulkverify_test
Automatic merge from submit-queue (batch tested with PRs 53747, 54528, 55279, 55251, 55311). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adding e2e test to verify volume attach status after master kubelet restart

**What this PR does / why we need it**:
This PR adds a test to verify the volume remains attached after the kubelet is restarted on the master node.

**Which issue this PR fixes**:
fixes vmware#274

**Special notes for your reviewer**:
This test does not run as part of the existing sig-storage test grid. It has been tested internally at VMware.
Test logs
```
root@k8s-dev-vm-01:~/shahzeb/k8s/kubernetes# go run hack/e2e.go --check-version-skew=false -v -test --test_args='--ginkgo.focus=Volume\sAttach\sVerify'
flag provided but not defined: -check-version-skew
Usage of /tmp/go-build395888807/command-line-arguments/_obj/exe/e2e:
  -get
    	go get -u kubetest if old or not installed (default true)
  -old duration
    	Consider kubetest old if it exceeds this (default 24h0m0s)
2017/10/11 12:14:05 e2e.go:55: NOTICE: go run hack/e2e.go is now a shim for test-infra/kubetest
2017/10/11 12:14:05 e2e.go:56:   Usage: go run hack/e2e.go [--get=true] [--old=24h0m0s] -- [KUBETEST_ARGS]
2017/10/11 12:14:05 e2e.go:57:   The separator is required to use --get or --old flags
2017/10/11 12:14:05 e2e.go:58:   The -- flag separator also suppresses this message
2017/10/11 12:14:05 e2e.go:151: The kubetest binary is older than 24h0m0s.
2017/10/11 12:14:05 e2e.go:156: Updating kubetest binary...
2017/10/11 12:14:13 e2e.go:77: Calling kubetest --check-version-skew=false -v -test --test_args=--ginkgo.focus=Volume\sAttach\sVerify...
2017/10/11 12:14:13 util.go:154: Running: ./cluster/kubectl.sh --match-server-version=false version
2017/10/11 12:14:13 util.go:156: Step './cluster/kubectl.sh --match-server-version=false version' finished in 493.364761ms
2017/10/11 12:14:13 util.go:154: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.17307+d274c30f81d1c2", GitCommit:"d274c30f81d1c2d966dc950014ac90f8fad140f7", GitTreeState:"clean", BuildDate:"2017-10-11T18:57:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.5", GitCommit:"490c6f13df1cb6612e0993c4c14f2ff90f8cdbf3", GitTreeState:"clean", BuildDate:"2017-06-14T20:03:38Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
2017/10/11 12:14:14 util.go:156: Step './hack/e2e-internal/e2e-status.sh' finished in 352.041653ms
2017/10/11 12:14:14 util.go:154: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=Volume\sAttach\sVerify
Conformance test: not doing test setup.
Oct 11 12:14:15.478: INFO: Overriding default scale value of zero to 1
Oct 11 12:14:15.478: INFO: Overriding default milliseconds value of zero to 5000
I1011 12:14:15.692022   29999 e2e.go:383] Starting e2e run "5f33ad5b-aeb8-11e7-9f17-0050569c27f6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1507749254 - Will randomize all specs
Will run 1 of 709 specs

Oct 11 12:14:15.744: INFO: >>> kubeConfig: /tmp/kube204.json
Oct 11 12:14:15.751: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
Oct 11 12:14:15.861: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 11 12:14:16.067: INFO: 4 / 4 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 11 12:14:16.067: INFO: expected 0 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Oct 11 12:14:16.077: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Oct 11 12:14:16.077: INFO: Dumping network health container logs from all nodes...
Oct 11 12:14:16.083: INFO: Client version: v1.6.0-alpha.0.17307+d274c30f81d1c2
Oct 11 12:14:16.086: INFO: Server version: v1.6.5
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Volume Attach Verify [Feature:vsphere]
  verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
STEP: Creating a kubernetes client
Oct 11 12:14:16.087: INFO: >>> kubeConfig: /tmp/kube204.json
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:81
Oct 11 12:14:16.265: INFO: Waiting up to 4h0m0s for all (but 0) nodes to be schedulable
[It] verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
STEP: Creating a test vsphere volume 0
STEP: Creating pod 0 on node kubernetes-node1
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Creating a test vsphere volume 1
STEP: Creating pod 1 on node kubernetes-node2
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Creating a test vsphere volume 2
STEP: Creating pod 2 on node kubernetes-node3
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Creating a test vsphere volume 3
STEP: Creating pod 3 on node kubernetes-node4
STEP: Waiting for pod to be ready
STEP: Verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Restarting kubelet on master node
Oct 11 12:16:12.239: INFO: Restarting kubelet via ssh on host 10.192.113.70:22 with command systemctl restart kubelet
STEP: Verifying the kubelet on master node is up
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: command:   curl http://localhost:10255/healthz
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stdout:    ""
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: stderr:    "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                 Dload  Upload   Total   Spent    Left  Speed\n\r  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to localhost port 10255: Connection refused\n"
Oct 11 12:16:13.318: INFO: ssh root@10.192.113.70:22: exit code: 7
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk is attached to the pod kubernetes-node1
STEP: Deleting pod on node kubernetes-node1
Oct 11 12:16:18.538: INFO: Deleting pod "vsphere-e2e-pwjr1" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:16:18.559: INFO: Wait up to 5m0s for pod "vsphere-e2e-pwjr1" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk to be detached from the node kubernetes-node1
Oct 11 12:17:10.686: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk" appears to have successfully detached from "kubernetes-node1".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749256431387056.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk is attached to the pod kubernetes-node2
STEP: Deleting pod on node kubernetes-node2
Oct 11 12:17:11.614: INFO: Deleting pod "vsphere-e2e-vqkbp" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:11.624: INFO: Wait up to 5m0s for pod "vsphere-e2e-vqkbp" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk to be detached from the node kubernetes-node2
Oct 11 12:17:55.748: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk" appears to have successfully detached from "kubernetes-node2".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749281940603428.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk is attached to the pod kubernetes-node3
STEP: Deleting pod on node kubernetes-node3
Oct 11 12:17:56.051: INFO: Deleting pod "vsphere-e2e-fkrzb" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:17:56.069: INFO: Wait up to 5m0s for pod "vsphere-e2e-fkrzb" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk to be detached from the node kubernetes-node3
Oct 11 12:18:38.199: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk" appears to have successfully detached from "kubernetes-node3".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749305162880964.vmdk
STEP: After master restart, verify volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk is attached to the pod kubernetes-node4
STEP: Deleting pod on node kubernetes-node4
Oct 11 12:18:38.541: INFO: Deleting pod "vsphere-e2e-4cb0d" in namespace "e2e-tests-restart-master-j9x0f"
Oct 11 12:18:38.556: INFO: Wait up to 5m0s for pod "vsphere-e2e-4cb0d" to be fully deleted
STEP: Waiting for volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk to be detached from the node kubernetes-node4
Oct 11 12:19:22.672: INFO: Volume "[vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk" appears to have successfully detached from "kubernetes-node4".
STEP: Deleting volume [vsanDatastore] 8c95d659-46fa-b9a6-5e19-02002f28e688/e2e-vmdk-1507749330788801099.vmdk
[AfterEach] [sig-storage] Volume Attach Verify [Feature:vsphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Oct 11 12:19:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-restart-master-j9x0f" for this suite.
Oct 11 12:19:29.544: INFO: namespace: e2e-tests-restart-master-j9x0f, resource: bindings, ignored listing per whitelist
Oct 11 12:19:29.622: INFO: namespace e2e-tests-restart-master-j9x0f deletion completed in 6.156220683s

• [SLOW TEST:313.535 seconds]
[sig-storage] Volume Attach Verify [Feature:vsphere]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework.go:22
  verify volume remains attached after master kubelet restart
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/vsphere_volume_master_restart.go:144
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 11 12:19:29.666: INFO: Running AfterSuite actions on all node
Oct 11 12:19:29.666: INFO: Running AfterSuite actions on node 1

Ran 1 of 709 Specs in 313.923 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 708 Skipped PASS

```

Internally reviewed by VMware reviewers @divyenpatel @BaluDontu @tusharnt

**Release note**:
```
None
```
2017-11-08 19:31:03 -08:00
Kubernetes Submit Queue a701a42a82
Merge pull request #49763 from supereagle/versioned-group-clients
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

use versioned group clients from client-go

**What this PR does / why we need it**:
Some **deprecated** group clients are still used; replace them with versioned group clients.
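The shape of the change, sketched with client-go (package name is illustrative):

```go
package clientexample

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// usePods shows the versioned group client; the commented line is the
// style of deprecated unversioned accessor this PR replaces.
func usePods(cfg *rest.Config) {
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs.CoreV1().Pods("default") // versioned group client
	// _ = cs.Core().Pods("default") // deprecated accessor
}
```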

**Which issue this PR fixes**: fixes #49760

**Special notes for your reviewer**:
/assign @caesarxuchao

**Release note**:
```release-note
NONE
```
2017-11-08 17:13:27 -08:00
Shyam Jeedigunta 79fd1296da Remove hack for CLUSTER_IP_RANGE in e2e framework no longer needed 2017-11-09 00:27:22 +01:00
pshahzeb f2a01faeff Test to verify volume attach status after master kubelet restart 2017-11-07 19:34:38 -08:00
Balu Dontu 0b3e28c883 vSphere scale tests 2017-11-07 15:33:27 -08:00
wackxu 249439f6a6 remove unused constant 2017-11-07 19:23:44 +08:00
Kubernetes Submit Queue dd00bc65f9
Merge pull request #55122 from MrHohn/fix-session-affinity-e2e
Automatic merge from submit-queue (batch tested with PRs 55114, 52976, 54871, 55122, 55140). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Don't share nodePort service in session affinity tests

**What this PR does / why we need it**:
From https://github.com/kubernetes/kubernetes/issues/54524, https://github.com/kubernetes/kubernetes/issues/54571.

Spent some time digging into it today and found this test is flaky mostly because it sends out service requests before kube-proxy reacts to the service session affinity update, hence multiple endpoints respond instead of one. It is more flaky in alpha CIs, probably due to different test sequences.

This PR creates a separate service with `sessionAffinity=ClientIP` so there wouldn't be a race between the test starting and kube-proxy reacting. On the other hand, it also seems inappropriate to tweak the `config.NodePortService`, which is shared by other networking tests.
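A sketch of such a dedicated service (name and port are illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A separate NodePort service with ClientIP affinity, so the shared
	// config.NodePortService used by other networking tests stays untouched.
	svc := v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
		Spec: v1.ServiceSpec{
			Type:            v1.ServiceTypeNodePort,
			SessionAffinity: v1.ServiceAffinityClientIP,
			Ports:           []v1.ServicePort{{Port: 80}},
		},
	}
	fmt.Println("session affinity:", svc.Spec.SessionAffinity)
}
```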

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # (will mark them fixed later).

**Special notes for your reviewer**:
/assign @m1093782566 @bowei 
cc @spiffxp

**Release note**:

```release-note
NONE

```
2017-11-06 23:19:21 -08:00
supereagle b694d51842 use versioned group clients from client-go 2017-11-07 14:47:22 +08:00
Kubernetes Submit Queue 743d11fbd5
Merge pull request #55047 from nikhiljindal/ingressTest
Automatic merge from submit-queue (batch tested with PRs 55093, 54966, 55047, 54971, 54786). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Adding an e2e test for gce multi cluster ingress

Basic test that verifies that a multi-cluster ingress gets the instance groups annotation.

Ref https://github.com/kubernetes/ingress-gce/issues/71

cc @csbell @G-Harmon @nicksardo @bowei 

```release-note
NONE
```
2017-11-06 20:38:56 -08:00
Kubernetes Submit Queue 67c9e7419c
Merge pull request #54586 from DirectXMan12/bug/fix-incorrect-scale-and-hpa-gvks
Automatic merge from submit-queue (batch tested with PRs 53645, 54734, 54586, 55015, 54688). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix Incorrect Scale Subresources and HPA e2e ScaleTargetRefs

The HPA e2es failed to actually set `apiVersion` on the created HPAs, which previously was ignored. Since the polymorphic scale client was merged, this behavior is no longer tolerated (it was never correct to begin with, but it accidentally worked).

Additionally, the `apps` resources have their own version of scale.  Until `apps/v1beta1` and `apps/v1beta2` go away, we need to support those versions in the scale client.

Together, these broke some of the HPA e2es.
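For illustration, the field the e2es now have to populate (target name is an example):

```go
package main

import (
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
)

func main() {
	// With the polymorphic scale client, an empty APIVersion is no longer
	// tolerated on the HPA's scale target reference.
	ref := autoscalingv1.CrossVersionObjectReference{
		APIVersion: "apps/v1beta2",
		Kind:       "Deployment",
		Name:       "example-deployment",
	}
	fmt.Printf("scaleTargetRef: %s %s/%s\n", ref.APIVersion, ref.Kind, ref.Name)
}
```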

Fixes #54574

```release-note
NONE
```
2017-11-06 15:33:43 -08:00
nikhiljindal 2e1d61a0d5 Adding an e2e test for gce multi cluster ingress 2017-11-06 13:48:35 -08:00
MrHohn e07a9c4ce6 Don't share nodePort service in session affinity tests 2017-11-05 22:42:33 -08:00
NickrenREN 1633cdde05 Add limitrange e2e test for LocalStorageCapacityIsolation feature 2017-11-04 12:47:34 +08:00
xiangpengzhao 32675e6f62 Remove check for SubResourcePodProxyVersion and SubResourceServiceAndNodeProxyVersion 2017-11-03 23:11:09 +08:00
Marcin Owsiany c2ab5c8246 Fix a typo. 2017-11-03 13:43:32 +01:00
Solly Ross 2c9fc43294 [client-go] Add apps.Scale support to Scale client
apps/v1betaX inadvertently contains its own variant of Scale.  In
order to support scaling Deployments, ReplicaSets, etc, we need to support
these versions of Scale as well.
2017-11-02 22:20:39 -04:00
Tim Allclair 671a6aa068
PodSecurityPolicy E2E tests 2017-11-01 16:00:32 -07:00
Kubernetes Submit Queue d118e44320
Merge pull request #54686 from m1093782566/gce-alpha-fail
Automatic merge from submit-queue (batch tested with PRs 54572, 54686). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix service session affinity e2e failure cases

**What this PR does / why we need it**:

Fix service session affinity e2e failure cases - debugging...

**Which issue this PR fixes**:

xref #54571 #54524

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/sig network
2017-10-30 21:45:17 -07:00
m1093782566 5210793802 debug e2e fail cases 2017-10-31 10:10:20 +08:00
Kubernetes Submit Queue 5ad34ac60a
Merge pull request #53909 from mml/conforgen
Automatic merge from submit-queue (batch tested with PRs 54165, 53909). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add conformance test test.

Add new `test/conformance` subdir, add code to generate a list of conformance tests, and add a test that verifies the list of tests.

The intent is to move management of the definition of conformance to sig-architecture.

```release-note
NONE
```
ref. #54726
2017-10-27 17:39:25 -07:00
Matt Liggett a5967cbaf1 Add framework.ConformanceIt as the new way to declare conformance tests.
Also rewrite all existing conformance tests to use this.
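A stand-in sketch of the mechanism (not the framework's actual implementation): the wrapper tags each spec so the conformance list can be generated mechanically instead of curated by hand.

```go
package main

import "fmt"

// conformanceIt mimics the idea behind framework.ConformanceIt: it appends
// the "[Conformance]" tag before delegating to the spec runner.
func conformanceIt(text string, body func()) {
	it(text+" [Conformance]", body)
}

// it stands in for ginkgo.It.
func it(text string, body func()) {
	fmt.Println("spec:", text)
	body()
}

func main() {
	conformanceIt("should serve a basic endpoint from pods", func() {})
}
```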
2017-10-27 15:29:59 -07:00
Kubernetes Submit Queue 84284c0ba4
Merge pull request #54331 from crimsonfaith91/dfailed
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

convert testFailedDeployment e2e test to integration test

**What this PR does / why we need it**:
This PR converts a deployment e2e test named "testFailedDeployment" to an integration test.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113 

**Release note**:

```release-note
NONE
```
/assign
2017-10-27 15:19:20 -07:00
Jun Xiang Tee 21bc54508b convert testFailedDeployment e2e test to integration test 2017-10-27 13:19:50 -07:00
Kevin 4c8539cece use core client with explicit version globally 2017-10-27 15:48:32 +08:00
Kubernetes Submit Queue 51652d1c23 Merge pull request #53816 from marun/remove-federation
Automatic merge from submit-queue (batch tested with PRs 54112, 54150, 53816, 54321, 54338). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Remove federation

This PR removes the federation codebase and associated tooling from the tree.

The first commit just removes the `federation` path and should be uncontroversial. The second commit removes references and associated tooling and warrants careful review.

Requirements for merge:

- [x] Bazel jobs no longer hard-code federation as a target ([test infra #4983](https://github.com/kubernetes/test-infra/pull/4983))
- [x] `federation-e2e` jobs are not run by default for k/k

**Release note**:

```release-note
Development of Kubernetes Federation has moved to github.com/kubernetes/federation.  This move out of tree also means that Federation will begin releasing separately from Kubernetes.  The impact of this is Federation-specific behavior will no longer be included in kubectl, kubefed will no longer be released as part of Kubernetes, and the Federation servers will no longer be included in the hyperkube binary and image.
```

cc: @kubernetes/sig-multicluster-pr-reviews @kubernetes/sig-testing-pr-reviews
2017-10-26 17:07:28 -07:00
Kubernetes Submit Queue 90a6a4f1f8 Merge pull request #53569 from leblancd/v6_e2e_ext_ip
Automatic merge from submit-queue (batch tested with PRs 53000, 52870, 53569). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fallback to internal addrs in e2e tests when no external addrs available

This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.

Without this change, the e2e service-related Networking tests will always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.

This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.

fixes #53568



**What this PR does / why we need it**:
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.

Without this change, the e2e service-related Networking tests will always
panic when no cloud provider is being used, or the cloud provider does
not have external addresses configured.

This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #53568

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-26 14:18:29 -07:00
Maru Newby adc338d330 Remove all traces of federation 2017-10-26 13:37:37 -07:00
Kubernetes Submit Queue d9e529adbd Merge pull request #54105 from janetkuo/e2e-deploy-inte-label-adopted
Automatic merge from submit-queue (batch tested with PRs 54593, 54607, 54539, 54105). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Move deployment e2e test for hash label adoption to integration

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: #52113

**Special notes for your reviewer**: depends on #53918

**Release note**:

```release-note
NONE
```
2017-10-26 11:13:44 -07:00
Kubernetes Submit Queue 152e23caee Merge pull request #53760 from m1093782566/session-ci
Automatic merge from submit-queue (batch tested with PRs 53760, 48996, 51267, 54414). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix CI error for service session affinity

**What this PR does / why we need it**:

Fix CI error for service session affinity -- debug

**Which issue this PR fixes**: 

fixes #53741

**Special notes for your reviewer**:

I removed the [slow] tag so that these test cases can run in the PR jobs. We may need to add the [slow] tag back when this PR is ready to go in.

**Release note**:

```release-note
NONE
```
2017-10-25 17:37:00 -07:00
Zihong Zheng 4987314b31 Fix nil pointer panic in service LB e2e tests 2017-10-24 20:01:50 -07:00
Kubernetes Submit Queue 8d3a19229f Merge pull request #54111 from freehan/neg-test
Automatic merge from submit-queue (batch tested with PRs 54107, 54184, 54377, 54094, 54111). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add e2e tests for NEG

This PR includes tests:
- ingress conformance test
- scaling up and down backends
- switching backend between IG and NEG
- rolling update backend should not cause service disruption

```release-note
NONE
```
2017-10-24 15:59:17 -07:00
Kubernetes Submit Queue fd36b824e5 Merge pull request #54229 from jsafrane/speed-up-volume-test
Automatic merge from submit-queue (batch tested with PRs 54229, 54380, 54302, 54454). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Speed up volume tests by reducing pod grace period

busybox-based pods don't react nicely to docker stop. By reducing the pod grace period we can save ~29 seconds per volume test.

```release-note
NONE
```
2017-10-24 08:35:02 -07:00
Janet Kuo f7f1f17c89 Move deployment e2e test for hash label adoption to integration 2017-10-23 21:35:19 -07:00
Janet Kuo 1ca6120af3 Move deployment e2e test for rollback with no revision to integration 2017-10-23 18:58:05 -07:00
Minhan Xia 23ff1bb704 add e2e test for syncing endpoints to NEG 2017-10-23 17:21:25 -07:00
Kubernetes Submit Queue cf84e37c2c Merge pull request #53924 from dims/use-in-cluster-config-before-default
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Try in-cluster config before using localhost:8080

**What this PR does / why we need it**:

When starting an e2e test in a pod in a cluster, if the host is
not specified in the command line, we default to using
'http://127.0.0.1:8080' currently. We should be discovering the
host/port using the in-cluster config and using that if
possible.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

Fixes #53894

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-19 18:26:10 -07:00
Kubernetes Submit Queue adb7aed2b0 Merge pull request #53841 from crimsonfaith91/deployment-upgrade
Automatic merge from submit-queue (batch tested with PRs 52794, 54243, 54248, 53491, 53841). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

revamp deployment upgrade test

**What this PR does / why we need it**:
This PR revamps the existing deployment upgrade test, removing redundant steps that are covered by the replicaset upgrade test.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113

**Special notes for your reviewer**:
The replicaset upgrade test PR is here: #52449

**Release note**:

```release-note
NONE
```
2017-10-19 16:27:16 -07:00
Jun Xiang Tee dd3b782058 revamp deployment upgrade test 2017-10-19 12:49:08 -07:00
Zihong Zheng 8f216be894 Skip ILB e2e test on GCP if cluster size exceeds limit 2017-10-19 11:42:16 -07:00
Jan Safranek 75cc26fb65 Allow Ceph server some time to start
Ceph server needs to create our "foo" volume on startup. It keeps the image
small, however it makes the server container start slow.

Add sleep before the server is usable. Without this PR, all pods that use Ceph
fail to start for couple of seconds with cryptic "image foo not found" error
and it clutters logs and pod logs and makes it harder to spot real errors.
2017-10-19 14:29:58 +02:00
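The merged fix adds a plain sleep; purely for illustration, here is a hedged sketch of the polling alternative the e2e framework commonly uses (`waitForServer` and the address handling are hypothetical), waiting until the server container accepts connections instead of sleeping a fixed time:

```go
package volumeutil

import (
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForServer polls until addr accepts TCP connections, so test pods
// that mount the volume don't race the server container's slow startup.
func waitForServer(addr string, timeout time.Duration) error {
	return wait.Poll(2*time.Second, timeout, func() (bool, error) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err != nil {
			return false, nil // not ready yet, keep polling
		}
		conn.Close()
		return true, nil
	})
}
```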
Jan Safranek acbe62cff5 Speed up volume tests by reducing pod grace period
busybox-based pods don't react nicely to docker stop. By reducing the pod
grace period we can save ~29 seconds per volume test.
2017-10-19 14:19:42 +02:00
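A short sketch of the mechanism (the pod below is illustrative, not the actual test pod): `TerminationGracePeriodSeconds` is the real PodSpec field, and lowering it bounds how long the kubelet waits for a container that ignores SIGTERM:

```go
package volumeutil

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testPod builds a hypothetical test pod with a short grace period.
// busybox ignores the TERM signal docker sends on stop, so the kubelet
// otherwise waits out the full default of 30 seconds before killing it.
func testPod(name, image string) *v1.Pod {
	grace := int64(1) // seconds
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []v1.Container{
				{Name: "c", Image: image, Command: []string{"sleep", "3600"}},
			},
		},
	}
}
```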
Kubernetes Submit Queue 809987dfbd Merge pull request #54034 from dixudx/e2e_multiarch_busybox
Automatic merge from submit-queue (batch tested with PRs 52753, 54034, 53982, 54209). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

use multi-arch busybox for e2e

**What this PR does / why we need it**:
Since [multi-arch is supported already for Official images on Dockerhub](https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/), we can use `busybox` directly instead of having our own `GetBusyBoxImage` for multi-arch. 

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 
xref #53958

**Special notes for your reviewer**:
/assign @mkumatag @ixdy 

**Release note**:

```release-note
Use multi-arch busybox image for e2e
```
2017-10-19 02:45:19 -07:00
Kubernetes Submit Queue bd388e0d82 Merge pull request #51310 from xiangpengzhao/sc-eg
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Replace storage-class annotations with field in examples

**What this PR does / why we need it**:
storage class is already GA. Replace annotations with field `StorageClassName` in examples.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51435 (update: thanks @gyliu513 for the issue)
ref: https://github.com/kubernetes/kubernetes/pull/50654#discussion_r134954171

**Special notes for your reviewer**:
We may also want to remove the beta annotations in 1.8 since the field will have already been in two releases. If @kubernetes/sig-storage-api-reviews confirm this, I'd like to help remove it.

/cc @liggitt @jsafrane @msau42 

**Release note**:

```release-note
NONE
```
2017-10-18 20:31:15 -07:00
Di Xu f7f3577035 use multi-arch busybox for e2e 2017-10-19 10:36:31 +08:00
Minhan Xia a324248287 add ingress conformance test for NEG 2017-10-18 15:37:32 -07:00
Dr. Stefan Schimanski cad0364e73 Update bazel 2017-10-18 17:24:04 +02:00
Dr. Stefan Schimanski 7773a30f67 pkg/api/legacyscheme: fixup imports 2017-10-18 17:23:55 +02:00
m1093782566 67f4461173 fix CI error for session affinity 2017-10-17 11:17:33 +08:00
Kubernetes Submit Queue 2956c16328 Merge pull request #52449 from crimsonfaith91/rs-upgrade
Automatic merge from submit-queue (batch tested with PRs 53106, 52193, 51250, 52449, 53861). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

add replicaset upgrade test

**What this PR does / why we need it**:
This PR adds existing replicaset upgrade test.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52118

**Release note**:

```release-note
NONE
```
2017-10-16 14:47:26 -07:00
Jun Xiang Tee db2c027154 add replicaset upgrade test 2017-10-16 12:10:04 -07:00
Jeff Grafton aee5f457db update BUILD files 2017-10-15 18:18:13 -07:00
Davanum Srinivas 3aba6451a7 Try in-cluster config before using localhost:8080
When starting an e2e test in a pod in a cluster, if the host is
not specified in the command line, we default to using
'http://127.0.0.1:8080' currently. We should try the in-cluster
config, save it to a temporary file and use that with kubectl
2017-10-14 19:30:19 -04:00
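A hedged sketch of that fallback order, covering only the config-discovery half (not writing the temporary kubeconfig for kubectl); `loadConfig` is a hypothetical helper, while `rest.InClusterConfig` and `clientcmd.BuildConfigFromFlags` are the real client-go entry points:

```go
package e2econfig

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// loadConfig resolves the apiserver config: an explicit --host flag wins,
// then the in-cluster config when the tests run inside a pod, and only
// then the historical localhost:8080 default.
func loadConfig(host string) (*rest.Config, error) {
	if host != "" {
		return clientcmd.BuildConfigFromFlags(host, "")
	}
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	return clientcmd.BuildConfigFromFlags("http://127.0.0.1:8080", "")
}
```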
Kubernetes Submit Queue 7165733037 Merge pull request #53677 from janetkuo/sts-e2e-typo
Automatic merge from submit-queue (batch tested with PRs 53678, 53677, 53682, 53673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix typo in StatefulSet e2e test

Found it while reviewing #53218

**Release note**:

```release-note
NONE
```
2017-10-10 18:36:02 -07:00
Janet Kuo 925473117f Fix typo in StatefulSet e2e test 2017-10-10 14:19:31 -07:00
jeff vance d7e18ac5d0 wait for pod to be fully deleted 2017-10-10 10:58:50 -07:00
Kubernetes Submit Queue 8849878bab Merge pull request #50447 from m1093782566/e2e-session-affinity
Automatic merge from submit-queue (batch tested with PRs 50447, 53308). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

[e2e] add service session affinity test case

**What this PR does / why we need it**:

**Which issue this PR fixes**: 

Add service session affinity test case for e2e

fixes #31712

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-10-09 00:32:11 -07:00
Dane LeBlanc 81ff1f87f0 Fallback to internal addrs in e2e tests when no external addrs available
This change modifies the way that config.NodeIP is selected at the
start of e2e Networking tests such that if no external addresses are
available from the cloud provider (e.g. either no cloud provider being
used [baremetal or VMs], or the provider doesn't have external IPs
configured), then one of the internal addresses is used.

Without this change, the e2e service-related Networking tests would always
panic when config.ExternalAddrs[0] is accessed and the slice is empty.

This change eliminates the panic, and in some setups, the fallback choice
of using an internal address will provide the necessary connectivity
for the e2e Networking tests to access each node.

fixes #53568
2017-10-08 10:53:19 -04:00
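The selection logic reads roughly like this sketch (`pickNodeIP` is a hypothetical name; the address types are the real `v1.NodeAddressType` constants):

```go
package netutil

import v1 "k8s.io/api/core/v1"

// pickNodeIP prefers an ExternalIP but falls back to an InternalIP, so
// clusters without external addresses no longer hit the empty-slice panic.
func pickNodeIP(node *v1.Node) string {
	internal := ""
	for _, addr := range node.Status.Addresses {
		switch addr.Type {
		case v1.NodeExternalIP:
			return addr.Address
		case v1.NodeInternalIP:
			if internal == "" {
				internal = addr.Address
			}
		}
	}
	return internal // may be empty if the node reports no usable address
}
```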
Kubernetes Submit Queue edd6c8e4af Merge pull request #52688 from tanshanshan/new-test-pr
Automatic merge from submit-queue (batch tested with PRs 53350, 52688, 53531, 52515). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

PodReady should be replaced with podutil.IsPodReady

**What this PR does / why we need it**:
PodReady should be replaced with podutil.IsPodReady.
Thanks.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-10-06 21:32:11 -07:00
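For reference, a self-contained sketch of the check that `podutil.IsPodReady` performs (this mirrors the behavior; it is not the library source): the PodReady condition must be present and True:

```go
package podutil

import v1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True, which
// is the check the replaced hand-rolled helpers were duplicating.
func isPodReady(pod *v1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == v1.PodReady {
			return cond.Status == v1.ConditionTrue
		}
	}
	return false
}
```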
Kubernetes Submit Queue 86b23dfb8b Merge pull request #53227 from vmware/DummyVME2ETestvSphereK8s
Automatic merge from submit-queue (batch tested with PRs 53227, 53120). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E test to verify clean up of stale dummy VM for vSphere dynamic provisioning

Verify that the stale dummy VMs created during dynamic provisioning are deleted by the cleanup routine in the vSphere Cloud Provider.

**Testing Done:**
- Create a storage class with invalid policy on a VSAN datastore.
- Create a PVC using the above storage class
- Verify if the PVC is not bound.
- Delete the PVC.
- Sleep for 6 minutes so that the vSphere Cloud Provider cleanup routine can delete the stale dummy VMs.
- Verify if the VM is not present. Otherwise fail the test.

@rohitjogvmw @divyenpatel 


```release-note
None
```
2017-10-05 13:07:35 -07:00
Dane LeBlanc 4be181739e e2e tests need a ping6 test for IPv6-only clusters
e2e tests provide only an (IPv4) ping test for external connectivity.

We need a way to conditionally run a ping6 external connectivity check,
and disable the (IPv4) ping-based external connectivity check,
for end-to-end testing on IPv6-only clusters.

This feature will be needed to create gating IPv6 CI tests.

fixes #53383
2017-10-03 19:51:53 -04:00
Balu Dontu f15aa051cd Verify cleanup of stale VMs for vSphere dynamic provisioning 2017-10-03 16:24:46 -07:00
Kubernetes Submit Queue 028ee090f6 Merge pull request #49393 from hongchaodeng/etcd_update
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

etcd: update version to 3.1.10

ref: https://github.com/kubernetes/kubernetes/issues/49386

Need image pushed:
```
gcr.io/google_containers/etcd:3.1.10
```
2017-10-02 23:29:51 -07:00
Kris 546b28ac5f Version should be quoted so jq doesn't interpret it as numeric 2017-10-02 13:01:03 -07:00
Hongchao Deng 39e5a56691 etcd: update version to 3.1.10 2017-10-02 12:27:46 -07:00
Kris 8e4bc73a74 Fix sed command to not try shell redirection
RunCmd uses Go's os/exec library to run commands directly. Since these
are not run through a shell, we can't use shell syntax for piping or
file redirection. The proper way to do that is to create a Command
object and set the Std{in,out,err} pipes appropriately. Luckily sed
can handle the behavior we need without having to manually set this up.
2017-09-29 10:01:11 -07:00
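A sketch of the two non-shell options the message describes, using only the standard library (file names and sed patterns are placeholders):

```go
package cmdutil

import (
	"os"
	"os/exec"
)

// rewriteInPlace lets sed do the editing itself, avoiding any need for
// shell redirection. `sed 's/a/b/' in > out` would fail under os/exec
// because ">" and "out" arrive as literal arguments.
func rewriteInPlace(path string) error {
	return exec.Command("sed", "-i", "s/old/new/g", path).Run()
}

// rewriteTo shows the explicit alternative: wire the output file to the
// command's stdout, which is what the shell's "> out" would have done.
func rewriteTo(src, dst string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	cmd := exec.Command("sed", "s/old/new/g", src)
	cmd.Stdout = out
	return cmd.Run()
}
```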
xiangpengzhao 4bc05f4fc2 Remove storage-class annotations in examples 2017-09-29 10:09:30 +08:00
Kubernetes Submit Queue 1f45cd06b3 Merge pull request #52250 from RenaudWasTaken/e2e-device-plugin-failure
Automatic merge from submit-queue (batch tested with PRs 50988, 50509, 52660, 52663, 52250). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Added device plugin e2e kubelet failure test

Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>

**What this PR does / why we need it**:
This is part of issue #52859 (fixes #52859)

This PR adds a e2e_node test for the device plugin.
Specifically it implements testing of failure handling by the device plugin components in case Kubelet restart / crashes.

I might try to refactor the GPU tests in a later PR.

**Special notes for your reviewer**:
@jiayingz @vishh 

**Release note**:
```release-note
NONE
```
2017-09-27 05:32:30 -07:00
Kubernetes Submit Queue 52f96dae45 Merge pull request #52896 from janetkuo/deploy-hash-inte
Automatic merge from submit-queue (batch tested with PRs 52721, 53057, 52493, 52998, 52896). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Move deployment collision avoidance e2e test to integration

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: ref #52113

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-09-26 15:51:25 -07:00
Kubernetes Submit Queue 8683b3d530 Merge pull request #52961 from sbezverk/add_signer_for_skeleton
Automatic merge from submit-queue (batch tested with PRs 51067, 52319, 52803, 52961, 51972). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Add support for skeleton in GetSigner

Adding support for skeleton to GetSigner to be able to run
e2e tests against a bare metal multinode cluster.
Closes #35613
2017-09-25 14:50:56 -07:00
Janet Kuo 24eb21e6cf Use PollImmediate and shorter interval in integration test 2017-09-25 14:17:43 -07:00
Janet Kuo 241f4fbc98 Move deployment collision avoidance e2e test to integration 2017-09-25 10:27:31 -07:00
Yu-Ju Hong 331628b7dc Move prometheus metrics for docker operations into dockershim 2017-09-25 10:03:17 -07:00
Kubernetes Submit Queue cb6f62d92f Merge pull request #52905 from aleksandra-malinowska/autoscaling-fix-7
Automatic merge from submit-queue (batch tested with PRs 52905, 52766). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Refactor parsing cluster autoscaler status, add logging error

Minor improvements to autoscaling test suite and e2e framework.
2017-09-25 04:02:51 -07:00
tanshanshan 65b59474dc fix-todo 2017-09-25 15:42:21 +08:00
Serguei Bezverkhi 6201727935 Add support for skeleton in GetSigner
Adding support for skeleton to GetSigner to be able to run
e2e tests against a bare metal multinode cluster.
2017-09-24 20:26:28 -04:00
Kubernetes Submit Queue 8c29b6540b Merge pull request #52751 from MrHohn/e2e-service-cleanup-fix
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Fix GCE LB resource cleanup for service e2e tests.

**What this PR does / why we need it**: Fix GCE LB resource cleanup logic.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52347

**Special notes for your reviewer**:
/assign @shyamjvs @nicksardo 

**Release note**:

```release-note
NONE
```
2017-09-24 05:21:16 -07:00
Kubernetes Submit Queue 2e7efd3af3 Merge pull request #52485 from flix-tech/sig-test-45947-remove-flag
Automatic merge from submit-queue (batch tested with PRs 52485, 52443, 52597, 52450, 51971). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Removing PrometheusPushGateway --prom-push-gateway flag from e2e tests.

**What this PR does / why we need it**: Removing obsolete PrometheusPushGateway --prom-push-gateway flag from e2e tests.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45947

**Special notes for your reviewer**:

**Release note**:

```release-note
Removing `--prom-push-gateway` flag from e2e tests
```
2017-09-23 18:48:50 -07:00
Kubernetes Submit Queue 7e7bcabe17 Merge pull request #52355 from davidz627/e2e_nil
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

E2E test to make sure controller does not crash because of nil volume spec

Fixes #49521

Tests fix of issue referenced in #49418
2017-09-23 15:25:07 -07:00
Kubernetes Submit Queue 044e79c714 Merge pull request #52134 from yujuhong/minor-test-fixes
Automatic merge from submit-queue (batch tested with PRs 50392, 52108, 52083, 52134, 51526). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

e2e: minor changes to network/service testing utils

Add more logging to help debug. Also refactor several functions to improve
reusability.
2017-09-23 07:14:05 -07:00
Hemant Kumar 381e334d87 Fix volume metric flake
Make sure we only run this test in environments
that support it.
2017-09-22 16:30:11 -04:00
Jiaying Zhang ba40bee5c1 Modified test/e2e_node/gpu-device-plugin.go to make sure it passes. 2017-09-22 20:21:26 +02:00
Aleksandra Malinowska 88da2c1c70 refactor parsing cluster autoscaler status 2017-09-22 12:26:50 +02:00
Renaud Gaubert 6993612cec Added device plugin e2e kubelet failure test
Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>
2017-09-22 01:24:01 +02:00
Kubernetes Submit Queue 542486186f Merge pull request #52732 from shyamjvs/fix-metrics-perf-tests
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Increase api latency threshold for cluster-scoped list calls

Recent change from @smarterclayton (https://github.com/kubernetes/kubernetes/pull/52237) added scope to apiserver metrics. As a result, our current threshold for list calls is no longer sufficient for all-namespace calls which are now being measured separately from namespaced lists. For e.g (from our [last 5k run](https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/37)):

```
WARNING Top latency metric: {Resource:pods Subresource: Verb:LIST Scope:cluster Latency:{Perc50:4.498374s Perc90:7.548079s Perc99:8.169389s Perc100:0s} Count:1400}
```

cc @kubernetes/sig-scalability-misc @kubernetes/sig-api-machinery-misc @wojtek-t
2017-09-21 10:49:54 -07:00
Shyam Jeedigunta f373645865 Increase api latency threshold for cluster-scoped list calls 2017-09-21 13:33:22 +02:00
Kubernetes Submit Queue 654c522e4c Merge pull request #52477 from jamiehannaford/kubernetes-anywhere
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.

Support kubernetes-anywhere provider

**What this PR does / why we need it**:

Implements a new `kubernetes-anywhere` provider to allow upgrade testing in the e2e binary. This is the final step to allow https://github.com/kubernetes/test-infra/pull/4495 and https://github.com/kubernetes/kubernetes-anywhere/pull/450.

**Which issue this PR fixes**:

https://github.com/kubernetes/kubeadm/issues/311

**Special notes for your reviewer**:

Some questions I had

- Does the `--provider` flag specified [here](dbbf6261e0/jobs/config.json (L8587)) get sent to the flag defined [here](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/test_context.go#L219)? Or should I add another `--provider` flag inside `--upgrade_args` like this: `--upgrade_args=... --provider=kubernetes-anywhere`?
- Is it necessary to add waiting logic after the `make` command, or will it implicitly handle that by itself?

Some other points:

- I chose `sed` to manipulate the current kubernetes-anywhere `.config` rather than duplicating another [`anywhere.go`](https://github.com/kubernetes/test-infra/blob/master/kubetest/anywhere.go). One suggestion was to use `jq` but since the config on disk is not serialized to JSON yet, I'm not sure how that'd work.
- Since I don't have a GCE/GKE account or vCenter, I can't actually verify the e2e binary works. I've managed to build it, but if somebody could quickly run a smoke test, I'd appreciate it. This is my first poke around test-infra and e2e, so there might be some plumbing missing

/cc @jessicaochen @luxas @pipejakob @roberthbailey
2017-09-20 15:20:47 -07:00
Zihong Zheng 5532e24280 Fix GCE LB resource cleanup for service e2e tests. 2017-09-19 15:42:41 -07:00
Jamie Hannaford 69f5feb295 Support kubernetes-anywhere provider 2017-09-15 11:13:08 +02:00
Kubernetes Submit Queue 93ddb7be5f Merge pull request #52237 from smarterclayton/watch_metric
Automatic merge from submit-queue (batch tested with PRs 51824, 50476, 52451, 52009, 52237)

Improve apiserver metrics reporting

Normalize "WATCHLIST" to "WATCH", add "scope" to the other metrics (listing 50k pods is != listing pods in a namespace), and add a new scope "resource" to cover individual resource calls.

This roughly aligns metrics with our ACL model (technically resource scope is GET, but POST to a subresource and POST to a namespace are not the same thing).

```release-note
WATCHLIST calls are now reported as WATCH verbs in prometheus for the apiserver_request_* series.  A new "scope" label is added to all apiserver_request_* values that is either 'cluster', 'resource', or 'namespace' depending on which level the query is performed at.
```
2017-09-15 01:08:11 -07:00
Kubernetes Submit Queue 5d995e3f7b Merge pull request #52372 from caesarxuchao/remove-config-copy
Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)

Remove the conversion of client config

It was needed because the clientset code in client-go was a copy of the clientset code in Kubernetes. client-go is authoritative now, so we can remove the nasty copy.
2017-09-14 15:27:17 -07:00
Niels-Ole Kühl 56247c4e83 Removing PrometheusPushGateway --prom-push-gateway flag from e2e tests. 2017-09-14 14:13:31 +02:00
David Zhu 7e10741f94 E2E test to make sure controller does not crash because of nil volume spec. 2017-09-13 17:01:24 -07:00
Aleksandra Malinowska c173296632 log gcloud command error 2017-09-13 11:56:55 +02:00
Chao Xu 6c5a8d5db9 Remove the conversion of client config, because client-go is authoritative now 2017-09-12 16:02:17 -07:00
Anthony Yeh bff5f7e6b0
StatefulSet: Deflake e2e RunHostCmd more.
It turns out that at some points while the Node is recovering from a
reboot, we get a different kind of error ("unable to upgrade
connection"). Since we can't distinguish these transient errors from an
error encountered after successfully executing the remote command,
let's just retry all errors for 5min. If this doesn't work, I'm gonna
blame it on sig-node.
2017-09-12 10:12:46 -07:00
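A hedged sketch of that retry-everything pattern (`retryHostCmd` and the injected `runHostCmd` func are hypothetical; `wait.PollImmediate` is the real apimachinery helper):

```go
package stsutil

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryHostCmd retries any error from the injected command runner for up
// to 5 minutes, because transient exec failures ("connection refused",
// "unable to upgrade connection") can't be told apart from real ones.
func retryHostCmd(runHostCmd func() (string, error)) (string, error) {
	var out string
	var lastErr error
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		var runErr error
		out, runErr = runHostCmd()
		if runErr != nil {
			lastErr = runErr
			return false, nil // retry: the error may be transient
		}
		return true, nil
	})
	if err != nil {
		if lastErr != nil {
			return "", lastErr
		}
		return "", err
	}
	return out, nil
}
```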
Clayton Coleman 30a92a8f0a
Report scope in e2e test metrics 2017-09-11 22:13:55 -04:00
Kubernetes Submit Queue ad0d36f0f0 Merge pull request #52111 from MrHohn/kube-proxy-upgrade-image
Automatic merge from submit-queue

Pipe in upgrade image target for kube-proxy migration tests

**What this PR does / why we need it**:
https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-upgrade-kube-proxy-ds&width=20
and
https://k8s-testgrid.appspot.com/sig-network#gci-gce-latest-downgrade-kube-proxy-ds&width=20
are still failing.

Reproduced it locally and found the node image is being defaulted to debian during upgrade (it was gci before upgrade) because we don't pass in `gci` via `--upgrade-target`. And for some reason (haven't figured out yet), the upgraded node uses the debian image with gci startupscripts...

This PR pipes in `--upgrade-target` for kube-proxy migration tests, hopefully in conjunction with https://github.com/kubernetes/test-infra/pull/4447 it will bring the tests back to normal.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #NONE 

**Special notes for your reviewer**:
Sorry for bothering again.
/assign @krousey 

**Release note**:

```release-note
NONE
```
2017-09-07 20:46:04 -07:00
Yu-Ju Hong 0b38495c42 e2e: minor changes to network/service testing utils
Add more logging to help debug. Also refactor several functions to improve
reusability.
2017-09-07 18:43:47 -07:00
Kubernetes Submit Queue f4f21b3f06 Merge pull request #52054 from janetkuo/pause-dep-integra
Automatic merge from submit-queue (batch tested with PRs 52097, 52054)

Move paused deployment e2e tests to integration

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: xref #52113

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-09-07 15:28:25 -07:00
Zihong Zheng 0cb6471f35 Pipe in upgrade image target to kube-proxy migration tests 2017-09-07 13:39:27 -07:00
Kubernetes Submit Queue 507af4b9c2 Merge pull request #52057 from enisoc/sts-deflake
Automatic merge from submit-queue

StatefulSet: Deflake e2e RunHostCmd.

The initial retry up to 20s was giving up too soon. I'm seeing this test flake because the Node rebooted and it takes ~2min to recover. Now StatefulSet RunHostCmd calls will use the same 5min timeout as with other Pod state checks.

ref #48031
2017-09-07 11:42:32 -07:00
Janet Kuo 124344a1a4 Move paused deployment e2e tests to integration 2017-09-06 18:12:28 -07:00
Anthony Yeh b4f639f57a
StatefulSet: Deflake e2e RunHostCmd.
The initial retry up to 20s was giving up too soon.
I'm seeing this test flake because the Node rebooted and it takes ~2min
to recover.
Now StatefulSet RunHostCmd calls will use the same 5min timeout as with
other Pod state checks.
2017-09-06 17:51:11 -07:00
Yu-Ju Hong bb50086b8f e2e: network tiers should retry on 404 errors
The feature is still Alpha and at times, the IP address previously used
by the load balancer in the test will not be completely freed even after
the load balancer is long gone. In this case, the test URL with the IP
would return a 404 response. Tolerate this error and retry until the new
load balancer is fully established.
2017-09-06 13:16:28 -07:00
Jordan Liggitt f61ac93a0d
Fix dynamic discovery error in e2e 2017-09-05 23:01:54 -04:00
Kubernetes Submit Queue 1732a8b9bd Merge pull request #51562 from nicksardo/gce-attempt-firewall
Automatic merge from submit-queue (batch tested with PRs 51915, 51294, 51562, 51911)

GCE: Gracefully handle permission errors when attempting to create firewall rules

Purpose of this PR is to raise events from the GCE cloud provider if the GCE service account does not have the permissions necessary to create/update/delete firewall rules. 

Fixes #51812

**Release note**:
```release-note
NONE
```

Example Events:

```
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                          Message
  ---------     --------        -----   ----                    -------------   --------        ------                          -------
  2m            2m              1       service-controller                      Normal          EnsuringLoadBalancer            Ensuring load balancer
  2m            2m              1       gce-cloudprovider                       Normal          LoadBalancerManualChange        Firewall change required by network admin: `gcloud compute firewall-rules create aa8a1dd628ddb11e78ce042010a80000 --network https://www.googleapis.com/compute/v1/projects/playground/global/networks/e2e-test-nicksardo --description "{\"kubernetes.io/service-name\":\"default/myechosvc1\", \"kubernetes.io/service-ip\":\"\"}" --allow tcp:9000 --source-ranges 0.0.0.0/0 --target-tags e2e-test-nicksardo-minion --project playground`
  2m            2m              1       gce-cloudprovider                       Normal          LoadBalancerManualChange        Firewall change required by network admin: `gcloud compute firewall-rules create k8s-1aee5045e658d174-node-hc --network https://www.googleapis.com/compute/v1/projects/playground/global/networks/e2e-test-nicksardo --description "" --allow tcp:10256 --source-ranges 130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22 --target-tags e2e-test-nicksardo-minion --project playground`
  1m            1m              1       service-controller                      Normal          EnsuredLoadBalancer             Ensured load balancer
```
2017-09-05 08:47:28 -07:00
Jordan Liggitt 5acd5b52f4
Tolerate group discovery errors in e2e ns cleanup 2017-09-04 17:31:17 -04:00
Nick Sardo 676b95e097 Gracefully handle permission errors when attempting to create firewall rules 2017-09-04 09:00:49 -07:00
cedric lamoriniere 1dbef2f113
Job failure policy support in JobController
Job failure policy integration in JobController. From the
JobSpec.BackoffLimit the JobController will define the backoff
duration between Job retry.

It uses the `workqueue.RateLimitingInterface` to store the number of
retries as requeues, and the default initial Job backoff duration is set
during the initialization of the `workqueue.RateLimiter`.

Since the number of retries for each Job is stored in a local structure,
"JobController.queue", the retry count is lost if the JobController
restarts, and the backoff duration is reset to 0.

Add e2e test for Job backoff failure policy
2017-09-03 12:07:12 +02:00
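A compact sketch of the mechanism described above, using the real `workqueue` constructors (durations and the `retryJob` helper are illustrative, not the controller's actual code):

```go
package jobcontroller

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// The queue's rate limiter counts requeues per key, so each retry of a
// failed Job is delayed exponentially (durations are illustrative).
var queue = workqueue.NewRateLimitingQueue(
	workqueue.NewItemExponentialFailureRateLimiter(10*time.Second, 6*time.Minute),
)

// retryJob requeues a failed Job until its BackoffLimit is exhausted.
// NumRequeues lives in process memory, so a controller restart resets it,
// exactly the limitation the commit message calls out.
func retryJob(key string, backoffLimit int) {
	if queue.NumRequeues(key) >= backoffLimit {
		queue.Forget(key) // give up; the Job should be marked failed
		return
	}
	queue.AddRateLimited(key)
}
```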
Manjunath A Kumatagi ee4d54c70c Port e2e tests for multi architecture 2017-09-01 05:40:52 +05:30
Manjunath A Kumatagi 22c3a590d1 Fix bazel 2017-09-01 05:39:00 +05:30
Kubernetes Submit Queue 022919d1a4 Merge pull request #51483 from yujuhong/e2e-net-tiers
Automatic merge from submit-queue

e2e: Add tests for network tiers in GCE

This test depends on #51301, which adds the new feature. Only the `e2e: Add tests for network tiers in GCE` commit is new.
#51301 should pass this new test.
2017-08-30 06:55:35 -07:00
Kubernetes Submit Queue 04bc4ec716 Merge pull request #50398 from pci/gcloud-compute-list
Automatic merge from submit-queue (batch tested with PRs 47054, 50398, 51541, 51535, 51545)

Switch away from gcloud deprecated flags in compute resource listings

**What is fixed**

Remove deprecated `gcloud compute` flags, see linked issue.

**Which issue this PR fixes**:

fixes #49673 

**Special notes for your reviewer**:

The change in `gcloudComputeResourceList` in `test/e2e/framework/ingress_utils.go` isn't strictly needed, as currently no affected resources are called on within that file; however, the function has the _potential_ to access affected resources, so I covered it as well. Happy to change if deemed unnecessary.

**Release note**:

```release-note
NONE
```
2017-08-30 01:51:29 -07:00
Kubernetes Submit Queue b4d08cb9b5 Merge pull request #50940 from MrHohn/kube-proxy-ds-upgrade-tests
Automatic merge from submit-queue (batch tested with PRs 51228, 50185, 50940, 51544, 51543)

Add upgrades tests for kube-proxy daemonset migration path

**What this PR does / why we need it**:
From #23225, this is a part of setting up CIs to validate the kube-proxy migration path (static pods -> daemonset and reverse).
The other part of the works (adding real CIs that run these tests) will be in a separate PR against [kubernetes/test-infra](https://github.com/kubernetes/test-infra).

Though this is currently blocked by #50705.

**Special notes for your reviewer**:
cc @roberthbailey  @pwittrock 

**Release note**:

```release-note
NONE
```
2017-08-29 23:54:30 -07:00
Kubernetes Submit Queue 01e961b380 Merge pull request #49749 from sbezverk/e2e_selinux_local_starage_test
Automatic merge from submit-queue (batch tested with PRs 51377, 46580, 50998, 51466, 49749)

Adding e2e SELinux test for local storage

Adding e2e test for SELinux enabled local storage
/sig storage
Closes #45054
2017-08-29 22:57:11 -07:00
Philip Ingrey 697f92a5d2
Switch away from gcloud deprecated flags in compute resource listings 2017-08-30 06:41:09 +01:00
Shyam JVS 36910232ab Merge pull request #51343 from shyamjvs/correct-cluster-ip-range
Correct default cluster-ip-range subnet
2017-08-30 01:31:50 +02:00
Shyam Jeedigunta 2df4698473 Correct default cluster-ip-range subnet 2017-08-29 23:15:23 +02:00
Zihong Zheng 5dc0845e36 Add upgrades tests for kube-proxy daemonset migration path 2017-08-29 10:16:37 -07:00
Kubernetes Submit Queue 25da6e64e2 Merge pull request #48454 from weiwei04/check-job-activeDeadlineSeconds
Automatic merge from submit-queue (batch tested with PRs 44719, 48454)

check job ActiveDeadlineSeconds

**What this PR does / why we need it**:

enqueue a sync task after ActiveDeadlineSeconds

**Which issue this PR fixes** *: 

fixes #32149

**Special notes for your reviewer**:

**Release note**:

```release-note
enqueue a sync task to wake up the JobController to check the Job's ActiveDeadlineSeconds in time
```
2017-08-29 08:25:06 -07:00
Wei Wei 46239ea30b check job ActiveDeadlineSeconds 2017-08-29 20:15:11 +08:00
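A sketch of the idea behind the fix (hypothetical helper; `AddAfter` is the real `workqueue.DelayingInterface` method): schedule a delayed re-sync so the controller wakes up when the deadline expires rather than waiting for an unrelated event:

```go
package jobcontroller

import (
	"time"

	batch "k8s.io/api/batch/v1"
	"k8s.io/client-go/util/workqueue"
)

// scheduleDeadlineSync enqueues a delayed sync for a running Job so the
// controller re-checks it right when ActiveDeadlineSeconds expires,
// instead of waiting for an unrelated pod or Job event.
func scheduleDeadlineSync(q workqueue.DelayingInterface, key string, job *batch.Job) {
	if job.Spec.ActiveDeadlineSeconds == nil || job.Status.StartTime == nil {
		return
	}
	deadline := job.Status.StartTime.Add(
		time.Duration(*job.Spec.ActiveDeadlineSeconds) * time.Second)
	if delay := time.Until(deadline); delay > 0 {
		q.AddAfter(key, delay)
	}
}
```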
Andrzej Wasylkowski 0c1ab5597e Renamed ClusterSize and WaitForClusterSize to NumberOfReadyNodes and WaitForReadyNodes, respectively. 2017-08-29 11:53:17 +02:00
Andrzej Wasylkowski 9b0f4c9f7c Added an end-to-end test ensuring that Cluster Autoscaler does not scale up when all pending pods are unschedulable. 2017-08-29 11:52:26 +02:00
Yu-Ju Hong f33c37e102 e2e: Add tests for network tiers in GCE 2017-08-28 18:40:20 -07:00
Serguei Bezverkhi d904e52570 Adding e2e SELinux test for local storage
Also changing the provisioner bootstrapper from Pod to Job
2017-08-28 19:12:17 -04:00
Kubernetes Submit Queue b65d665b99 Merge pull request #51264 from m1093782566/e2e-maxTries
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Fix e2e network util wrong output message

**What this PR does / why we need it**:

See https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L217

and 

https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L273

I assume it should be `minTries` -> `MaxTries`

This PR fixes the wrong output message.

**Which issue this PR fixes**: fixes #51265

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 22:43:37 -07:00
Kubernetes Submit Queue 11299e363c Merge pull request #51282 from shyamjvs/new-allowed-not-ready-semantics
Automatic merge from submit-queue

AllowedNotReadyNodes allowed to be not ready for absolutely *any* reason

This is effectively the same as allowing that many nodes to never be part of the cluster at all.

Btw - currently our 5k-node correctness test fails if "kubelet stopped posting node status" or "route not created", etc (ref: https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-correctness/3/build-log.txt)

cc @kubernetes/sig-scalability-misc
2017-08-25 05:00:32 -07:00
Kubernetes Submit Queue 81363abc20 Merge pull request #51230 from enisoc/sts-deflake-exec
Automatic merge from submit-queue (batch tested with PRs 50213, 50707, 49502, 51230, 50848)

StatefulSet: Deflake e2e `kubectl exec` commands.

This may help with another source of flakiness found while investigating #48031.

We seem to get a lot of flakes due to "connection refused" while running `kubectl exec`. I can't find any reason this would be caused by the test flow, so I'm adding retries to see if that helps.
2017-08-25 01:10:35 -07:00
Kubernetes Submit Queue ce3e2d9b10 Merge pull request #51224 from enisoc/sts-deflake-restart
Automatic merge from submit-queue (batch tested with PRs 51224, 51191, 51158, 50669, 51222)

StatefulSet: Deflake e2e "restart" phase.

This addresses another source of flakiness found while investigating #48031.

The test used to scale the StatefulSet down to 0, wait for ListPods to return 0 matching Pods, and then scale the StatefulSet back up.

This was prone to a race in which StatefulSet was told to scale back up before it had observed its own deletion of the last Pod, as evidenced by logs showing the creation of Pod ss-1 prior to the creation of the replacement Pod ss-0.

Instead, we now wait for the controller to observe all deletions before scaling it back up. This should fix flakes of the form:

```
Too many pods scheduled, expected 1 got 2
```
2017-08-24 22:59:28 -07:00
Anthony Yeh 05d6c8a6c2
StatefulSet: Deflake e2e `kubectl exec` commands.
We seem to get a lot of flakes due to "connection refused" while running
`kubectl exec`. I can't find any reason this would be caused by the test
flow, so I'm adding retries to see if that helps.
2017-08-24 11:42:05 -07:00
Shyam Jeedigunta b374416807 AllowedNotReadyNodes allowed to be not ready for absolutely *any* reason 2017-08-24 19:39:26 +02:00
m1093782566 b8edd9b885 fix e2e network wrong output message 2017-08-24 19:39:42 +08:00
m1093782566 4356b49415 e2e test session affinity 2017-08-24 19:36:49 +08:00
Kubernetes Submit Queue db928095a0 Merge pull request #50947 from shyamjvs/clusterIpRange-ginkgo
Automatic merge from submit-queue (batch tested with PRs 51108, 51035, 50539, 51160, 50947)

Auto-calculate CLUSTER_IP_RANGE based on cluster size

In preparation for eliminating CLUSTER_IP_RANGE env var from job configs, making it less error prone while folks try to start their own large cluster tests (https://github.com/kubernetes/kubernetes/issues/50907).

/cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-08-24 02:32:14 -07:00
Anthony Yeh ce3fad326f
StatefulSet: Deflake e2e "restart" phase.
The test used to scale the StatefulSet down to 0, wait for ListPods to
return 0 matching Pods, and then scale the StatefulSet back up.

This was prone to a race in which StatefulSet was told to scale back up
before it had observed its own deletion of the last Pod, as evidenced by
logs showing the creation of Pod ss-1 prior to the creation of the
replacement Pod ss-0.

We now wait for the controller to observe all deletions before
scaling it back up. This should fix flakes of the form:

```
Too many pods scheduled, expected 1 got 2
```
2017-08-23 15:08:58 -07:00
Kubernetes Submit Queue a44e538dbc Merge pull request #51039 from enisoc/deflake-sts-saturate
Automatic merge from submit-queue

StatefulSet: Deflake e2e "Saturate" phase.

This should reduce one source of flakiness found while investigating #48031.

The "Saturate" phase of StatefulSet e2e tests verifies orderly startup by controlling when each Pod is allowed to report Ready. If a Pod unexepectedly goes down during the test, the replacement Pod
created by the controller will forget if it was already allowed to report Ready.

After this change, the signal that allows each Pod to report Ready is persisted in the Pod's PVC. Thus, the replacement Pod will remember that it was already told to proceed to a Ready state.
2017-08-22 21:13:13 -07:00
Kubernetes Submit Queue fdf14b8218 Merge pull request #50913 from shyamjvs/list-call-slo
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)

Increase latency threshold for list api calls

This is only a short-term solution to make our density test green. In the long-term, we should measure as per our new SLIs.
From @wojtek-t's [doc](https://docs.google.com/document/d/1Q5qxdeBPgTTIXZxdsFILg7kgqWhvOwY8uROEf0j5YBw) on the new SLIs/SLOs, we have the following SLO for list calls:

```
SLO1: In default Kubernetes installation, 99th percentile of SLI2 per cluster-day:
<= 1s if total number of objects of the same type as resource in the system <= X
<= 5s if total number of objects of the same type as resource in the system <= Y
<= 30s if total number of objects of the same types as resource in the system <= Z
```

I would guess that 170,000 pods would fall into the 2nd bracket (at least) and hence the new value of 5s. WDYT?

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-08-22 05:31:07 -07:00
Anthony Yeh 3bc7676024
StatefulSet: Deflake e2e "Saturate" phase.
The "Saturate" phase of StatefulSet e2e tests verifies orderly startup
by controlling when each Pod is allowed to report Ready.
If a Pod unexpectedly goes down during the test, the replacement Pod
created by the controller will forget whether it was already allowed to
report Ready.

After this change, the signal that allows each Pod to report Ready is
persisted in the Pod's PVC. Thus, the replacement Pod will remember that
it was already told to proceed to a Ready state.
2017-08-21 13:52:15 -07:00
Shyam Jeedigunta bacc01f729 Auto-calculate CLUSTER_IP_RANGE based on no. of nodes 2017-08-21 14:21:43 +02:00
Kubernetes Submit Queue b59ad9cbff Merge pull request #50146 from gmarek/deepcopyinto
Automatic merge from submit-queue (batch tested with PRs 46512, 50146)

Make metav1.(Micro)?Time functions take pointers

Is there any reason for those functions not to be on pointers?
2017-08-19 11:28:15 -07:00
Shyam Jeedigunta 70123e71bb Increase latency threshold for list api calls 2017-08-19 00:55:35 +02:00
Walter Fender cb28f0f34f Add e2e aggregator test.
What this PR does / why we need it:
This adds an e2e test for aggregation based on the sample-apiserver.
Currently it uses a sample-apiserver built as of 1.7.
This should ensure that the aggregation system works end-to-end.
It will also help detect if we break "old" extension api servers.

Which issue this PR fixes (optional, in fixes #<issue number> format):
fixes #43714
Fixed bazel for the change.
Fixed # of args issue from govet.
Added code to test dynamic.Client.
2017-08-17 10:56:43 -07:00
Kubernetes Submit Queue b67b0ad7eb Merge pull request #50768 from shyamjvs/fix-scheduler-metric-in-gke
Automatic merge from submit-queue (batch tested with PRs 50550, 50768)

Don't SSH to master for metrics in case of GKE

cc @kubernetes/sig-scalability-misc @crassirostris
2017-08-17 03:13:59 -07:00
gmarek 0504cfbc25 Make metav1.(Micro)?Time functions take pointers 2017-08-17 11:24:28 +02:00
Shyam Jeedigunta a938c000e3 Don't SSH to master for metrics in case of GKE 2017-08-16 15:24:50 +02:00
Michael Taufen 24bab4c20f move KubeletConfiguration out of componentconfig API group 2017-08-15 08:12:42 -07:00
ymqytw 7500b55ce4 move retry to client-go 2017-08-14 14:16:26 -07:00
Kubernetes Submit Queue f8aee18527 Merge pull request #50484 from sbezverk/ssh_local_ips
Automatic merge from submit-queue (batch tested with PRs 49904, 50484, 50214)

Adding support for internal IP for e2e tests

Currently IssueSSHCommand in util.go only checks for an external IP address
to SSH to; this PR adds a check for the internal IP too.
Closes #50630
2017-08-14 13:09:56 -07:00
Kevin f76ca1fb16 update clientset.Core() to clientset.CoreV1() in test 2017-08-14 16:53:55 +08:00
Serguei Bezverkhi f41457c151 Adding support for internal IP for e2e tests
Currently IssueSSHCommand in util.go only checks for an external IP address
to SSH to; this PR adds a check for the internal IP too.
2017-08-12 13:43:45 -04:00
Kubernetes Submit Queue b91f19180d Merge pull request #49025 from danwinship/non-cloud-node-ip
Automatic merge from submit-queue (batch tested with PRs 50537, 49699, 50160, 49025, 50205)

When not using a CloudProvider, set both InternalIP and ExternalIP on Nodes

#36095 changed all of the cloudproviders to set both InternalIP and ExternalIP on Nodes, but the non-cloudprovider fallback code now only sets InternalIP.

This causes the test "should be able to create a functioning NodePort service" in test/e2e/service.go to fail on cloud-provider-less clusters, because (with LegacyHostIP gone), it now will only try to work with ExternalIPs, and will fail if the node has only an InternalIP.

There isn't much other code that assumes that ExternalIP will always be set (there's something in pkg/master/master.go, but I don't know what it's doing, so maybe it's only useful in the case where InternalIP != ExternalIP anyway). But given that several of the cloudproviders (mesos, ovirt, rackspace) now explicitly set both InternalIP and ExternalIP to the same value always, it seemed right to do that in the fallback case too.

@deads2k FYI

**Release note**:
```release-note
NONE
```
2017-08-11 19:44:02 -07:00
Jeff Grafton a7f49c906d Use buildozer to delete licenses() rules except under third_party/ 2017-08-11 09:32:39 -07:00
Jeff Grafton 33276f06be Use buildozer to remove deprecated automanaged tags 2017-08-11 09:31:50 -07:00
Kubernetes Submit Queue 524a0e04c4 Merge pull request #50224 from xiangpengzhao/remove-beta-annotations
Automatic merge from submit-queue

Remove deprecated ESIPP beta annotations

**What this PR does / why we need it**:
Remove deprecated ESIPP beta annotations.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50187

**Special notes for your reviewer**:
/assign @MrHohn
/sig network

**Release note**:

```release-note
Beta annotations `service.beta.kubernetes.io/external-traffic` and `service.beta.kubernetes.io/healthcheck-nodeport` have been removed. Please use fields `service.spec.externalTrafficPolicy` and `service.spec.healthCheckNodePort` instead.
```
2017-08-10 22:55:54 -07:00
Kubernetes Submit Queue 3e8a25e818 Merge pull request #50008 from atlassian/meta-controller-ref
Automatic merge from submit-queue

Migrate to controller references helpers in meta/v1

**What this PR does / why we need it**:
This is a follow up for #48319 that migrates all method usages to new methods in meta/v1.

**Special notes for your reviewer**:
Looking at each commit individually might be easier.

**Release note**:
```release-note
NONE
```
/sig api-machinery
/kind cleanup
2017-08-10 17:07:30 -07:00
Kubernetes Submit Queue eb700d86c5 Merge pull request #50440 from bskiba/kubemark_e2e_open
Automatic merge from submit-queue (batch tested with PRs 45186, 50440)

Add functionality needed by Cluster Autoscaler to Kubemark Provider.

Make adding nodes asynchronous. Add method for getting target
size of node group. Add method for getting node group for node.
Factor out some common code.

**Release note**:
```
NONE
```
2017-08-10 07:31:01 -07:00
Beata Skiba 20a3756024 Add functionality needed by Cluster Autoscaler to Kubemark Provider.
Make adding nodes asynchronous. Add method for getting target
size of node group. Add method for getting node group for node.
Factor out some common code.
2017-08-10 14:37:56 +02:00
Aleksandra Malinowska 55682f2a55 add grabbing CA metrics in e2e tests 2017-08-10 11:22:45 +02:00
Kubernetes Submit Queue 7ef5cc23d1 Merge pull request #46582 from m1093782566/fix-ipt-hard-code
Automatic merge from submit-queue (batch tested with PRs 49642, 50335, 50390, 49283, 46582)

fix iptables mode hard code in e2e test

Fixes #46078
2017-08-10 00:53:28 -07:00
xiangpengzhao ea1a577358 Remove some helpers associated with ESIPP. 2017-08-09 14:25:08 +08:00
Kubernetes Submit Queue 8c4a269b83 Merge pull request #49771 from feiskyer/wait-for-failure
Automatic merge from submit-queue

Add waitForFailure for e2e test framework

**What this PR does / why we need it**:

Add waitForFailure for e2e test framework, this could reduce the reliance on logs.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: 

Part of #44118. Refer https://github.com/kubernetes/kubernetes/pull/48858#discussion_r128331726

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-08 20:56:51 -07:00
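A hedged sketch of what such a helper can look like with the 2017-era client-go `Get` signature (no context argument); the name and shape are assumptions, not the framework's actual API:

```go
package e2eutil

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodFailure polls until the pod reaches phase Failed, letting
// tests assert on pod status instead of scraping container logs.
func waitForPodFailure(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodFailed:
			return true, nil
		case v1.PodSucceeded:
			return false, fmt.Errorf("pod %s/%s succeeded, expected failure", ns, name)
		}
		return false, nil
	})
}
```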
Kubernetes Submit Queue c9d142d73d Merge pull request #49382 from bskiba/kubemark_e2e_nm
Automatic merge from submit-queue

Add a simple cloud provider for e2e tests on kubemark

**What this PR does / why we need it**:
Adds a simplified cloud provider for kubemark. This enables us to add and
remove nodes and operate on nodegroups while running tests on kubemark.

This is needed to run scalability tests for cluster autoscaler on kubemark.
See https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/kubemark_integration.md

**Release note**:
```
NONE
```
2017-08-08 08:50:07 -07:00
Kubernetes Submit Queue 0967f9560a Merge pull request #49168 from crimsonfaith91/apps-v1beta2
Automatic merge from submit-queue

StatefulSet scale subresource

**What this PR does / why we need it**: This PR implements scale subresource for StatefulSet.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46005

**Special notes for your reviewer**:

**Release note**:

```release-note
StatefulSet uses the scale subresource when scaling, in accordance with the ReplicationController, ReplicaSet, and Deployment implementations.
```
**Feature Checklist**:
- [x] Introduce Registry interface for storage purpose
- [x] Introduce `ScaleREST New(), Get() and Update()` utility functions
- [x] Create a `ScaleREST` object at `NewREST()` and return it
- [x] Enable scale subresource by adding `/scale` field to the storage map

**Testing Checklist**:
- Unit testing
  - [x] Modify `newStorage()` to call `NewStorage()`, and change all unit tests accordingly
  - [x] Add unit tests for `ScaleREST Get() and Update()` utility functions
  - [x] Add missing unit test for `ShortNames`

- Manual testing
  - [x] Verify existence of the subresource using `kubectl proxy` command
  - [x] Modify the subresource using `curl` via `POST`

- e2e testing
  - [x] Add e2e tests using `RESTClient`
2017-08-07 17:05:24 -07:00
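A sketch of the RESTClient-based access path from the testing checklist, assuming a 2017-era clientset where `Do()` takes no context; `getStatefulSetScale` is a hypothetical helper. Note it decodes into the apps/v1beta2 `Scale` variant, the same group-local Scale discussed in the scale-client commit above:

```go
package e2eutil

import (
	appsv1beta2 "k8s.io/api/apps/v1beta2"
	"k8s.io/client-go/kubernetes"
)

// getStatefulSetScale reads the new /scale subresource directly through
// the group's RESTClient, the way the e2e tests exercise it.
func getStatefulSetScale(c kubernetes.Interface, ns, name string) (*appsv1beta2.Scale, error) {
	scale := &appsv1beta2.Scale{}
	err := c.AppsV1beta2().RESTClient().Get().
		Namespace(ns).
		Resource("statefulsets").
		Name(name).
		SubResource("scale").
		Do().
		Into(scale)
	return scale, err
}
```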
Kubernetes Submit Queue 35eb03e3b4 Merge pull request #49524 from k82cn/k8s_49522
Automatic merge from submit-queue (batch tested with PRs 49524, 46760, 50206, 50166, 49603)

Handled taints on node in batch.

**What this PR does / why we need it**:
Enhanced helpers to handled taints on node in batch.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49522 

**Release note**:

```release-note
None
```
2017-08-07 13:51:54 -07:00
Jun Xiang Tee 91f100b501 implement statefulset scale subresource 2017-08-07 12:17:46 -07:00
Beata Skiba 2f747f3d3c Add a simple cloud provider for e2e tests on kubemark
This is needed for cluster autoscaler e2e test to
run on kubemark. We need the ability to add and
remove nodes and operate on nodegroups. Kubemark
does not provide this at the moment.
2017-08-07 16:31:02 +02:00
Kubernetes Submit Queue fddc7f3e50 Merge pull request #50243 from mborsz/node
Automatic merge from submit-queue (batch tested with PRs 50091, 50231, 50238, 50236, 50243)

Fix storage tests for multizone test configuration.

**What this PR does / why we need it**:
This PR modifies "[sig-storage] Volumes PD should be mountable with (ext3|ext4)" tests to schedule pods in the zone where the PD is created.
This is to make the test work in a multizone environment.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
```
2017-08-07 07:15:00 -07:00