Most of the changes that get the test to pass have been made already or
elsewhere. Here we restructure a bit, fixing a nesting problem, extend
the timeouts, and start creating distinct backend pods that I'll delete
in the non-local test (coming shortly).
Also added some extra debugging info in the DNS code. I made some upstream
changes to skydns in https://github.com/skynetservices/skydns/pull/283
Automatic merge from submit-queue
increase addon check interval
Do static pods have a crash loop back off? If so, this test would be much faster if we restarted the kubelet to clear that.
Fixes #26770
Automatic merge from submit-queue
Fix 7 broken example e2e tests
Fixes #27325, fixes #27727
7 broken example e2e tests:
- [x] Spark
* `namespace` is specified in the example yaml files, which conflicts with the e2e test namespaces; fixed by removing the namespace from the yaml (the yaml files of the [spark example](https://github.com/kubernetes/kubernetes/tree/master/examples/spark) don't need the namespace specified since it's specified in their context) -- cc @k82 who added the namespace to the Spark example in #23807
* wait for pods to exist before determining whether they're running
- [x] Hazelcast
* wait for pods to exist before determining whether they're running
- [x] Redis
* image `kubernetes/redis:v2` is not found; changed to `kubernetes/redis:v1` instead
* wait for pods to exist before determining whether they're running
- [x] Celery-RabbitMQ
* remove one redundant call to `forEachPod`
* wait for pods to exist before determining whether they're running
- [x] Cassandra
* fix `kubectl exec` being run against an incorrect pod name
* fix getting endpoint IP addresses before the pods are created
* wait for pods to exist before determining whether they're running
- [x] Storm
* wait for pods to exist before determining whether they're running
- [x] RethinkDB
* wait for pods to exist before determining whether they're running
Automatic merge from submit-queue
GCE provider: Limit Filter calls to regexps rather than insane blobs
Filters can't exceed 4k, and GET requests against the GCE API are also limited, so these break down in different ways at different cluster counts. Fix it by introducing an advisory `node-instance-prefix` configuration in the GCE provider that can hint the `EnsureLoadBalancer`/`UpdateLoadBalancer` code (and the firewall creation/update code). If it's not there, or wrong (a hostname that's registered violates it), just ignore it and grab the whole project.
Fixes #27731
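A minimal sketch of the advisory-prefix idea described above, with illustrative names and filter syntax rather than the provider's actual code: if every registered hostname matches the prefix, a short regexp filter is used; otherwise the prefix is ignored and the whole project is scanned.
```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// instanceFilter returns a GCE list filter expression limited to the advisory
// node instance prefix, or "" (meaning "no filter, scan the whole project")
// when any registered hostname violates the prefix.
func instanceFilter(nodeInstancePrefix string, hostnames []string) string {
	if nodeInstancePrefix == "" {
		return ""
	}
	for _, h := range hostnames {
		if !strings.HasPrefix(h, nodeInstancePrefix) {
			// The prefix is only advisory: if it turns out to be wrong, ignore it.
			return ""
		}
	}
	// A short regexp stays well under the 4k filter limit regardless of how
	// many instances the cluster has.
	return fmt.Sprintf("name eq %s.*", regexp.QuoteMeta(nodeInstancePrefix))
}

func main() {
	fmt.Println(instanceFilter("gke-cluster-1-node", []string{"gke-cluster-1-node-abc1", "gke-cluster-1-node-abc2"}))
	fmt.Println(instanceFilter("gke-cluster-1-node", []string{"some-other-host"}))
}
```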
Automatic merge from submit-queue
WaitForRunningReady also waits for PodsSuccess
Ref. #27095 - fixes the test, doesn't fix the problem.
cc @yujuhong @fejta
Automatic merge from submit-queue
Filter seccomp profile path from malicious .. and /
Without this patch, with `localhost/<some-relative-path>` as the seccomp profile, one can load any file on the host, e.g. `localhost/../../../../dev/mem`, which is not healthy for the kubelet.
/cc @jfrazelle
Unit tests depend on https://github.com/kubernetes/kubernetes/pull/26710.
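A minimal sketch of the kind of path check described above, with hypothetical names; the kubelet's real code differs.
```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// localhostProfilePath resolves a "localhost/<profile>" seccomp reference to a
// file under the seccomp profile root, rejecting anything that would escape
// that root via ".." or an absolute path. Illustrative sketch only.
func localhostProfilePath(profileRoot, profile string) (string, error) {
	name := strings.TrimPrefix(profile, "localhost/")
	if filepath.IsAbs(name) {
		return "", fmt.Errorf("seccomp profile %q must be a relative path", profile)
	}
	resolved := filepath.Join(profileRoot, name) // Join also cleans ".." segments
	if !strings.HasPrefix(resolved, profileRoot+string(filepath.Separator)) {
		return "", fmt.Errorf("seccomp profile %q escapes the profile root", profile)
	}
	return resolved, nil
}

func main() {
	fmt.Println(localhostProfilePath("/var/lib/kubelet/seccomp", "localhost/my-profile.json"))
	fmt.Println(localhostProfilePath("/var/lib/kubelet/seccomp", "localhost/../../../../dev/mem"))
}
```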
Automatic merge from submit-queue
Revert revert of downward api node defaults
Reverts the revert of https://github.com/kubernetes/kubernetes/pull/27439. Fixes #27062
@dchen1107 - who at Google can help debug why this caused issues with GKE infrastructure but not GCE merge queue?
/cc @wojtek-t @piosz @fgrzadkowski @eparis @pmorie
Automatic merge from submit-queue
Cleanups following #27587
- Add back the negative assertions, but mark them [Slow].
- Use the current DNS TTL of 180 sec as our timeout for all DNS tests.
- Assorted cleanups and refactoring.
Automatic merge from submit-queue
Extend ingress e2e
Splits the test into a cross-platform conformance list, and platform-specific bits that exercise features through annotations. Also exercises the features in https://github.com/kubernetes/contrib/pull/1133. Assigning to Girish, simply because I assigned the other PR to Minhan.
- Dropped the regex test and just test for nslookup exiting 0.
- Moved more setup into BeforeEach and used nested Context for non-local
case.
- Poll inside the container using a bash loop (see the sketch after this list).
- Aim for less console noise unless something goes wrong.
- Commented out the tests trying to verify that a DNS name is absent.
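A hedged sketch of how such a probe command might be assembled in Go for the test pod; the function name and the marker-file convention are illustrative, not the e2e test's actual helpers.
```go
package main

import "fmt"

// buildDNSProbeCmd returns a shell command that polls nslookup inside the test
// container until the name resolves (nslookup exits 0) or the loop gives up,
// writing a marker file on success so the test only has to check for the file.
func buildDNSProbeCmd(name, outFile string, attempts int) string {
	return fmt.Sprintf(
		"for i in $(seq 1 %d); do nslookup %s && echo OK > %s && break; sleep 1; done",
		attempts, name, outFile)
}

func main() {
	fmt.Println(buildDNSProbeCmd("kubernetes.default.svc.cluster.local", "/results/kubernetes.default", 600))
}
```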
Automatic merge from submit-queue
in each PD test, create and delete the pod for every iteration to give a new pod name for exec
Fixes #26141
Based on a chat with @ncdc
The following is a snapshot of the log. Each iteration now has a new pod name:
```text
[It] should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] [Flaky]
/srv/dev/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pd.go:277
STEP: creating PD1
Jun 10 15:55:45.878: INFO: Successfully created a new PD: "rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c".
STEP: creating PD2
Jun 10 15:55:49.794: INFO: Successfully created a new PD: "rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c".
Jun 10 15:55:49.794: INFO: PD Read/Writer Iteration #0
STEP: submitting host0Pod to kubernetes
W0610 15:55:49.860308 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: writing a file in the container
Jun 10 15:56:09.792: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '988876932586416926' > '/testpd1/tracker0''
Jun 10 15:56:12.003: INFO: Wrote value: "988876932586416926" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:12.003: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '8414937992264649637' > '/testpd2/tracker0''
Jun 10 15:56:13.170: INFO: Wrote value: "8414937992264649637" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:13.170: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:14.325: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:14.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-cd68f34b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:15.590: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: deleting host0Pod
Jun 10 15:56:15.841: INFO: PD Read/Writer Iteration #1
STEP: submitting host0Pod to kubernetes
W0610 15:56:15.905485 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:16.832: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:18.132: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:18.132: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:19.354: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:19.354: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7639503234625274799' > '/testpd1/tracker1''
Jun 10 15:56:20.526: INFO: Wrote value: "7639503234625274799" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:20.526: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '7400445987108171911' > '/testpd2/tracker1''
Jun 10 15:56:21.694: INFO: Wrote value: "7400445987108171911" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:21.694: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:22.904: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:22.905: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:24.080: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:24.081: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:25.290: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:25.290: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-dcef71e1-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:26.491: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: deleting host0Pod
Jun 10 15:56:26.756: INFO: PD Read/Writer Iteration #2
STEP: submitting host0Pod to kubernetes
W0610 15:56:26.821828 17282 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
Jun 10 15:56:27.898: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:29.096: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:29.096: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:30.325: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:30.325: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:31.528: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:31.529: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:32.972: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: writing a file in the container
Jun 10 15:56:32.972: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '1846555975530999997' > '/testpd1/tracker2''
Jun 10 15:56:34.157: INFO: Wrote value: "1846555975530999997" to PD1 ("rootfs-e2e-c8b82df9-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
Jun 10 15:56:34.157: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '2775947264799611726' > '/testpd2/tracker2''
Jun 10 15:56:35.661: INFO: Wrote value: "2775947264799611726" to PD2 ("rootfs-e2e-cb135362-2f23-11e6-a5a0-b8ca3a62792c") from pod "pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
Jun 10 15:56:35.662: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
Jun 10 15:56:36.868: INFO: Read file "/testpd1/tracker0" with content: 988876932586416926
STEP: reading a file in the container
Jun 10 15:56:36.868: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
Jun 10 15:56:38.062: INFO: Read file "/testpd2/tracker0" with content: 8414937992264649637
STEP: reading a file in the container
Jun 10 15:56:38.062: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker1'
Jun 10 15:56:39.221: INFO: Read file "/testpd1/tracker1" with content: 7639503234625274799
STEP: reading a file in the container
Jun 10 15:56:39.221: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker1'
Jun 10 15:56:40.397: INFO: Read file "/testpd2/tracker1" with content: 7400445987108171911
STEP: reading a file in the container
Jun 10 15:56:40.397: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker2'
Jun 10 15:56:41.584: INFO: Read file "/testpd1/tracker2" with content: 1846555975530999997
STEP: reading a file in the container
Jun 10 15:56:41.585: INFO: Running '/srv/dev/kubernetes/_output/local/bin/linux/amd64/kubectl exec --namespace=e2e-tests-pod-disks-2tvm2 pd-test-e370dd2b-2f23-11e6-a5a0-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker2'
Jun 10 15:56:42.800: INFO: Read file "/testpd2/tracker2" with content: 2775947264799611726
STEP: deleting host0Pod
```
@saad-ali
Automatic merge from submit-queue
Dumping logs of federation pods (federation-apiserver, federation-controller-manager) on e2e test failure
Ref https://github.com/kubernetes/kubernetes/issues/26762
This should help with debugging failures.
Right now there is no way to access those logs.
@kubernetes/sig-cluster-federation @colhom
Automatic merge from submit-queue
Call NewFramework constructor instead of hand creating framework.
Addresses https://github.com/kubernetes/kubernetes/issues/27486, which probably happened because we defined a new clientConfigGetter for node e2es and this test was hand-creating the framework.
Automatic merge from submit-queue
Kubelet Volume Attach/Detach/Mount/Unmount Redesign
This PR redesigns the Volume Attach/Detach/Mount/Unmount in Kubelet as proposed in https://github.com/kubernetes/kubernetes/issues/21931
```release-note
A new volume manager was introduced in kubelet that synchronizes volume mount/unmount (and attach/detach, if attach/detach controller is not enabled).
This eliminates the race conditions between the pod creation loop and the orphaned volumes loops. It also removes the unmount/detach from the `syncPod()` path so volume clean up never blocks the `syncPod` loop.
```
Automatic merge from submit-queue
federation: choosing a default federation name in test instead of failing
The tests are failing right now:
http://kubekins.dls.corp.google.com/job/kubernetes-e2e-gce-federation/
```
[k8s.io] Service [Feature:Federation] should be able to discover a non-local federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:35.091: FEDERATION_NAME environment variable must be set
[k8s.io] Service [Feature:Federation] should be able to discover a federated service
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/federated-service.go:130 Jun 14 12:40:40.802: FEDERATION_NAME environment variable must be set
```
This is to fix them.
cc @kubernetes/sig-cluster-federation @mml
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if attach/detach controller
is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes the unmount/detach
from the `syncPod()` path so volume clean up never blocks the
`syncPod` loop.
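A toy sketch of the desired-state/actual-state reconciliation idea behind the volume manager; all types here are illustrative stand-ins, not the kubelet's real ones.
```go
package main

import (
	"fmt"
	"time"
)

// volumeState is a toy stand-in for the volume manager's "desired state of
// world" and "actual state of world" caches: the pod sync loop only records
// what it wants, and a separate reconciler makes the actual state match.
type volumeState struct {
	desired map[string]bool // volumes that should be mounted
	actual  map[string]bool // volumes that are currently mounted
}

func (s *volumeState) reconcileOnce() {
	for vol := range s.desired {
		if !s.actual[vol] {
			fmt.Println("mounting", vol)
			s.actual[vol] = true
		}
	}
	for vol := range s.actual {
		if !s.desired[vol] {
			fmt.Println("unmounting", vol) // clean-up no longer happens in syncPod()
			delete(s.actual, vol)
		}
	}
}

func main() {
	s := &volumeState{desired: map[string]bool{"pd-1": true}, actual: map[string]bool{"orphan-0": true}}
	for i := 0; i < 2; i++ {
		s.reconcileOnce()
		time.Sleep(10 * time.Millisecond)
	}
}
```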
Automatic merge from submit-queue
Make timeout for starting system pods configurable
Context: in 2000-node clusters (if only one node is big enough to fit heapster, which is our testing configuration), heapster won't be scheduled until that node has a route. However, creating routes is pretty expensive and can currently take as long as 2 hours.
@zmerlynn @gmarek
The number of pods to start must be non-zero.
Otherwise the function waits for pods forever if waitForRunning is true.
If the number of replicas is zero, panic so the mistake is heard all over the e2e realm.
Update all callers of StartPods to test for a non-zero number of replicas.
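A minimal sketch of the guard described above; the signature is simplified and hypothetical, since the real e2e helper also takes a client and a pod template.
```go
package main

import "fmt"

// StartPods is sketched here only to show the zero-replica guard; the real
// e2e helper creates the pods through the API and can wait for them.
func StartPods(replicas int, ns string, waitForRunning bool) {
	if replicas == 0 {
		// Waiting for zero pods would block forever, so fail loudly instead.
		panic(fmt.Sprintf("StartPods: asked to start 0 pods in namespace %q", ns))
	}
	fmt.Printf("starting %d pods in %q (waitForRunning=%v)\n", replicas, ns, waitForRunning)
	// ... create the pods and, if waitForRunning, wait for them to be running ...
}

func main() {
	StartPods(3, "e2e-tests-example", true)
}
```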
Automatic merge from submit-queue
Updating federation up scripts to work in non e2e setup
Ref: https://github.com/kubernetes/kubernetes.github.io/pull/656
Updating the federation up scripts so that they work as per steps in https://github.com/kubernetes/kubernetes.github.io/pull/656.
Changes are:
* Updating the default namespace to be "federation" instead of "federation-e2e"
* Updating the kubeconfig context to be named "federation-cluster" instead of "federated-context"
* Fixing federation-up so that FEDERATION_IMAGE_TAG is set even when federation-up is run without running `e2e.go --up`. e2e-up.sh sets it here: 6a388d4a0d/hack/e2e-internal/e2e-up.sh (L44).
* Adding a "missingkey=zero" option to template parser. Without this, the parser adds `"<no value>"` at the place of an env var that is not set. With this change, it instead replaces it with the corresponding zero value (for ex "" for strings). This is required for the FEDERATION_DNS_PROVIDER_CONFIG env var.
cc @kubernetes/sig-cluster-federation @colhom @mml
Automatic merge from submit-queue
Implement first set of federated service e2e tests.
These tests are untested and there is no guarantee that they work. The ongoing auth problems are blocking these e2es from being tested, and upon @quinton-hoole's request I am submitting them now.
Only the last commit here needs review.
Depends on #26953
cc @nikhiljindal @colhom @mfanjie @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
e2e: actually check error when we fail to GET the scheduled pod
Also GET the pod before we try to gracefully delete it, and add a debug log.
ref #26224
Automatic merge from submit-queue
Port the downward api test to the node e2e suite
Also extend the framework to allow a custom client config loading function, so
that the node e2e suite can reuse the same framework across tests.
This fixes #26609
/cc @timstclair @pwittrock
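A hedged sketch of what a pluggable client-config loader on the framework could look like; all names below are illustrative, not the framework's actual fields.
```go
package main

import "fmt"

// config is a stand-in for the rest client configuration the framework builds.
type config struct{ host string }

// loadConfigFunc lets a suite plug in its own way of building the client
// config; the node e2e suite can supply one that points at its local endpoint.
type loadConfigFunc func() (*config, error)

type framework struct {
	baseName   string
	loadConfig loadConfigFunc
}

func newFramework(baseName string, load loadConfigFunc) *framework {
	if load == nil {
		// Default: the regular cluster e2e behavior.
		load = func() (*config, error) { return &config{host: "https://cluster-endpoint"}, nil }
	}
	return &framework{baseName: baseName, loadConfig: load}
}

func main() {
	f := newFramework("downward-api", func() (*config, error) {
		return &config{host: "http://127.0.0.1:8080"}, nil // node e2e talks to a local endpoint
	})
	cfg, _ := f.loadConfig()
	fmt.Println(f.baseName, "->", cfg.host)
}
```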
Automatic merge from submit-queue
Add e2e test for node problem detector
Based on https://github.com/kubernetes/node-problem-detector/pull/12.
This PR added an e2e test for the node problem detector. It starts a node problem detector with a test configuration and tests its functionality.
Question:
* Should this be a node e2e test or an e2e test?
* Should we mark this as Serial? The test should not affect other tests, but it will add a `TestConditon` to the node conditions and remove it in cleanup.
@dchen1107
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Listing pods only once when getting pods for RS in deployment
Fixes #26834
1. Avoid ranging over RSes and then `List`ing pods of each RS. Instead, `List` pods of the deployment once, and then filter the pods of each RS (see the sketch below).
2. Avoid using clientset to `List` pods in deployment controller. Use podStore instead. (TODO in some functions because the unit tests don't have podStore.)
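A toy sketch of the list-once-then-filter approach from point 1; the types and selectors are simplified stand-ins for the real API objects.
```go
package main

import "fmt"

type pod struct {
	name   string
	labels map[string]string
}

// matches reports whether a pod's labels satisfy an equality-based selector.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// podsPerRS buckets one shared pod list (listed once for the whole deployment)
// into per-ReplicaSet groups instead of issuing one List call per ReplicaSet.
func podsPerRS(allPods []pod, rsSelectors map[string]map[string]string) map[string][]pod {
	out := map[string][]pod{}
	for rs, sel := range rsSelectors {
		for _, p := range allPods {
			if matches(sel, p.labels) {
				out[rs] = append(out[rs], p)
			}
		}
	}
	return out
}

func main() {
	pods := []pod{
		{"web-1a", map[string]string{"app": "web", "pod-template-hash": "1a"}},
		{"web-2b", map[string]string{"app": "web", "pod-template-hash": "2b"}},
	}
	selectors := map[string]map[string]string{
		"web-rs-1a": {"app": "web", "pod-template-hash": "1a"},
		"web-rs-2b": {"app": "web", "pod-template-hash": "2b"},
	}
	fmt.Println(podsPerRS(pods, selectors))
}
```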
@kubernetes/deployment
Automatic merge from submit-queue
support for mounting local-ssds on GCI
This change adds support for mounting local ssds on GCI.
It updates the previous container-vm behavior as well to
match that for GCI nodes by mounting the local-ssds under
the same path (/mnt/disks/ssdN).
@vulpecula @roberthbailey @andyzheng0831 @kubernetes/goog-image
Automatic merge from submit-queue
Move quota usage testing for loadbalancers into unit tests
Fixes https://github.com/kubernetes/kubernetes/issues/26319
* moved testing for node port and load balancer usage in quota to unit tests
* remove node port and node port -> loadbalancer service testing out of e2e
  * covered already in the replenishment_controller_test scenario
Given the time it takes to even allocate a load balancer, it seems better to test that outside of this test case to avoid unnecessary flakes.
/cc @bprashanth
Automatic merge from submit-queue
Flake 26210: decouple explicit access from port 80
Flake #26210 only happens for port 80. To decouple the possible causes, all
tests with explicit port 80 are moved to port 1080 (these were 80% of the flakes).
The urls without a specified port (which map to port 80 though) are left untouched.
If port 1080 does not show up as a flake now, there is really a connection to the
actual port number.
Now that GCE routes take an extremely long time to come up and there's
a variance in "Ready" and "Schedulable", start cherry-picking tests
where we really want to have all nodes routable/schedulable for
testing. Adding logging. This will increase test times on large
clusters but should have 0 impact on normal testing.
The image prepulling pod calls docker directly to pull images. If the pod
hasn't finished before running the resource usage tracking test, there'd be a
cpu spike in docker. We'd rather wait and fail if this is the case, before
running the test.
Automatic merge from submit-queue
Don't allow deps with no discernible license
This updates the few deps we had with no LICENSE file to current versions that do have that file. It also disallows new deps without obvious licenses.
Automatic merge from submit-queue
Disable PodAffinity SchedulerPredicates test
This feature is disabled, so it's not surprising that tests don't work.
cc @davidopp @kevin-wangzefeng
@david-mcmahon - this disables the second test that causes failures in SchedulerPredicates suite. When this and #26695 are merged it should be passing in serial.
Automatic merge from submit-queue
Revert revert of adding resource constraints for master components in density tests
The problem was the time when resource constraints were generated. It turns out that the provider is not set there. This version should work.
cc @roberthbailey @alex-mohr
Automatic merge from submit-queue
Add direct serializer
Fix #25589. Implemented a direct codec that doesn't do conversion, but sets the group, version and kind before serialization as Clayton suggested [here](https://github.com/kubernetes/kubernetes/issues/25589#issuecomment-219168009).
First commit is cherry-picked from #24826.
@kubernetes/sig-api-machinery
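A toy sketch of the idea: stamp the group/version/kind onto the object and serialize it as-is, with no conversion step. The types below are illustrative, not the real runtime codec machinery.
```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeMeta mirrors the apiVersion/kind fields that a direct codec stamps onto
// the object before serializing.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

type replicaSet struct {
	typeMeta
	Name     string `json:"name"`
	Replicas int32  `json:"replicas"`
}

// directEncode sets group/version/kind and serializes the object as-is,
// skipping the usual conversion to a different (e.g. internal) version.
func directEncode(obj *replicaSet, gv, kind string) ([]byte, error) {
	obj.APIVersion = gv
	obj.Kind = kind
	return json.Marshal(obj)
}

func main() {
	out, err := directEncode(&replicaSet{Name: "web", Replicas: 3}, "extensions/v1beta1", "ReplicaSet")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```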
Automatic merge from submit-queue
kubelet e2e: bumping cpu limit
The previous limit was too aggressive and caused kubernetes-e2e-gce-serial build 1404 to fail.
Automatic merge from submit-queue
Change Kubelet test to run also in large clusters
This should fix the recent failure of the kubelet test in the 2000-node cluster.
@zmerlynn - FYI
Automatic merge from submit-queue
SSH e2e: Limit to 100 nodes, limit combinatorics
This limits the "for all hosts" to 100 nodes, and also limits the
combinatorial section so that we only do the other SSH command variant
testing on the first host rather than *all* of the hosts. I also
killed one of the variants because it didn't seem to be testing anything
important.
Fixes #26600
Automatic merge from submit-queue
kubelet e2e: set cpu/memory limits for docker 1.11
Docker 1.11 consumes more memory. Bump the limit to fix the tests. Also add
new limits for the 100-pod resource usage tracking test.
This fixes #26495
Automatic merge from submit-queue
Fix some gce-only tests to run on gke as well
Enable "Services should work after restarting apiserver [Disruptive]" and DaemonRestart tests, except the 2 that require master ssh access.
Move restart/upgrade related test helpers into their own file in framework package.
Automatic merge from submit-queue
Use pause image depending on the server's platform when testing
Removed all pause image constant strings; the pause image is now chosen by arch. Part of the effort of making e2e arch-agnostic.
The pause image name and version are also now in only two places, and it's documented that both must be bumped.
Also removed "amd64" constants in the code. Such constants should be replaced by `runtime.GOARCH` or by looking up the server platform.
Fixes: #22876 and #15140
Makes it easier for: #25730
Related: #17981
This is for `v1.3`
@ixdy @thockin @vishh @kubernetes/sig-testing @andyzheng0831 @pensu
Automatic merge from submit-queue
Flake 21484: retrieve pod log during e2e error
Print the pod log when an error occurs in
> Proxy version 1 should proxy through a service and a pod [Conformance]
e2e test. This will help to understand flake https://github.com/kubernetes/kubernetes/issues/21484 better.
Many tests expect all kube-system pods to be running and ready. The newly
added image prepull add-on pod can be in the "succeeded" state. This commit fixes
the tests to allow kube-system pods to be in the succeeded state.
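A minimal sketch of the relaxed check, using toy types rather than the real e2e framework helpers.
```go
package main

import "fmt"

// podPhase is a toy stand-in for the API's pod phase.
type podPhase string

const (
	podRunning   podPhase = "Running"
	podSucceeded podPhase = "Succeeded"
	podPending   podPhase = "Pending"
)

// countsAsReady reports whether a kube-system pod should satisfy the
// "running and ready" check: a pod that ran to completion (such as the image
// prepull add-on) is fine, it just isn't running any more.
func countsAsReady(phase podPhase, ready bool) bool {
	return phase == podSucceeded || (phase == podRunning && ready)
}

func main() {
	fmt.Println(countsAsReady(podSucceeded, false)) // true: completed add-on pod
	fmt.Println(countsAsReady(podPending, false))   // false: still waiting
}
```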
Automatic merge from submit-queue
Downward API implementation for resources limits and requests
This is an implementation of the Downward API for resource limits and requests, and it works with environment variables and the volume plugin.
This is based on proposal https://github.com/kubernetes/kubernetes/pull/24051. This implementation follows API with magic keys approach as discussed in the proposal.
@kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Prepull images in e2e
Quick and dirty image puller because the SQ stalled multiple times just *today* on image pull flake (https://github.com/kubernetes/kubernetes/issues/25277).
@kubernetes/sig-node @kubernetes/sig-testing wdyt?
Automatic merge from submit-queue
in e2e test, when kubectl exec fails to find the container to run a command, it should retry
Fixes #26076
Without retrying upon a "container not found" error, the `Pod Disks` test failed with the following error:
```console
[k8s.io] Pod Disks
should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:271
[BeforeEach] [k8s.io] Pod Disks
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:108
STEP: Creating a kubernetes client
May 23 19:18:02.254: INFO: >>> TestContext.KubeConfig: /root/.kube/config
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pod Disks
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:69
[It] should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:271
STEP: creating PD1
May 23 19:18:06.678: INFO: Successfully created a new PD: "rootfs-e2e-11dd5f5b-211b-11e6-a3ff-b8ca3a62792c".
STEP: creating PD2
May 23 19:18:11.216: INFO: Successfully created a new PD: "rootfs-e2e-141f062d-211b-11e6-a3ff-b8ca3a62792c".
May 23 19:18:11.216: INFO: PD Read/Writer Iteration #0
STEP: submitting host0Pod to kubernetes
W0523 19:18:11.279910 4984 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c: need to check if this is versioned correctly.
STEP: writing a file in the container
May 23 19:18:39.088: INFO: Running '/srv/dev/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://130.211.199.187 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-disks-3t3g8 pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '1394466581702052925' > '/testpd1/tracker0''
May 23 19:18:40.250: INFO: Wrote value: "1394466581702052925" to PD1 ("rootfs-e2e-11dd5f5b-211b-11e6-a3ff-b8ca3a62792c") from pod "pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c" container "mycontainer"
STEP: writing a file in the container
May 23 19:18:40.251: INFO: Running '/srv/dev/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://130.211.199.187 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-disks-3t3g8 pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c -c=mycontainer -- /bin/sh -c echo '1740704063962701662' > '/testpd2/tracker0''
May 23 19:18:41.433: INFO: Wrote value: "1740704063962701662" to PD2 ("rootfs-e2e-141f062d-211b-11e6-a3ff-b8ca3a62792c") from pod "pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c" container "mycontainer"
STEP: reading a file in the container
May 23 19:18:41.433: INFO: Running '/srv/dev/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://130.211.199.187 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-disks-3t3g8 pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
May 23 19:18:42.585: INFO: Read file "/testpd1/tracker0" with content: 1394466581702052925
STEP: reading a file in the container
May 23 19:18:42.585: INFO: Running '/srv/dev/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://130.211.199.187 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-disks-3t3g8 pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c -c=mycontainer -- cat /testpd2/tracker0'
May 23 19:18:43.779: INFO: Read file "/testpd2/tracker0" with content: 1740704063962701662
STEP: deleting host0Pod
May 23 19:18:44.048: INFO: PD Read/Writer Iteration #1
STEP: submitting host0Pod to kubernetes
W0523 19:18:44.132475 4984 request.go:347] Field selector: v1 - pods - metadata.name - pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c: need to check if this is versioned correctly.
STEP: reading a file in the container
May 23 19:18:45.186: INFO: Running '/srv/dev/kubernetes/_output/dockerized/bin/linux/amd64/kubectl kubectl --server=https://130.211.199.187 --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-pod-disks-3t3g8 pd-test-16d3653c-211b-11e6-a3ff-b8ca3a62792c -c=mycontainer -- cat /testpd1/tracker0'
May 23 19:18:46.290: INFO: error running kubectl exec to read file: exit status 1
stdout=
stderr=error: error executing remote command: error executing command in container: container not found ("mycontainer")
)
May 23 19:18:46.290: INFO: Error reading file: exit status 1
May 23 19:18:46.290: INFO: Unexpected error occurred: exit status 1
```
I've now run the e2e PD test with this fix 5 times and no longer see any failures.
Automatic merge from submit-queue
E2e tests for GKE cluster with local SSD.
The test covers creating a node pool with local SSD and scheduling a pod that writes to and reads from it. The pod accesses the local disk via hostPath.
```release-note
E2e tests for GKE cluster with local SSD.
```
Automatic merge from submit-queue
Extend secrets volumes with path control
As per [1], this PR extends secrets mapped into a volume with:
* key-to-path mapping, the same way as for configmap, e.g.:
```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "default"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "items": [{
          "key": "username",
          "path": "my-username"
        }]
      }
    }]
  }
}
```
Here ``spec.volumes[0].secret.items`` is added, changing the original target ``/etc/foo/username`` to ``/etc/foo/my-username``.
* secondly, refactoring the ``pkg/volumes/secrets/secrets.go`` volume plugin to use ``AtomicWritter`` to project a secret into a file.
[1] https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md#changes-to-secret
I removed the netexec and goproxy pods from the proxy exec test. Instead,
it now runs kubectl locally and the proxy runs in-process. Since
Go won't proxy for localhost requests, this test cannot pass if the API
server is local. However, it was already disabled for local clusters.
Automatic merge from submit-queue
SchedulerPredicates e2e test: be more verbose about requested resource
When the ``validates resource limits of pods that are allowed to run [Conformance]`` test is run, the logs could give more information about the requested resource and say that it is for CPU and in milli units.
CPU is stored in milli units here:
```
nodeToCapacityMap[node.Name] = capacity.MilliValue()
```
The key-to-path mapping allows a pod to specify a different name (and thus location) for each secret.
At the same time, refactor the volume plugin to use AtomicWritter to project secrets to files in a volume.
Update the e2e Secrets test: the secret file permission has changed from 0444 to 0644.
Remove TestPluginIdempotent, as the AtomicWritter is responsible for secret creation.
Automatic merge from submit-queue
Add init containers to pods
This implements #1589 as per proposal #23666
Incorporates feedback on #1589, creates parallel structure for InitContainers and Containers, adds validation for InitContainers that requires name uniqueness, and comments on a number of implications of init containers.
This is a complete alpha implementation.
Automatic merge from submit-queue
[e2e] kubectl stdin
Problem: Currently kubectl heavily relies on files which have to be (for lack of a better word :):):)) "written" to the file system. This hinders adoption of something like gobindata, by forcing an intermediary generated-assets directory type thing.
Solution: Lets migrate `kubectl.go` testing over to using standard input streams.
cc @kubernetes/sig-testing @timothysc
Automatic merge from submit-queue
Add pod condition PodScheduled to detect situation when scheduler tried to schedule a Pod, but failed
Set the `PodScheduled` condition to `ConditionFalse` in `scheduleOne()` if scheduling failed and to `ConditionTrue` in the `/bind` subresource.
Ref #24404
@mml (as it seems to be related to "why pending" effort)
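A toy sketch of recording the scheduling outcome as a pod condition; the types are illustrative stand-ins, and the real scheduler writes this through the API server's status and bind paths.
```go
package main

import "fmt"

type condStatus string

const (
	conditionTrue  condStatus = "True"
	conditionFalse condStatus = "False"
)

// podCondition is a toy stand-in for the API's pod condition type.
type podCondition struct {
	Type   string
	Status condStatus
	Reason string
}

// schedulingResultCondition is what scheduleOne would record: False with the
// failure reason when no node fits, True once binding succeeds.
func schedulingResultCondition(err error) podCondition {
	if err != nil {
		return podCondition{Type: "PodScheduled", Status: conditionFalse, Reason: err.Error()}
	}
	return podCondition{Type: "PodScheduled", Status: conditionTrue}
}

func main() {
	fmt.Println(schedulingResultCondition(fmt.Errorf("no nodes available to schedule pods")))
	fmt.Println(schedulingResultCondition(nil))
}
```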
Automatic merge from submit-queue
Move internal types of hpa from pkg/apis/extensions to pkg/apis/autoscaling
ref #21577
@lavalamp could you please review or delegate to someone from CSI team?
@janetkuo could you please take a look into the kubelet changes?
cc @fgrzadkowski @jszczepkowski @mwielgus @kubernetes/autoscaling
Automatic merge from submit-queue
Upgrades tests use chaosmonkey package and ServiceTestJig
Introduce the `chaosmonkey` e2e package for doing disruptive testing (e.g. upgrade testing) more easily.
- [x] `chaosmonkey` package
- [x] migrate upgrade tests to `chaosmonkey` (using WIP `serviceJig`)
- [x] migrate upgrade tests to use `ServiceTestJig` and `chaosmonkey`
Deferred:
- [ ] make `ServiceTestJig` implement `chaosmonkey.Interface`
- [ ] migrate disruptive services tests to use `ServiceTestJig` and `chaosmonkey`
This provides the extensible framework for #15131. We should now easily be able to add tests (e.g. #6084, #23189).
This is a rewrite of #22446.
cc @mikedanese @quinton-hoole @roberthbailey
Assigning to @thockin, who wrote `ServiceTestJig`.
Automatic merge from submit-queue
Kubelet: Add docker operation timeout
For #23563.
Based on #24748, only the last 2 commits are new.
This PR:
1) Add a timeout for all docker operations (a minimal sketch follows after this list).
2) Add docker operation timeout metrics.
3) Clean up kubelet stats and add runtime operation error and timeout rate monitoring.
4) Monitor runtime operation error and timeout rate in kubelet perf.
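A hedged sketch of wrapping an operation with a hard deadline, using only the standard library; the real dockertools wiring is more involved.
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// withTimeout wraps a runtime operation with a hard deadline so a hung docker
// call can't stall the caller forever; the result can then be counted as an
// error or a timeout for operation metrics.
func withTimeout(d time.Duration, op func(ctx context.Context) error) error {
	ctx, cancel := context.WithTimeout(context.Background(), d)
	defer cancel()

	done := make(chan error, 1)
	go func() { done <- op(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return fmt.Errorf("operation timed out after %v: %v", d, ctx.Err())
	}
}

func main() {
	err := withTimeout(50*time.Millisecond, func(ctx context.Context) error {
		select {
		case <-time.After(time.Second): // pretend this is a slow docker pull
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println(err)
}
```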
@yujuhong
/cc @gmarek Because of the metrics change.
/cc @kubernetes/sig-node
Automatic merge from submit-queue
test/e2e/addon_update: Respect KUBE_SSH_USER
This change makes the e2e tests more consistent as the other ones all already respected this variable.
I didn't do the larger change of re-factoring it to use `framework.SSH` because getting the `scp` portion working there has some significant complexity. If I find time, I'd like to go back and do it since this test needs a little ❤️
cc @yifan-gu, @kubernetes/sig-testing
Automatic merge from submit-queue
kubectl describe: show multiple labels/annotations on multiple lines
Small UX improvement: when there is more than one label/annotation, it's more readable to see them on separate lines.
Before:
```console
$ kubectl describe svc
Name: s2i-test
Namespace: test2
Labels: app=s2i-test,foo=bar
...
```
After:
```console
$ kubectl describe svc
Name: s2i-test
Namespace: test2
Labels: app=s2i-test
foo=bar
...
```
This change affects output of the labels/annotations in many of the sub-commands of the `kubectl describe`.
PTAL @smarterclayton @kargakis
Automatic merge from submit-queue
kubectl: more sophisticated pod selection for logs and attach
Trying to get the logs or attach to an object other than a pod
will poll forever if that object has no replicas. This commit adds
a 20s timeout for polling.
@kubernetes/kubectl @deads2k @fabianofranz
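A minimal sketch of bounded polling with a 20s-style timeout; the helper below is illustrative, not kubectl's actual code.
```go
package main

import (
	"fmt"
	"time"
)

// firstPodWithin polls for a pod belonging to the object (e.g. an RC or a
// deployment) and gives up after the timeout instead of polling forever when
// the object has no replicas.
func firstPodWithin(timeout, interval time.Duration, listPods func() []string) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		if pods := listPods(); len(pods) > 0 {
			return pods[0], nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %v waiting for at least one pod", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	// An object scaled to zero replicas never produces a pod, so this returns
	// an error once the deadline passes (kubectl's deadline is 20s).
	_, err := firstPodWithin(2*time.Second, 200*time.Millisecond, func() []string { return nil })
	fmt.Println(err)
}
```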
Automatic merge from submit-queue
Reimplement 'pause' in C - smaller footprint all around
Statically links against musl. Size of amd64 binary is 3560 bytes.
I couldn't test the arm binary since I have no hardware to test it on, though I assume we want it to work on a raspberry pi.
This PR also adds the gcc5/musl cross compiling image used to build the binaries.
@thockin
Automatic merge from submit-queue
Add subPath to mount a child dir or file of a volumeMount
Allow users to specify a subPath in Container.volumeMounts so they can use a single volume for many mounts instead of creating many volumes. For instance, a user can now use a single PersistentVolume to store both the MySQL database and the document root of an Apache server in a LAMP stack pod by mapping them to different subPaths in this single volume.
Also solves https://github.com/kubernetes/kubernetes/issues/20466.
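A hedged sketch of the feature using today's Go API types and import paths (the PR itself predates these paths); two containers share one claim via different subPaths. The claim name, images, and paths are only examples.
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// lampPod builds a LAMP-style pod in which two containers mount different
// subPaths of the same PersistentVolumeClaim, so the MySQL data and the
// Apache document root live in one volume.
func lampPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "lamp"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name: "site-data",
				VolumeSource: v1.VolumeSource{
					PersistentVolumeClaim: &v1.PersistentVolumeClaimVolumeSource{ClaimName: "site-data"},
				},
			}},
			Containers: []v1.Container{
				{
					Name:  "mysql",
					Image: "mysql",
					VolumeMounts: []v1.VolumeMount{
						{Name: "site-data", MountPath: "/var/lib/mysql", SubPath: "mysql"},
					},
				},
				{
					Name:  "php",
					Image: "php:apache",
					VolumeMounts: []v1.VolumeMount{
						{Name: "site-data", MountPath: "/var/www/html", SubPath: "html"},
					},
				},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", lampPod().Spec.Containers[0].VolumeMounts)
}
```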