Automatic merge from submit-queue
Fix killing child sudo process in e2e_node tests
Fixes #29211.
The context is that we are trying to kill a process started as `sudo kube-apiserver`, but `sudo` ignores signals from the same process group. Applying `Setpgid` means the `sudo kill` process won't be in the same process group, so it will not fall foul of this nifty feature.
I also took the liberty of removing some code setting `Pdeathsig`, because it claims to be doing something in the same area but actually doesn't. The setting is applied to the forked process, i.e. `sudo`, and it means `sudo` will get killed if we (`e2e_node.test`) die. This (a) isn't what the comment says, and (b) doesn't help, because sending SIGKILL to the sudo process leaves sudo's child alive.
I didn't use the "hack for linux-only" approach because I think `Setpgid` is available on all platforms that `e2e_node` builds on.
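For illustration, a minimal sketch of the fix, assuming a Unix platform (the helper name is mine, not the actual test code):
```
package e2enode

import (
	"os/exec"
	"strconv"
	"syscall"
)

// killSudoProcess sends a signal to a process started as `sudo kube-apiserver`.
// sudo ignores signals arriving from its own process group, and a naively
// spawned `sudo kill` would share our (and the server's) process group.
// Setpgid puts the kill command into its own group, so sudo honors the signal.
func killSudoProcess(pid int) error {
	cmd := exec.Command("sudo", "kill", strconv.Itoa(pid))
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	return cmd.Run()
}
```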
Automatic merge from submit-queue
Faster test
In attempting to troubleshoot flakes with this test case I actually wanted to understand how it worked.
There were some poor comments that needed work.
I added some additional output which may or may not help in debugging the flakes.
I doubt this fixes the flake.
My major concern is the 'refactor' I did of the test case to batch up runs by sub-test-case. As it stood, there was a 200ms pause between each sub-case, so they should not have interfered with each other. Now they are started as fast as possible, but only 20 run at a time before moving on to the next 20. I am not sure if I am violating the ethos of the original test case.
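A sketch of the batching idea (illustrative shape, not the test's actual code):
```
package proxytest

import "sync"

// runInBatches starts sub-cases as fast as possible, but only batchSize at
// a time, waiting for each batch to finish before starting the next.
func runInBatches(cases []func(), batchSize int) {
	for start := 0; start < len(cases); start += batchSize {
		end := start + batchSize
		if end > len(cases) {
			end = len(cases)
		}
		var wg sync.WaitGroup
		for _, run := range cases[start:end] {
			wg.Add(1)
			go func(run func()) {
				defer wg.Done()
				run()
			}(run)
		}
		wg.Wait() // finish this batch before moving on to the next
	}
}
```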
Runs on my computer are down from 2m40s to 40s.
Getting rid of the arbitrary client limiting brings it down to ~12 seconds: 11 to fetch the image and under 1 to actually run the tests against the proxies. I can add a zero to the number of loops if you want to hit it harder, but it would produce 10x as much text output.
Automatic merge from submit-queue
Add support for kubectl create quota command
Follow-up of https://github.com/kubernetes/kubernetes/pull/19625
```
Create a resourcequota with the specified name, hard limits and optional scopes
Usage:
kubectl create quota NAME [--hard=key1=value1,key2=value2] [--scopes=Scope1,Scope2] [--dry-run=bool] [flags]
Aliases:
quota, q
Examples:
// Create a new resourcequota named my-quota
$ kubectl create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10
// Create a new resourcequota named best-effort
$ kubectl create quota best-effort --hard=pods=100 --scopes=BestEffort
```
Automatic merge from submit-queue
Rework pod waiting mechanism in e2e tests to accept pod and watch based
This PR re-applies #28212, which was reverted in #29223. The only difference is that the initial PR also shortened `PodStartTimeout` (see [here](4b0c0bd924)), which might have caused the problems. Let's give it a second try. I've tested all the flakes and they were passing on my machine.
@smarterclayton @apelisse ptal
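For context, a hedged sketch of watch-based pod waiting, written against present-day client-go rather than the exact code from #28212:
```
package e2eutil

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning waits via a watch instead of polling: it returns as
// soon as an event shows the pod in the Running phase.
func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			return fmt.Errorf("pod %s/%s deleted while waiting", ns, name)
		}
		if pod, ok := ev.Object.(*corev1.Pod); ok && pod.Status.Phase == corev1.PodRunning {
			return nil
		}
	}
	return fmt.Errorf("watch closed before pod %s/%s was running", ns, name)
}
```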
comments
- what the test is doing
- how the test is set up
- subsections of the test setup
additional output
- print time spent getting ready to run proxy attempts
- number of test cases
- multiple attempts of each test case
- how many total proxying attempts will be made
- fast path output now has numerical identity of attempt like error output
- error output has time taken and http status like fast path output
batching runs
- run groups of test cases vs. starting all 34*20=680 proxy attempts at the same time.
- don't wait between starting proxy attempts anymore.
proxy e2e changes
- disable the client side rate limiter
- use `By` construct of ginkgo for inline `STEP` logging
- move the waitGroup add outside of the loop
Automatic merge from submit-queue
Syncing image pulling backoff logic
- Syncing the backoff logic in the parallel image puller and the sequential image puller to prepare for merging the two pullers into one (a sketch of the shared behavior follows below).
- Moving image error definitions under kubelet/images
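An illustrative sketch of the shared backoff behavior (not the kubelet's exact code): after a failed pull, an image may not be re-pulled until its exponentially growing backoff window has passed.
```
package images

import (
	"sync"
	"time"
)

// pullBackoff tracks, per image, when the next pull attempt is allowed
// and the current exponential delay.
type pullBackoff struct {
	mu    sync.Mutex
	next  map[string]time.Time
	delay map[string]time.Duration
}

func newPullBackoff() *pullBackoff {
	return &pullBackoff{next: map[string]time.Time{}, delay: map[string]time.Duration{}}
}

// shouldPull reports whether the image is out of its backoff window.
func (b *pullBackoff) shouldPull(image string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	return time.Now().After(b.next[image])
}

// recordFailure doubles the delay (bounded by base and max) and pushes
// out the next allowed pull time.
func (b *pullBackoff) recordFailure(image string, base, max time.Duration) {
	b.mu.Lock()
	defer b.mu.Unlock()
	d := b.delay[image] * 2
	if d < base {
		d = base
	}
	if d > max {
		d = max
	}
	b.delay[image] = d
	b.next[image] = time.Now().Add(d)
}
```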
Automatic merge from submit-queue
Change SETUP_NODE to True for node e2e docker validation test.
The continuous node e2e docker validation test is failing because:
```
W0722 00:48:52.163940 1265 image_list.go:85] Could not pre-pull image gcr.io/google_containers/netexec:1.4 exit status 1 output: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
```
This is because jenkins is not added to the docker user group.
For other images tested in node e2e, jenkins is added to the docker user group when the images are initially created: https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/environment/setup_host.sh#L102.
However, the node e2e docker validation test uses the GCI image, which doesn't do that.
So we should use the `SETUP_NODE` option to add the user to the docker group before the tests run: b6c87904f6/test/e2e_node/e2e_remote.go (L150-L159).
This is only a one-line change; could you help me review the PR? @wonderfly
Thanks a lot! :)
Automatic merge from submit-queue
test/e2e: plug time.Ticker resource leak.
This commit ensures that `logPodStartupStatus` does not leak
running `time.Ticker` instances. Upon termination of the consuming
goroutine, we stop the ticker.
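The pattern, as a minimal sketch (not the verbatim diff):
```
package e2e

import (
	"fmt"
	"time"
)

// logPeriodically stops the ticker when the consuming goroutine returns,
// so no time.Ticker is leaked.
func logPeriodically(stopCh <-chan struct{}, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop() // release the ticker's resources on exit
	for {
		select {
		case <-ticker.C:
			fmt.Println("pod startup status ...")
		case <-stopCh:
			return
		}
	}
}
```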
Automatic merge from submit-queue
use regular client instead of kubectl in scheduler predicate tests when checking/setting/cleaning taints/labels
The existing implementation in the scheduler predicate tests uses kubectl to check/set/clean taints and labels on nodes, which tightly couples the tests to kubectl.
This PR uses the regular client instead.
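An illustrative sketch of the replacement (present-day client-go, not the PR's exact code): instead of shelling out to `kubectl label node <name> <key>=<value>`, mutate the node directly.
```
package predicates

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addNodeLabel fetches the node, sets the label, and updates it via the
// regular client rather than the kubectl binary.
func addNodeLabel(ctx context.Context, c kubernetes.Interface, nodeName, key, value string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels[key] = value
	_, err = c.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```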
Automatic merge from submit-queue
Revert "Drop support for --gce-service-account, require activated creds"
Reverts kubernetes/kubernetes#28802
This appears to break the soak tests with "invalid grant" errors -- see the recent batch of errors in #27920.
Automatic merge from submit-queue
Allow for overriding throughput in load test
We already seem to support higher throughput than the default.
I'm going to increase the throughput in our tests:
- speed up scalability tests
- ensure that what I'm seeing locally is really reproducible
This PR is a short preparation for those experiments.
[Ideally, I would like kubemark-500 to finish within 30 minutes. I think this should be doable pretty soon.]
@gmarek
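The shape of the change, heavily hedged (the flag name and default here are assumptions for illustration): the load test's throughput becomes a knob rather than a constant.
```
package main

import (
	"flag"
	"fmt"
)

// throughput replaces a hard-coded limit so experiments (e.g. faster
// kubemark-500 runs) can push it higher from the command line.
var throughput = flag.Float64("load-throughput", 10, "pods/second the load test creates")

func main() {
	flag.Parse()
	fmt.Printf("running load test at %.1f pods/second\n", *throughput)
}
```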
Automatic merge from submit-queue
Change some node e2e test to use the prepull image framework.
Fix https://github.com/kubernetes/kubernetes/issues/28868.
The node e2e test framework pre-pulls all images listed in [image_list.go](bc2f223f5a/test/e2e_node/image_list.go).
All node e2e tests should use images from that list; if a test needs a new image, we should update `image_list` to include it.
/cc @kubernetes/sig-node to remind people to use `image_list` when adding tests. :)
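The pattern, as an illustrative sketch (the variable name is an assumption): every image a test needs is registered in one list, and the framework pre-pulls all of them before any test runs.
```
package e2enode

import "k8s.io/apimachinery/pkg/util/sets"

// prePulledImages is the single source of truth for test images; the
// framework pulls each entry once, so individual tests never pay the
// pull cost or flake on pull errors.
var prePulledImages = sets.NewString(
	"gcr.io/google_containers/netexec:1.4",
	"gcr.io/google_containers/nginx:1.7.9",
)
```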
Automatic merge from submit-queue
add tokenreviews endpoint to implement webhook
Wires up an endpoint under `apis/authentication.k8s.io/v1beta1` to expose the webhook token authentication API as an API resource. This allows one API server to use another for authentication, and existing policy engines on the "authoritative" API server can control access to the endpoint.
@cjcullen you wrote the initial type
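An abridged sketch of the request/response shape (the real types live in the authentication.k8s.io API group; fields and JSON tags are trimmed here): the caller posts a token, and the server reports whether it authenticated and as whom.
```
package v1beta1

// TokenReview is filled in by the caller (Spec) and the server (Status).
type TokenReview struct {
	Spec   TokenReviewSpec
	Status TokenReviewStatus
}

type TokenReviewSpec struct {
	Token string // the opaque bearer token to verify
}

type TokenReviewStatus struct {
	Authenticated bool     // whether the token was valid
	User          UserInfo // who the token authenticated as
}

type UserInfo struct {
	Username string
	UID      string
	Groups   []string
}
```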
Automatic merge from submit-queue
Start namespace controller in node e2e
Fix https://github.com/kubernetes/kubernetes/issues/28320.
Based on https://github.com/kubernetes/kubernetes/pull/28807, only the last 2 commits are new.
Before this PR, there was no namespace controller running in the node e2e test infrastructure, so we could not enable the [`delete-namespace`](f2ddd60eb9/test/e2e/framework/test_context.go (L109)) flag in the test framework.
As a result, running pods were left on the test node after each run. This was acceptable in our test infrastructure because we create a new instance each time.
However, in 1.4 we may want to offer part of the suite as a node conformance test to users, and they definitely don't want the test to leave tons of pods on their node afterwards.
Currently there is no easy way to start only the namespace controller in kube-controller-manager (confirmed with @mikedanese), so in this PR I started an "uncontainerized" one in the test infrastructure.
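A conceptual sketch only (not the real controller, whose constructor has changed across releases) of the job the in-process controller performs: deleting the contents of namespaces stuck in Terminating so namespace deletion can actually complete.
```
package e2enode

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runNamespaceCleanup periodically scans for Terminating namespaces and
// deletes their pods. The real controller deletes every namespaced
// resource type; pods are the case that matters for these tests.
func runNamespaceCleanup(ctx context.Context, c kubernetes.Interface) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			nsList, err := c.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
			if err != nil {
				continue
			}
			for _, ns := range nsList.Items {
				if ns.Status.Phase != corev1.NamespaceTerminating {
					continue
				}
				_ = c.CoreV1().Pods(ns.Name).DeleteCollection(ctx,
					metav1.DeleteOptions{}, metav1.ListOptions{})
			}
		}
	}
}
```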
This PR:
* Starts the namespace controller in the node e2e test infrastructure and enables automatic namespace deletion.
* Changes the privileged test to use the framework (@yujuhong), so that all node e2e tests use the framework and test pods are cleaned up by the namespace controller.
/cc @kubernetes/sig-node