Automatic merge from submit-queue
e2e: Enable persistent volume test
The test is already there and all required packages (namely mount.nfs) should already be available on all test machines.
It tests:
- binding
- using bound claim in a pod
- recycling NFS volume
(we will see shortly whether all NFS packages are really installed, as Jenkins tests it...)
Automatic merge from submit-queue
e2e/framework/util.StartPods: don't wait for pods that are not created
When running ``[k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]``, pods can be created in a way in which additional pods have to be created to fully saturate the cluster's node CPU capacity. The additional pods are created by calling ``framework.StartPods``, which creates pods with a given label and waits for them (if ``waitForRunning`` is ``true``). This is fine as long as the number of pods to be created is non-zero. If there are zero pods to be created and ``waitForRunning`` is ``true``, the function waits forever, since there are never going to be any pods with the requested label. The result is ``Error waiting for 0 pods to be running - probably a timeout``, causing the e2e test to fail even though it should not.
This adds a condition to return from the function immediately when there are no pods to create.
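For illustration, a minimal sketch of the guard (simplified signature and names, not the exact upstream helper):

```go
package e2e

import "fmt"

// startPods sketches the fix: when there are zero pods to create, return
// before waiting, because the label selector would never match any pod and
// the wait would time out.
func startPods(numPods int, waitForRunning bool, waitRunning func(label string, count int) error) error {
	if numPods == 0 {
		return nil // nothing to create, nothing to wait for
	}
	// ... create numPods pods carrying a shared label here ...
	if waitForRunning {
		if err := waitRunning("startPodsLabel", numPods); err != nil {
			return fmt.Errorf("error waiting for %d pods to be running: %v", numPods, err)
		}
	}
	return nil
}
```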
Automatic merge from submit-queue
Petset controller
Took longer than I expected. The main parts of this PR are:
1. Identity generation based on petset spec (volumes are mapped per discussion in #18016)
2. Ensure that we create/delete pets in sequence
3. Ensuring that we create, wait for healthy, then create the next; or delete, wait for terminationGrace, then delete the next
4. Controller that watches apiserver and drives actual -> desired
PVCs are not deleted, yet.
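For illustration, a minimal sketch of the one-pet-at-a-time invariant from points 2 and 3 (types and helpers are stand-ins, not the actual controller code):

```go
package petset

import "fmt"

// Pet is a simplified stand-in for one identity-bearing pod.
type Pet struct {
	Name string
}

// syncPets creates pets strictly in order: each pet must exist and be
// healthy before its successor is created. A deletion pass would walk the
// list in reverse with the same blocking behavior.
func syncPets(desired []Pet, exists, healthy func(Pet) bool, create func(Pet) error) error {
	for _, pet := range desired {
		if !exists(pet) {
			if err := create(pet); err != nil {
				return err
			}
		}
		if !healthy(pet) {
			// Block here; the controller retries on the next sync, so pets
			// are always brought up in sequence.
			return fmt.Errorf("pet %s not healthy yet, blocking successors", pet.Name)
		}
	}
	return nil
}
```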
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
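For example (controller and image names hypothetical), an "update and/or restart" on the same tag could look like:
$ kubectl rolling-update my-rc --image=myrepo/myapp:latest --image-pull-policy=Always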
Automatic merge from submit-queue
Use tagged redis image for kubectl test, move json test file out of deprecated examples
Closes #24642.
Changes the redis image to use the :e2e tagged version on gcr.io.
Since the examples/ subdir is deprecated in favor of the new kubernetes/kubernetes.github.io repo, I just copied this file to test-manifests/kubectl like some other files.
Automatic merge from submit-queue
Framework support for node e2e.
This should let us port existing e2e tests to the node e2e suite, if the tests are node specific.
Automatic merge from submit-queue
Promote Pod Hostname & Subdomain to fields (were annotations)
This deprecates the podHostName, subdomain, and PodHostnames annotations and creates corresponding new fields for them on the PodSpec and Endpoints types.
Annotation doc: #22564
Annotation code: #20688
Automatic merge from submit-queue
Add support for running clusters on GCI
Google Container-VM Image (GCI) is the next revision of Container-VM. See documentation at https://cloud.google.com/compute/docs/containers/vm-image/. This change adds support for starting a Kubernetes cluster using GCI.
With this change, users can start a Kubernetes cluster using the latest kubelet and kubectl release binaries built into the GCI image by running:
$ KUBE_OS_DISTRIBUTION="gci" cluster/kube-up.sh
Or run a testing cluster on GCI by running:
$ KUBE_OS_DISTRIBUTION="gci" go run hack/e2e.go -v --up
The commands above will choose the latest GCI image by default.
Automatic merge from submit-queue
Quota ignores pod compute resources on updates
Scenario:
1. define a quota Q that tracks memory and cpu
2. create pod P that uses memory=100Mi, cpu=100m
3. update pod P to use memory=50Mi,cpu=10m
Expected Results:
Step 3 should fail with validation error.
Quota Q should not have changed.
Actual Results:
Step 3 fails validation, but quota Q is decremented anyway, with memory usage down 50Mi and cpu usage down 90m. This is because the quota was updated even though the pod update was going to fail validation.
Fix:
Quota should only support modifying pod compute resources when pods themselves support modifying their compute resources.
This also fixes https://github.com/kubernetes/kubernetes/issues/24352
/cc @smarterclayton - this is what we discussed.
fyi: @kubernetes/rh-cluster-infra
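A minimal sketch of the rule, using a simplified operation type rather than the real quota evaluator API:

```go
package quota

// Operation is a simplified stand-in for the admission operation type.
type Operation string

const (
	Create Operation = "CREATE"
	Update Operation = "UPDATE"
)

// handlesPodOperation sketches the fix: quota recalculates pod compute
// usage only on CREATE, because pods do not support modifying their
// compute resources after creation, so an UPDATE must never change usage.
func handlesPodOperation(op Operation) bool {
	return op == Create
}
```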
generic pod-per-node functionality for testing - 2 node test only
- update framework to decompose pod vs svc creation for composition.
- remove the hard-coded 2 and point to --scale
Automatic merge from submit-queue
Move internal types of job from pkg/apis/extensions to pkg/apis/batch
This addresses the job part of #23216; it is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.
@lavalamp @erictune fyi
Automatic merge from submit-queue
Use mCPU as CPU usage unit, add version in PerfData, and fix memory usage bug.
Partially addressed #24436.
This PR:
1) Changes the CPU usage unit to "mCPU".
2) Adds a version field to PerfData; perfdash will only support the newest version now.
3) Fixes a mistake in calculating the memory usage average.
/cc @vishh
Automatic merge from submit-queue
Add timeout to e2e network connectivity checks
Some e2e tests use wget to check connectivity, and the default e2e
timeout is 900s. This change allows the timeout to be specified on a
check-by-check basis. This will also make the check useful for negative
checks (like those used by OpenShift to validate isolation), where a
short timeout is appropriate since connectivity is not expected.
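A sketch of the idea, using a plain HTTP client in place of wget-in-a-pod (names are illustrative):

```go
package e2e

import (
	"fmt"
	"net/http"
	"time"
)

// checkConnectivity probes url with a caller-chosen timeout, so a negative
// check (connectivity expected to fail) can use a few seconds instead of
// the 900s suite default.
func checkConnectivity(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // negative checks expect to land here
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, url)
	}
	return nil
}
```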
Automatic merge from submit-queue
Increase provisioning test timeouts.
We've encountered flakes in our e2e infrastructure when kubelet took more than one minute to detach a volume used by a deleted pod.
Let's increase the wait period from 1 to 3 minutes. This slows down the test by 2 minutes, but it makes the test more stable.
In addition, when kubelet cannot detach a volume for 3 minutes, the test waits for an additional recycle controller retry interval (10 minutes) in the hope that the volume is deleted by then. This should not increase the usual test time, and it makes the test stable when kubelet is _extremely_ slow in releasing the volume.
Fixes: #24161
Automatic merge from submit-queue
Fix unintended change of Service.spec.ports[].nodePort during kubectl apply
Please refer to #23551 for more detail. @bgrant0607 I think this simple fix, which reuses the existing nodePort, should be OK. @thockin ptal.
Release note: Fix unintended change of `Service.spec.ports[].nodePort` during `kubectl apply`.
Automatic merge from submit-queue
Cluster Verification Framework
I've spent the last few days looking at the general verification patterns we tend to reuse in the e2es. Basically, we need:
- label filters
- forEach and WaitFor (where forEach doesn't necessarily waitFor anything).
- timeouts
- multiple phases (reusable definition of state)
- an extensible way to define cluster state that can evolve over time in a data object rather than as a set of parameters that have magic semantics
This PR
- implements the abstract functionality above declaratively, and without hidden semantics.
- addresses the sprawling duplicate methods in #23540, so that we can phase out the wrapper methods and replace them with well defined, extensible semantics for cluster state.
- fixes the recently discovered #23730 issue (where kubectl.go is relying on examples.go, which is obviously wacky) by using the new framework to implement forEachPod in just a couple of lines and migrating the wrapper function into framework.go.
There is some cleanup to do here, but this is working for a couple of important use cases (the spark, cassandra, ..., kubectl tests). I played with a few different ideas, and this wound up being the most natural implementation from a usability standpoint...
In any case, I just thought I'd push this up as a first iteration; open to feedback.
@kubernetes/sig-testing @timothysc
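For a feel of the shape this is aiming for, a minimal sketch (all names are illustrative, not the framework's real API): cluster state lives in a data object, and WaitFor/ForEach operate over a label-filtered pod set.

```go
package verify

import (
	"fmt"
	"time"
)

// PodState describes desired cluster state as data, not as parameters
// with magic semantics.
type PodState struct {
	Namespace string
	Selector  string // label selector, e.g. "app=cassandra"
	MinPods   int
}

// Verifier pairs a state definition with a timeout and a pod lister stub.
type Verifier struct {
	State   PodState
	Timeout time.Duration
	List    func(ns, selector string) ([]string, error)
}

// WaitFor polls until at least State.MinPods pods match the selector.
func (v *Verifier) WaitFor() ([]string, error) {
	deadline := time.Now().Add(v.Timeout)
	for time.Now().Before(deadline) {
		pods, err := v.List(v.State.Namespace, v.State.Selector)
		if err == nil && len(pods) >= v.State.MinPods {
			return pods, nil
		}
		time.Sleep(2 * time.Second)
	}
	return nil, fmt.Errorf("timed out waiting for %d pods matching %q", v.State.MinPods, v.State.Selector)
}

// ForEach applies fn to each matching pod without waiting for anything.
func (v *Verifier) ForEach(fn func(pod string)) error {
	pods, err := v.List(v.State.Namespace, v.State.Selector)
	if err != nil {
		return err
	}
	for _, p := range pods {
		fn(p)
	}
	return nil
}
```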
Automatic merge from submit-queue
Fix DNS test for larger clusters
On GKE, we scale the number of DNS pods based on the cluster size. For
testing on larger clusters, relax the DNS pod check.
- rebase: ForEach only on Running pods
- add waitFor step in guestbook describe and wrapper
- simplify logs in polling, make panics immediate, and give rolled-up stats in the logs
Improve logging for failures in ForEach
Automatic merge from submit-queue
Move /resetMetrics to DELETE /metrics
Reduces the surface area of the API server slightly and allows
downstream components to have deletable metrics. After this change,
genericapiserver will *not* have metrics unless the caller defines them
(this allows different apiserver implementations to make that choice on
their own).
@wojtek-t
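Assuming a local, insecure apiserver endpoint (illustrative), a reset is now issued as:
$ curl -X DELETE http://localhost:8080/metrics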
Automatic merge from submit-queue
Add watch.Until, a conditional watch mechanism
A more powerful tool than wait.Poll: it allows a watch interface to drive conditionals that react to changes on one or more resources. Provides a set of standard conditions that are in common use in the code, and updates e2e to use a few of them.
Extracted from #23567
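A sketch of how a caller might drive a conditional off a watch; the pod-deleted condition is illustrative, not necessarily one of the stock conditions:

```go
package e2e

import (
	"time"

	"k8s.io/kubernetes/pkg/watch"
)

// waitForDeletion blocks until the watch delivers a DELETED event or the
// timeout elapses, using watch.Until to drive the condition.
func waitForDeletion(w watch.Interface, timeout time.Duration) error {
	deleted := func(event watch.Event) (bool, error) {
		return event.Type == watch.Deleted, nil
	}
	_, err := watch.Until(timeout, w, deleted)
	return err
}
```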
Automatic merge from submit-queue
phase 2 of cassandra example overhaul
Here's the next iteration in overhauling this example, towards https://github.com/kubernetes/kubernetes/issues/20961. This removes the pod adoption part, but doesn't (yet) otherwise change any of the resources used.
It also includes some README cleanup, and removes some explicit specification of labels in the rc yaml.
This PR doesn't yet add any commentary on how we're using the seed provider (re: https://github.com/kubernetes/kubernetes/issues/20961#issuecomment-190405959 etc.). Maybe we should add that.
Also: LMK if this PR should include any changes to the links out to the docs.
cc @bgrant0607 @johndmulhausen
in e2e/volumes.go: give time to allow pod cleanup and volume unmount to happen before the volume server exits;
skip cinder volume test if not running with the openstack provider
comment on why we pause before the containerized server is stopped in volume e2e tests, fix #24100
updates NFS server image to 0.6, per #22529
fix persistent_volume e2e test: test cleanup doesn't expect client pod; delete PV after test
Signed-off-by: Huamin Chen <hchen@redhat.com>
Automatic merge from submit-queue
Add generalized performance data type in e2e test
For kubernetes/contrib/issues/564 and #15554.
This PR adds two files to the e2e test:
1) `perftype/perftype.go`: This file contains the generalized performance data type. The type can be pretty-printed in JSON format and analyzed by other performance analysis tools, such as [Perfdash](https://github.com/kubernetes/contrib/tree/master/perfdash).
2) `perf_util.go`: This file contains functions which convert e2e performance test results into the new performance data type.
The new performance data type is now used in *Density test, Load test and Kubelet resource tracking*. It's easy to support other e2e performance tests by adding a new convert function in `perf_util.go`.
@gmarek @yujuhong
/cc @kubernetes/sig-testing
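For illustration, the generalized type could look roughly like this (a sketch; field names are not necessarily the exact upstream definitions):

```go
package perftype

// PerfData is a versioned, JSON-serializable performance record. The
// Version field lets consumers such as Perfdash reject formats they do
// not understand.
type PerfData struct {
	Version   string     `json:"version"`
	DataItems []DataItem `json:"dataItems"`
}

// DataItem is one measured series: values keyed by percentile or phase,
// plus a unit and free-form labels.
type DataItem struct {
	Data   map[string]float64 `json:"data"`
	Unit   string             `json:"unit"`
	Labels map[string]string  `json:"labels,omitempty"`
}
```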
Automatic merge from submit-queue
Clientset release 1.3
This PR creates the release 1.3 client set. We'll keep updating this client set until we cut release 1.3. In the meantime, the release 1.2 client set will be locked.
@lavalamp
This commit switches most functions in kubelet_stats.go to use the new API.
However, the functions that perform one-time resource usage retrieval remain
unchanged to stay compatible with resource_usage_gatherer.go. They should be
handled separately.
Also, the new summary API does not provide RSS memory yet, so all memory-checking
tests will *always* pass. We plan to add this metric to the API and
restore the functionality of the test.
Automatic merge from submit-queue
Additional go vet fixes
Mostly:
- locks passed by value
- bad syntax for struct tag values
- example functions not formatted properly
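For illustration, the kind of code the copylocks check flags (example is mine, not from the PR):

```go
package vetfix

import "sync"

// Counter embeds a mutex, so copying a Counter copies the lock.
type Counter struct {
	mu sync.Mutex
	n  int
}

// Bad: go vet flags a value receiver here, because each call would
// operate on a copy of the mutex rather than the shared one:
//   func (c Counter) Inc() { c.mu.Lock(); defer c.mu.Unlock(); c.n++ }

// Good: a pointer receiver shares the one real lock.
func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}
```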
Automatic merge from submit-queue
Ensure object returned by volume getCloudProvider incorporates cloud config
This PR addresses https://github.com/kubernetes/kubernetes/issues/23517.
**Problem**
The existing GCE PD and AWS EBS volume plugin code were fetching cloud provider without specifying a cloud config: `cloudprovider.GetCloudProvider("gce", nil)`
This caused the cloud provider to use the default auth mechanism, which is not acceptable for the provisioning controller running on the GKE master.
**Fix**
This PR does the following:
* Modifies the GCE PD and AWS EBS volume plugin code to use the cloud provider object pre-constructed by the binary with a cloud config.
* Enables the provisioning E2E test for GKE (to catch future issues).
Thanks to @cjcullen for debugging and finding the root cause! 👍
This should be cherry-picked into the v1.2 branch for the next release.
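A minimal sketch of the shape of the fix (the interfaces are simplified stand-ins for the real volume host and cloudprovider types):

```go
package gcepd

// CloudProvider stands in for cloudprovider.Interface.
type CloudProvider interface{}

// VolumeHost stands in for the plugin host, which holds the provider the
// binary already constructed with the real cloud config.
type VolumeHost interface {
	GetCloudProvider() CloudProvider
}

// getCloudProvider returns the pre-constructed, config-aware provider
// from the host instead of calling cloudprovider.GetCloudProvider("gce",
// nil), which falls back to default auth and breaks provisioning on the
// GKE master.
func getCloudProvider(host VolumeHost) CloudProvider {
	return host.GetCloudProvider()
}
```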
Automatic merge from submit-queue
Update port forward e2e for go 1.6
Only close the stdout/stderr pipes from kubectl port-forward when we're truly done with the command,
instead of as soon as runPortForward exits.
Also try to gracefully stop kubectl port-forward via SIGINT, instead of always sending SIGKILL, as
this will help avoid SPDY goroutine leaks in the kubelet.
Ref #22149
cc @smarterclayton @kubernetes/rh-cluster-infra
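A sketch of the graceful-stop pattern (not the upstream test code): SIGINT first, SIGKILL only as a fallback.

```go
package e2e

import (
	"os"
	"os/exec"
	"time"
)

// stopPortForward asks kubectl port-forward to exit cleanly so it can tear
// down its streams, and only kills it if it ignores the signal.
func stopPortForward(cmd *exec.Cmd) error {
	// Prefer SIGINT so kubectl can close its SPDY streams and the kubelet
	// does not leak goroutines.
	_ = cmd.Process.Signal(os.Interrupt)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err
	case <-time.After(10 * time.Second):
		_ = cmd.Process.Kill() // it ignored SIGINT; force it
		return <-done
	}
}
```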