This allows the resource usage monitoring test to launch 100 test pods per node,
in addition to the add-on pods.
Also reduce the test duration, since results over the shorter period are
representative enough.
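As a rough sketch of the pod setup (not the actual test helpers; the function
name, namespace handling, and image reference below are illustrative), the
per-node density can be driven by a single ReplicationController sized at
podsPerNode * nodeCount and spread by the scheduler:

```go
package resourceusage

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runPausePods creates podsPerNode*nodeCount pause pods through one
// ReplicationController and relies on the scheduler to spread them.
func runPausePods(ctx context.Context, c kubernetes.Interface, ns string, podsPerNode, nodeCount int) error {
	replicas := int32(podsPerNode * nodeCount)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "resource-usage-pause"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"app": "pause"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "pause"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "gcr.io/google_containers/pause:2.0", // illustrative image reference
					}},
				},
			},
		},
	}
	if _, err := c.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return fmt.Errorf("creating pause RC: %w", err)
	}
	return nil
}
```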
- remove skip list from conformance-test.sh and filter by the new tag (see the
sketch after this list)
- remove experimental api tests from conformance test suite
- remove all tests from conformance test suite which are either
restricted to a specific provider (e.g. gce, gke, aws) or require SSH
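For context, a hedged sketch of what tag-based filtering looks like in a
Ginkgo suite: a spec opts in by carrying the tag in its description, and the
runner selects it with a focus regex (e.g. `--ginkgo.focus='\[Conformance\]'`)
rather than maintaining a skip list. The spec text below is illustrative, not
a real test, and the exact tag/flag spelling used by conformance-test.sh may
differ.

```go
package e2e

import (
	. "github.com/onsi/ginkgo/v2"
)

// Illustrative spec: the "[Conformance]" tag in the description is what the
// focus regex matches on, so no per-test skip list is needed.
var _ = Describe("Kubectl client", func() {
	It("should describe a pod [Conformance]", func() {
		// test body elided
	})
})
```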
This scopes down the initially ambitious PR
https://github.com/kubernetes/kubernetes/pull/14960 to only switching `pause`
and `fluentd-elasticsearch` to be pulled through `beta.gcr.io`.
The v2 versions have been pushed under new tags, `pause:2.0` and
`fluentd-elastisearch:1.12`.
NOTE: `beta.gcr.io` will still serve images using v1 until they are repushed with v2. Pulls through `gcr.io` will still work after pushing through `beta.gcr.io`, but will be served over v1 (via compat logic).
Some tests in this suite expect --max-pods (i.e., the maximum pod capacity on
the kubelet) to be greater than the default, which holds only in the GCE test
environment. Split the tests into two sets so that we can better categorize
them in the Jenkins setup, without making the tests themselves aware of the
environment.
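A minimal sketch of how the high-density set could guard itself, assuming the
node's reported pod capacity reflects the kubelet's --max-pods; the threshold
constant and helper name are hypothetical, not the real framework code:

```go
package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const assumedDefaultMaxPods = 40 // hypothetical default; check the kubelet flag docs

// nodeSupportsHighDensity reports whether the node advertises a pod capacity
// above the assumed default, i.e. whether --max-pods was raised.
func nodeSupportsHighDensity(ctx context.Context, c kubernetes.Interface, nodeName string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	capacity := node.Status.Capacity[corev1.ResourcePods]
	return capacity.Value() > assumedDefaultMaxPods, nil
}
```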
This test tracks kubelet resource usage over a long period of time (1hr) while
running N pods (e.g., N=0,50), and prints out the resource usage. This gives us
an idea of how much management overhead the kubelet incurs in a stable
cluster.
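A rough sketch of the tracking loop, with getKubeletUsage standing in for
whatever stats source the real test polls (e.g., the kubelet's stats
endpoint); the type, names, and sampling interval are assumptions:

```go
package e2e

import (
	"context"
	"fmt"
	"time"
)

// usageSample is an illustrative record of one polling cycle.
type usageSample struct {
	cpuMillicores int64
	memoryBytes   int64
}

// trackKubeletUsage polls the kubelet's resource usage at a fixed interval
// over the monitoring window and prints the collected samples at the end.
func trackKubeletUsage(ctx context.Context, nodeName string, duration, interval time.Duration,
	getKubeletUsage func(ctx context.Context, nodeName string) (usageSample, error)) ([]usageSample, error) {

	var samples []usageSample
	deadline := time.Now().Add(duration)
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for time.Now().Before(deadline) {
		select {
		case <-ctx.Done():
			return samples, ctx.Err()
		case <-ticker.C:
			s, err := getKubeletUsage(ctx, nodeName)
			if err != nil {
				return samples, err
			}
			samples = append(samples, s)
		}
	}
	// Print the collected usage so it can be eyeballed across runs.
	for i, s := range samples {
		fmt.Printf("sample %d: cpu=%dm mem=%dB\n", i, s.cpuMillicores, s.memoryBytes)
	}
	return samples, nil
}
```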
Some follow-up items:
* Use a more realistic workload (e.g., include probing).
* Fail the test if the resource usage is too high.
Caveats:
* We assume the scheduler does a decent job of distributing the pause pods,
but we should double-check.
* Cluster addon pods could be unevenly distributed and skew the resource
usage on nodes.