k3s/test/integration/scheduler_perf

README.md

Scheduler Performance Test

Motivation

We already have a performance testing system -- Kubemark. However, Kubemark requires setting up and bootstrapping a whole cluster, which takes a lot of time.

We want a standard way to reproduce scheduling latency metrics results and to benchmark the scheduler as simply and quickly as possible. We have the following goals:

  • Save time on testing
    • The test and benchmark can be run in a single box. We only set up components necessary to scheduling without booting up a cluster.
  • Profile runtime metrics to find bottlenecks
    • Write scheduler integration tests that focus on performance measurement. Take advantage of Go profiling tools and collect fine-grained metrics, such as CPU, memory, and block profiles.
  • Reproduce test result easily
    • We want a known place to run performance-related tests for the scheduler. Developers should be able to run one script to collect all the information they need.

Currently the test suite has the following:

  • density test (by adding a new Go test)
    • schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes
    • print out scheduling rate every second
    • show how the scheduling rate changes as the number of scheduled pods grows
  • benchmark
    • make use of go test -bench and report nanoseconds/op.
    • schedule b.N pods when the cluster has N nodes and P scheduled pods. Since one round takes a relatively long time to finish, b.N is small: 10 - 100.
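To illustrate the b.N mechanism described above, here is a minimal, self-contained sketch. The fakeSchedule function and the 1000-node slice are placeholders invented for this example (the real benchmark drives the actual scheduler against fake nodes); testing.Benchmark from the standard library runs the body with increasing b.N exactly as go test -bench would for a Benchmark* function:

```go
package main

import (
	"fmt"
	"testing"
)

// fakeSchedule stands in for one scheduling decision: pick the node with the
// fewest pods and place the pod there. It is a placeholder for the real
// scheduler call, invented for this sketch.
func fakeSchedule(podsPerNode []int) int {
	best := 0
	for i, n := range podsPerNode {
		if n < podsPerNode[best] {
			best = i
		}
	}
	podsPerNode[best]++
	return best
}

func main() {
	nodes := make([]int, 1000) // 1000 fake nodes, as in the density test

	// testing.Benchmark runs the function with increasing b.N until the
	// measurement is stable, mirroring what `go test -bench` does.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			fakeSchedule(nodes)
		}
	})
	fmt.Printf("scheduled %d pods, %d ns/op\n", res.N, res.NsPerOp())
}
```

Because fakeSchedule is trivially cheap, b.N grows large here; with the real scheduler each iteration is far more expensive, which is why the README notes that b.N stays small.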

How To Run

# In Kubernetes root path
make generated_files

cd test/integration/scheduler_perf
./test-performance.sh
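To collect the CPU, memory, and block profiles mentioned under Motivation, one option is a direct go test invocation. This is a sketch only: -cpuprofile, -memprofile, and -blockprofile are generic go test flags rather than anything specific to this repo, the package path is taken from this README, and test-performance.sh above remains the supported entry point. The DRY_RUN guard (default on) just prints the command so the sketch runs without a Kubernetes checkout.

```shell
# Sketch: invoking the benchmarks directly with Go's standard profiling flags.
# These flags are generic `go test` tooling, not specific to this repository.
CMD="go test -bench=. -cpuprofile=cpu.prof -memprofile=mem.prof -blockprofile=block.prof ./test/integration/scheduler_perf"

# DRY_RUN (default 1) only prints the command instead of executing it.
if [ "${DRY_RUN:-1}" = "1" ]; then
  echo "$CMD"
else
  eval "$CMD"
fi
```

The resulting .prof files can then be inspected with go tool pprof.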
