Commit Graph

17 Commits (ce4afa8418ef675201d5957ed93fe1590f01f824)

Author SHA1 Message Date
Shyam Jeedigunta 419bbd26fc Retry if possible while creating latency pods in density test 2017-09-19 17:40:57 +02:00
Kubernetes Submit Queue 3c8fb4b90f Merge pull request #52426 from shyamjvs/dont-crash-on-missing-data
Automatic merge from submit-queue

Don't crash density test on missing a single measurement

Our last run failed because of this (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/33) and pod-startup latency wasn't recorded at all. A sketch of the tolerant handling follows this entry.
2017-09-14 05:09:46 -07:00
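
The change above amounts to tolerating a gap in the collected data rather than aborting the run. Below is a minimal, hypothetical Go sketch of that idea; the names (Measurement, collectPodStartupLatency) are illustrative and not the density test's actual API.

```go
// A hypothetical sketch: skip a missing latency sample instead of failing the
// whole run, so one absent measurement does not discard the summary.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Measurement is a single recorded latency sample (illustrative type).
type Measurement struct {
	Pod     string
	Latency time.Duration
}

var errNotFound = errors.New("measurement not found")

// collectPodStartupLatency stands in for the real data collection; it fails
// for one pod here to simulate a missing sample.
func collectPodStartupLatency(pod string) (Measurement, error) {
	if pod == "density-pod-3" {
		return Measurement{}, errNotFound
	}
	return Measurement{Pod: pod, Latency: 1200 * time.Millisecond}, nil
}

func main() {
	pods := []string{"density-pod-1", "density-pod-2", "density-pod-3"}
	var samples []Measurement
	for _, p := range pods {
		m, err := collectPodStartupLatency(p)
		if err != nil {
			// Log and continue instead of crashing the test.
			fmt.Printf("skipping %s: %v\n", p, err)
			continue
		}
		samples = append(samples, m)
	}
	fmt.Printf("recorded %d/%d samples\n", len(samples), len(pods))
}
```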
Shyam Jeedigunta fad26a71c8 Make CPU constraint for l7-lb-controller in density test scale with #nodes 2017-09-13 18:21:35 +02:00
Shyam Jeedigunta 4f3e3c6278 Don't crash density test on missing a single measurement 2017-09-13 16:11:53 +02:00
Shyam Jeedigunta 240a1ae5ab Make threshold for glbc mem-usage scale with nodes in density test 2017-08-28 13:24:24 +02:00
Kubernetes Submit Queue fdf14b8218 Merge pull request #50913 from shyamjvs/list-call-slo
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)

Increase latency threshold for list api calls

This is only a short-term solution to make our density test green. In the long term, we should measure against our new SLIs.
From @wojtek-t's [doc](https://docs.google.com/document/d/1Q5qxdeBPgTTIXZxdsFILg7kgqWhvOwY8uROEf0j5YBw) on the new SLIs/SLOs, we have the following SLO for list calls:

```
SLO1: In default Kubernetes installation, 99th percentile of SLI2 per cluster-day:
<= 1s if total number of objects of the same type as resource in the system <= X
<= 5s if total number of objects of the same type as resource in the system <= Y
<= 30s if total number of objects of the same type as resource in the system <= Z
```

I would guess that 170,000 pods would fall into the 2nd bracket (at least), hence the new value of 5s (see the sketch after this entry). WDYT?

cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
2017-08-22 05:31:07 -07:00
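
As a rough illustration of the bracketed SLO quoted above, the sketch below picks a list-call latency budget from the object count. The bounds X, Y, Z are left open in the doc, so the values here are placeholders, and listLatencyThreshold is a hypothetical helper rather than test code.

```go
// A hypothetical sketch of mapping object count to the list-call latency SLO
// bracket (1s / 5s / 30s). X, Y, Z are unspecified in the source doc.
package main

import (
	"fmt"
	"time"
)

// listLatencyThreshold returns the 99th-percentile latency budget for list
// calls, given the number of objects of the listed type in the cluster.
func listLatencyThreshold(objects, x, y, z int) (time.Duration, bool) {
	switch {
	case objects <= x:
		return 1 * time.Second, true
	case objects <= y:
		return 5 * time.Second, true
	case objects <= z:
		return 30 * time.Second, true
	default:
		return 0, false // beyond the largest bracket: no SLO defined
	}
}

func main() {
	// Illustrative bounds only; the doc leaves X, Y, Z open.
	x, y, z := 10_000, 500_000, 5_000_000
	if budget, ok := listLatencyThreshold(170_000, x, y, z); ok {
		fmt.Printf("170k pods -> list latency budget %v\n", budget)
	}
}
```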
Shyam Jeedigunta 70123e71bb Increase latency threshold for list api calls 2017-08-19 00:55:35 +02:00
gmarek 0504cfbc25 Make metav1.(Micro)?Time functions take pointers 2017-08-17 11:24:28 +02:00
chenxingyu 4e069bd90e fix panic in e2e 2017-08-16 15:11:57 +08:00
Jeff Grafton a7f49c906d Use buildozer to delete licenses() rules except under third_party/ 2017-08-11 09:32:39 -07:00
Jeff Grafton 33276f06be Use buildozer to remove deprecated automanaged tags 2017-08-11 09:31:50 -07:00
xiangpengzhao 4edae0aa91 Add [sig-scalability] prefix to scalability e2e tests 2017-08-02 11:44:20 +08:00
Jacob Simpson 29c1b81d4c Scripted migration from clientset_generated to client-go. 2017-07-17 15:05:37 -07:00
Kubernetes Submit Queue 5c32b7d1eb Merge pull request #48908 from shyamjvs/reduce-services-loadtest
Automatic merge from submit-queue (batch tested with PRs 48991, 48908)

Group every two services into one in load test

Ref https://github.com/kubernetes/kubernetes/issues/48938

Following discussion with @bowei and @freehan.
This reduces the number of services to 8200 while keeping the number of backends the same (see the sketch after this entry).

/cc @wojtek-t @gmarek
2017-07-17 07:02:03 -07:00
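
For intuition, here is a small, hypothetical sketch of the grouping: every two backend groups (e.g. RCs) share one Service, which halves the Service count while leaving the backend count unchanged. The names are illustrative, not the load test's actual code.

```go
// A hypothetical sketch of "group every two services into one": assign each
// pair of backend groups to a single service name.
package main

import "fmt"

func main() {
	backendsPerService := 2
	groups := []string{"rc-1", "rc-2", "rc-3", "rc-4", "rc-5"}

	// Map every two backend groups onto one service.
	services := map[string][]string{}
	for i, g := range groups {
		svc := fmt.Sprintf("svc-%d", i/backendsPerService)
		services[svc] = append(services[svc], g)
	}

	for svc, members := range services {
		fmt.Printf("%s -> %v\n", svc, members)
	}
	fmt.Printf("%d backend groups served by %d services\n", len(groups), len(services))
}
```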
Shyam Jeedigunta 26006af4e0 Group every two services into one in load test 2017-07-17 14:19:30 +02:00
gmarek 639718dfc5 Remove max-pods density test 2017-07-14 14:32:29 +02:00
Shyam Jeedigunta 0c75dd22f8 Move performance tests to test/e2e/scalability subdirectory 2017-07-12 12:08:26 +02:00