DeferredDiscoveryRESTMapper won't automatically `Reset` itself before its
initial use, since constructing the delegate errors out before the mapper
gets a chance to `Reset` itself. We therefore have to call `Reset`
manually before use.
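For illustration, a minimal sketch of the workaround (`newMapper` is a hypothetical helper, not code from this change; the package paths are the current client-go layout, which has moved around since this commit):

```go
package example

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
)

func newMapper(config *rest.Config) (*restmapper.DeferredDiscoveryRESTMapper, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		return nil, err
	}
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(memory.NewMemCacheClient(dc))
	// The deferred mapper won't Reset itself before its first use, so
	// call Reset manually up front.
	mapper.Reset()
	return mapper, nil
}
```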
Note that this doesn't change kubectl's behaviour; for example, it won't
yet scale new resources. That's the end goal. The first step is to
retrofit the existing code to use the generic scaler (sketched below).
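For context, a hedged sketch of what a generic scaler looks like: it goes through the scale subresource via `k8s.io/client-go/scale`, so no type-specific code is needed. The helper name is ours, and the signatures assume a recent client-go (older releases took no `context` parameter):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// setReplicas works for any resource exposing the scale subresource
// (Deployments, ReplicaSets, custom resources, ...), with no
// type-specific client code.
func setReplicas(ctx context.Context, scales scale.ScalesGetter, ns, name string,
	gr schema.GroupResource, replicas int32) error {
	s, err := scales.Scales(ns).Get(ctx, gr, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	s.Spec.Replicas = replicas
	_, err = scales.Scales(ns).Update(ctx, gr, s, metav1.UpdateOptions{})
	return err
}
```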
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add performance test phase timing export.
**What this PR does / why we need it**:
First step towards allowing us to get a quick overview of test length
via perf-dash.k8s.io.
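A rough sketch of the idea (not this PR's actual code; the real exporter lives in the e2e test framework, and the exact JSON schema perf-dash consumes is an assumption here):

```go
package phasetiming

import (
	"encoding/json"
	"time"
)

// Phase records how long one named phase of a performance test took.
type Phase struct {
	Name  string
	start time.Time
	end   time.Time
}

// Timer collects phases so their durations can be exported together.
type Timer struct{ phases []*Phase }

func (t *Timer) StartPhase(name string) *Phase {
	p := &Phase{Name: name, start: time.Now()}
	t.phases = append(t.phases, p)
	return p
}

func (p *Phase) End() { p.end = time.Now() }

// JSON emits phase durations in seconds; the exact schema perf-dash
// expects is an assumption here.
func (t *Timer) JSON() (string, error) {
	out := map[string]float64{}
	for _, p := range t.phases {
		out[p.Name] = p.end.Sub(p.start).Seconds()
	}
	b, err := json.MarshalIndent(out, "", "  ")
	return string(b), err
}
```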
**Release note**:
```release-note
NONE
```
@kubernetes/sig-scalability-feature-requests
Automatic merge from submit-queue (batch tested with PRs 50893, 50913, 50963, 50629, 50640)
Increase latency threshold for list api calls
This is only a short-term solution to make our density test green. In the long-term, we should measure as per our new SLIs.
From @wojtek-t's [doc](https://docs.google.com/document/d/1Q5qxdeBPgTTIXZxdsFILg7kgqWhvOwY8uROEf0j5YBw) on the new SLIs/SLOs, we have the following SLO for list calls:
```
SLO1: In default Kubernetes installation, 99th percentile of SLI2 per cluster-day:
<= 1s if total number of objects of the same type as resource in the system <= X
<= 5s if total number of objects of the same type as resource in the system <= Y
<= 30s if total number of objects of the same type as resource in the system <= Z
```
I would guess that 170,000 pods would fall into the 2nd bracket (at least) and hence the new value of 5s. WDYT?
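To make the bracket logic concrete, a small sketch (the helper and its parameters are ours; X, Y, Z are left unspecified in the doc, so they are passed in rather than invented):

```go
package slo

import "time"

// listLatencySLO returns the 99th-percentile latency target for list
// calls given the total object count of the listed type. The bracket
// bounds x <= y <= z are the unspecified X, Y, Z above.
func listLatencySLO(count, x, y, z int) (time.Duration, bool) {
	switch {
	case count <= x:
		return 1 * time.Second, true
	case count <= y:
		return 5 * time.Second, true
	case count <= z:
		return 30 * time.Second, true
	default:
		return 0, false // beyond the largest bracket; no SLO target
	}
}
```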
cc @kubernetes/sig-scalability-misc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 48991, 48908)
Group every two services into one in load test
Ref https://github.com/kubernetes/kubernetes/issues/48938
Following from discussion with @bowei and @freehan .
This reduces the number of services to 8200 while keeping the number of backends the same (see the sketch below).
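As an illustration of the grouping (identifiers are ours, not the load test's actual code): every two backend groups now share one Service, halving the Service count while the set of selected backends stays the same:

```go
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceForGroup maps backend groups 2k and 2k+1 to one shared Service:
// both groups carry the same "svc" label, so a single Service now selects
// the backends that two Services selected before.
func serviceForGroup(group int) *corev1.Service {
	name := fmt.Sprintf("svc-%d", group/2)
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"svc": name},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
}
```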
/cc @wojtek-t @gmarek