There have been a couple of recent bugs in the "normalizing" part of the
`reconcileAutoscaler` method. This part of the code base is responsible
for, among other things, taking the suggested desired replicas based on
the metrics, ensuring it conforms to certain conditions, and updating it
if it does not. Isolate the part that converts the desired replicas
based on a given set of rules into its own function.
We are refactoring this part of the code base to make the logic simpler
and to make it easier to write unit tests.
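As a rough sketch of the intended shape, the extracted helper might look something like the following (the name `normalizeDesiredReplicas`, the rule set, and the constants are assumptions for illustration, not the final implementation):
```go
package podautoscaler

import "math"

// normalizeDesiredReplicas takes the metrics-suggested replica count and
// applies the conversion rules (scale-up rate limit, then min/max bounds),
// returning the value the controller should actually apply.
func normalizeDesiredReplicas(currentReplicas, desiredReplicas, minReplicas, maxReplicas int32) int32 {
	// Rule 1: never scale up by more than the allowed rate in one step.
	scaleUpLimit := calculateScaleUpLimit(currentReplicas)
	if desiredReplicas > scaleUpLimit {
		desiredReplicas = scaleUpLimit
	}
	// Rule 2: enforce the user-specified bounds last, so the result can
	// never land outside [minReplicas, maxReplicas].
	if desiredReplicas > maxReplicas {
		desiredReplicas = maxReplicas
	}
	if desiredReplicas < minReplicas {
		desiredReplicas = minReplicas
	}
	return desiredReplicas
}

// calculateScaleUpLimit caps a single scale-up step at double the current
// count, with a small floor (the real constants may differ).
func calculateScaleUpLimit(currentReplicas int32) int32 {
	return int32(math.Max(2.0*float64(currentReplicas), 4.0))
}
```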
Automatic merge from submit-queue (batch tested with PRs 53743, 53564). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Polymorphic Scale Client
This PR introduces a polymorphic scale client based on discovery information that's able to scale scalable resources in arbitrary group-versions, as long as they present the scale subresource in their discovery information.
Currently, it supports `extensions/v1beta1.Scale` and `autoscaling/v1.Scale`, but supporting other versions of scale if/when we produce them should be fairly trivial.
It also updates the HPA to use this client, meaning the HPA will now work on any scalable resource, not just things in the `extensions/v1beta1` API group.
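For a sense of what this enables, here is a hedged sketch of using the client from controller-style code; the `scaleTo` helper is hypothetical, and the `Get`/`Update` signatures follow the client-go API of that era (later releases add context and options parameters):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/scale"
)

// scaleTo resizes any scalable resource, identified only by its group and
// resource name, to the given replica count. Because the client is driven by
// discovery, the same code path works for Deployments, ReplicaSets, custom
// resources, and anything else that exposes a scale subresource.
func scaleTo(client scale.ScalesGetter, namespace, name string, gr schema.GroupResource, replicas int32) error {
	current, err := client.Scales(namespace).Get(gr, name)
	if err != nil {
		return fmt.Errorf("could not fetch scale for %s %s/%s: %v", gr.String(), namespace, name, err)
	}
	current.Spec.Replicas = replicas
	_, err = client.Scales(namespace).Update(gr, current)
	return err
}
```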
**Release note**:
```release-note
Introduces a polymorphic scale client, allowing HorizontalPodAutoscalers to properly function on scalable resources in any API group.
```
Unblocks #29698
Unblocks #38756
Unblocks #49504
Fixes #38810
This updates the HPA controller to use the polymorphic scale client from
client-go. This should enable HPAs to work with arbitrary scalable
resources, instead of just those in the extensions API group (meaning we
can deprecate the copy of ReplicationController in extensions/v1beta1).
It also means that the HPA controller now pays attention to the
APIVersion field in `scaleTargetRef` (more specifically, the group part
of it).
Note that currently, discovery information on which resources are
available where is only fetched once (the first time that it's
requested). In the future, we may want a refreshing discovery REST
mapper.
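A sketch of the group-aware target resolution (the helper name is hypothetical and the real controller code differs in details):
```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// targetMappings resolves an HPA's scaleTargetRef into REST mappings using a
// discovery-backed RESTMapper. Only the group portion of apiVersion matters
// here; the group plus the kind identify the scalable resource.
func targetMappings(mapper meta.RESTMapper, apiVersion, kind string) ([]*meta.RESTMapping, error) {
	targetGV, err := schema.ParseGroupVersion(apiVersion)
	if err != nil {
		return nil, fmt.Errorf("invalid API version in scale target ref: %v", err)
	}
	targetGK := schema.GroupKind{Group: targetGV.Group, Kind: kind}
	// Each returned mapping carries the resource name the scale client needs
	// (e.g. "deployments" in group "apps").
	return mapper.RESTMappings(targetGK)
}
```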
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Make HPA tolerance a flag
**What this PR does / why we need it**:
Make HPA tolerance configurable as a flag. This change allows us to use
different tolerance values in production/testing.
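In effect, the tolerance is a dead band around a usage ratio of 1.0. A simplified sketch of the rule (not the controller's exact code; the historical default tolerance is 0.1):
```go
package main

import "math"

// desiredReplicas scales the replica count with the ratio of observed to
// target utilization, unless the ratio falls inside the tolerance band
// around 1.0, in which case the current count is kept.
func desiredReplicas(currentReplicas int32, currentUtilization, targetUtilization, tolerance float64) int32 {
	usageRatio := currentUtilization / targetUtilization
	if math.Abs(1.0-usageRatio) <= tolerance {
		// The change is too small to act on.
		return currentReplicas
	}
	return int32(math.Ceil(usageRatio * float64(currentReplicas)))
}
```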
**Which issue this PR fixes**:
Fixes #18155
**Release note:**
```release-note
Control HPA tolerance through the `horizontal-pod-autoscaler-tolerance` flag.
```
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
Fix #53670
Fix a bug where `desiredReplicas` could be greater than `maxReplicas`
if the originally computed `desiredReplicas` exceeded `scaleUpLimit` and
`scaleUpLimit` exceeded `maxReplicas`. Previously, when that happened, we
would scale up to `scaleUpLimit` and then, in the next auto-scaling run,
scale down to `maxReplicas`. Address this issue and introduce a regression
test.
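As a sketch, the regression check can be expressed against the hypothetical `normalizeDesiredReplicas` helper sketched earlier in this log:
```go
package podautoscaler

import "testing"

func TestDesiredReplicasNeverExceedsMax(t *testing.T) {
	// Even when both the raw recommendation and the scale-up limit exceed
	// maxReplicas, the normalized value must be capped at maxReplicas.
	const (
		currentReplicas = 10
		minReplicas     = 1
		maxReplicas     = 15
		rawDesired      = 50 // > scaleUpLimit (2*current = 20) > maxReplicas
	)
	got := normalizeDesiredReplicas(currentReplicas, rawDesired, minReplicas, maxReplicas)
	if got > maxReplicas {
		t.Fatalf("normalized desiredReplicas = %d, want at most maxReplicas (%d)", got, maxReplicas)
	}
}
```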
Previously,
`pkg/controller/podautoscaler.UnsafeConvertToVersion` was
exported. However, it was never used outside of the `podautoscaler`
package. Make it private to prevent confusion.
Additionally, move the two private functions in `horizontal.go` down
with the other private functions at the bottom of the file; in my
opinion this is more readable than having them at the top of the file,
before the public type and function definitions.
Fix #18155
Make HPA tolerance configurable as a flag. This change allows us to use
different tolerance values in production/testing.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
Resolve the remaining `golint` errors in the
`pkg/controller/podautoscaler` package by improving the documentation
in `horizontal.go`. Now all public methods have comments.
Fix #49256
When `ScalingLimited` is true for the HPA, an accompanying
status message describes why scaling is limited. Previously, if the
desired replica count was 0 and `spec.minReplicas` was greater than 0,
the status message indicated "the desired replica count was less than
the min replica count". This was particularly confusing when
`spec.minReplicas` was 1. If `spec.minReplicas` was unset, the status
message instead indicated "the desired replica count was zero", which
is more informative.
Update the calculation of the status message so that if the desired
replica count is 0, we always display the clearer "the desired replica
count was zero" message, even when `spec.minReplicas` is greater than 0.
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
This commit only sends updates if the status has actually changed.
Since the HPA runs at a regular interval, this should reduce the volume
of writes, especially on short HPA intervals with relatively constant
metrics.
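The check itself can be as small as a semantic deep-equality comparison; a sketch (shown against the later v2beta1 types purely for illustration):
```go
package main

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2beta1"
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// statusChanged reports whether the freshly computed status differs from the
// stored one; the controller skips the API write when it does not.
func statusChanged(oldStatus, newStatus *autoscalingv2.HorizontalPodAutoscalerStatus) bool {
	return !apiequality.Semantic.DeepEqual(oldStatus, newStatus)
}
```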
This commit causes the HPA controller to set a variety of status
conditions using the new `Status.Conditions` field of
autoscaling/v2alpha1. These provide insight into the current state
of the HPA, and generally correspond to similar events being emitted.
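This implies a small helper for maintaining a single condition; a sketch of one plausible shape (types shown from the later v2beta1 API, while the commit itself targeted v2alpha1):
```go
package main

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2beta1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setCondition adds or updates one condition on the HPA's status, bumping
// LastTransitionTime only when the condition's status actually flips.
func setCondition(hpa *autoscalingv2.HorizontalPodAutoscaler, condType autoscalingv2.HorizontalPodAutoscalerConditionType, status v1.ConditionStatus, reason, message string) {
	newCond := autoscalingv2.HorizontalPodAutoscalerCondition{
		Type:               condType,
		Status:             status,
		Reason:             reason,
		Message:            message,
		LastTransitionTime: metav1.Now(),
	}
	for i, cond := range hpa.Status.Conditions {
		if cond.Type != condType {
			continue
		}
		if cond.Status == newCond.Status {
			// Same status as before: preserve the original transition time.
			newCond.LastTransitionTime = cond.LastTransitionTime
		}
		hpa.Status.Conditions[i] = newCond
		return
	}
	hpa.Status.Conditions = append(hpa.Status.Conditions, newCond)
}
```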
Automatic merge from submit-queue (batch tested with PRs 44862, 42241, 42101, 43181, 44147)
Feature/hpa upscale downscale delay configurable
**What this PR does / why we need it**:
Makes "upscale forbidden window" and "downscale forbidden window" duration configurable in arguments of kube-controller-manager. Those are options of horizontal pod autoscaler.
**Special notes for your reviewer**:
Please have a look @DirectXMan12 , the PR as discussed in Slack.
**Release note**:
```
Make "upscale forbidden window" and "downscale forbidden window" duration configurable in arguments of kube-controller-manager. Those are options of horizontal pod autoscaler. Right now are hardcoded 3 minutes for upscale, and 5 minutes to downscale. But sometimes cluster administrator might want to change this for his own needs.
```
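A sketch of the forbidden-window check these options control (helper name and exact behaviour are illustrative, not the final implementation):
```go
package main

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// shouldRescale applies the configurable forbidden windows: after a scale
// event, the HPA must wait upWindow before scaling up again and downWindow
// before scaling down again.
func shouldRescale(lastScaleTime *metav1.Time, currentReplicas, desiredReplicas int32, now time.Time, upWindow, downWindow time.Duration) bool {
	if desiredReplicas == currentReplicas {
		return false
	}
	if lastScaleTime == nil {
		// Never scaled before: no window applies.
		return true
	}
	if desiredReplicas > currentReplicas {
		return lastScaleTime.Add(upWindow).Before(now)
	}
	return lastScaleTime.Add(downWindow).Before(now)
}
```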
Automatic merge from submit-queue
small code improvements and fix some typos
Signed-off-by: bruceauyeung <ouyang.qinhua@zte.com.cn>
**What this PR does / why we need it**:
1. eliminate unnecessary extra scale up limit calculation
2. eliminate unnecessary extra pods length computation
3. fix some typos
Since the HPA controller pulls information from an external source that
makes no guarantees about consistency, it's possible for the HPA
to get into an infinite update loop -- if the metrics change with
every query, the HPA controller will run its normal reconciliation,
post a status update, see that status update itself, fetch new metrics,
and if those metrics are different, post another status update, and
repeat. This can lead to continuously updating a single HPA.
By rate-limiting each HPA to once per sync interval, we prevent this
from happening.
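One way to express this is a per-item rate limiter with a fixed interval plugged into the controller's workqueue; a sketch (the concrete type used by the controller may be named and wired differently):
```go
package main

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

// fixedItemIntervalRateLimiter delays every item by the same fixed interval,
// no matter how many times it is re-added. Used for the HPA workqueue, it
// means each HPA is reconciled at most once per sync period even if its own
// status writes keep re-triggering the informer.
type fixedItemIntervalRateLimiter struct {
	interval time.Duration
}

var _ workqueue.RateLimiter = &fixedItemIntervalRateLimiter{}

func (r *fixedItemIntervalRateLimiter) When(item interface{}) time.Duration { return r.interval }
func (r *fixedItemIntervalRateLimiter) Forget(item interface{})             {}
func (r *fixedItemIntervalRateLimiter) NumRequeues(item interface{}) int    { return 0 }
```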
Conversions can mutate the underlying object (and ours were).
Make a deepcopy before our first conversion at the very start
of the reconciler method in order to avoid mutating the shared
informer cache during conversion.
Fixes #41768
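The pattern is simply to copy before converting; a minimal sketch, using the generated `DeepCopy` method for brevity (the original fix may have used the scheme's deep-copy helper instead):
```go
package main

import autoscalingv1 "k8s.io/api/autoscaling/v1"

// copyForReconcile returns a private deep copy of an HPA obtained from the
// shared informer cache, so that the conversion at the start of
// reconcileAutoscaler never mutates the cached object that other consumers
// also read.
func copyForReconcile(shared *autoscalingv1.HorizontalPodAutoscaler) *autoscalingv1.HorizontalPodAutoscaler {
	return shared.DeepCopy()
}
```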
This commit converts the HPA controller over to using the new version of
the HorizontalPodAutoscaler object found in autoscaling/v2alpha1. Note
that while the autoscaler will accept requests for object metrics, the
metrics client will return an error on attempts to get object metrics
(since that requires the new custom metrics API, which is not yet
implemented).
This also enables the HPA object in v2alpha1 as a retrievable API
version by default.
Automatic merge from submit-queue
rescale immediately if the basic constraints are not satisfied
Refactor `reconcileAutoscaler`.
If the basic constraints are not satisfied, we should rescale the target ref immediately.
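A sketch of that fast path (a hypothetical helper; the real refactor keeps this logic inline in `reconcileAutoscaler` and handles more cases):
```go
package main

// basicConstraintRescale returns the replica count to jump to, and true, when
// the target is already outside its configured bounds, so the controller can
// rescale immediately without consulting metrics.
func basicConstraintRescale(currentReplicas, minReplicas, maxReplicas int32) (int32, bool) {
	if currentReplicas > maxReplicas {
		return maxReplicas, true
	}
	if currentReplicas < minReplicas {
		return minReplicas, true
	}
	// Within bounds: fall through to the metrics-based calculation.
	return currentReplicas, false
}
```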
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and to ignore them when scaling down. If, when
scaling up, factoring the unready pods in as having 0 CPU would cause a
downscale instead, we choose not to scale. Otherwise, we scale up by
the reduced amount calculated by factoring the pods in at zero CPU
usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
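A simplified sketch of the adjustment described above (the real calculation works per metric over concrete pod sets; the names and arithmetic here are illustrative only):
```go
package main

// adjustedUsageRatio folds unready and metric-less ("missing") pods into the
// usage ratio. sumUsage is the summed metric over ready pods that reported a
// metric; targetPerPod is the per-pod target value.
func adjustedUsageRatio(sumUsage, targetPerPod float64, readyWithMetrics, unready, missing int) (ratio float64, shouldScale bool) {
	rawRatio := sumUsage / (targetPerPod * float64(readyWithMetrics))

	var adjustedSum float64
	var podCount int
	if rawRatio > 1.0 {
		// Scale-up candidate: treat unready and missing pods as using 0,
		// but keep them in the pod count, which dampens the scale-up.
		adjustedSum = sumUsage
		podCount = readyWithMetrics + unready + missing
	} else {
		// Scale-down candidate: treat missing pods as using 100% of the
		// target and ignore unready pods, which dampens the scale-down.
		adjustedSum = sumUsage + targetPerPod*float64(missing)
		podCount = readyWithMetrics + missing
	}

	ratio = adjustedSum / (targetPerPod * float64(podCount))
	// If the adjustment would flip the direction of the decision, don't scale.
	shouldScale = (rawRatio > 1.0) == (ratio > 1.0)
	return ratio, shouldScale
}
```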
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA will make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics go to which pods if we get superfluous
metrics, so we abort the scale.