HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and to ignore them when scaling down. If, when
scaling up, factoring in the unready pods at 0 CPU would cause a
downscale instead, we simply choose not to scale. Otherwise, we
scale up by the reduced amount calculated by factoring the pods in at
zero CPU usage.
The effect is that unready pods make the autoscaler a bit more
conservative -- large increases in CPU usage can still trigger scale-ups,
even with unready pods in the mix, but the resulting scale factors will
not be as large, in anticipation of the new pods later becoming ready
and handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA can make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics belong to which pods if we get superfluous
metrics, so we abort the scale.
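To make the arithmetic concrete, here is a minimal standalone sketch of the
policy described above. The function name, signature, and numbers are
illustrative assumptions, not the actual ReplicaCalculator code; it assumes at
least one ready pod with metrics, per-pod usage expressed as a fraction of the
pod's request, and a current replica count equal to ready + unready +
metric-less pods.

package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches the adjustment described above (illustrative only).
// readyUsage holds per-pod usage as a fraction of request for pods that are
// ready and have metrics; unready and missing count the remaining pods.
func desiredReplicas(readyUsage []float64, unready, missing int, target float64) int {
	current := len(readyUsage) + unready + missing

	sum := 0.0
	for _, u := range readyUsage {
		sum += u
	}
	// The ratio over ready pods alone decides the direction: >1 up, <1 down.
	ratio := sum / (float64(len(readyUsage)) * target)

	if ratio > 1.0 {
		// Scaling up: factor unready and metric-less pods in at 0% usage.
		adjusted := sum / (float64(current) * target)
		if adjusted <= 1.0 {
			return current // the adjustment may damp a scale-up, never reverse it
		}
		return int(math.Ceil(adjusted * float64(current)))
	}

	// Scaling down: ignore unready pods; treat metric-less pods as at 100%.
	pods := len(readyUsage) + missing
	adjusted := (sum + float64(missing)*target) / (float64(pods) * target)
	if adjusted >= 1.0 {
		return current // likewise, missing metrics never flip the direction
	}
	return int(math.Ceil(adjusted * float64(pods)))
}

func main() {
	// Three ready pods at 80% usage against a 40% target, plus two unready
	// pods: the raw ratio (2.0) would suggest 10 replicas, but refilling the
	// unready pods at 0% damps the result to 6.
	fmt.Println(desiredReplicas([]float64{0.8, 0.8, 0.8}, 2, 0, 0.4))
}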
/*
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package podautoscaler
import (
	"fmt"
	"math"
	"testing"
	"time"

	autoscalingv2 "k8s.io/api/autoscaling/v2beta1"
	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes/fake"
	core "k8s.io/client-go/testing"
	"k8s.io/kubernetes/pkg/api/legacyscheme"
	"k8s.io/kubernetes/pkg/controller/podautoscaler/metrics"
	cmapi "k8s.io/metrics/pkg/apis/custom_metrics/v1beta1"
	emapi "k8s.io/metrics/pkg/apis/external_metrics/v1beta1"
	metricsapi "k8s.io/metrics/pkg/apis/metrics/v1beta1"
	metricsfake "k8s.io/metrics/pkg/client/clientset_generated/clientset/fake"
	cmfake "k8s.io/metrics/pkg/client/custom_metrics/fake"
	emfake "k8s.io/metrics/pkg/client/external_metrics/fake"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
type resourceInfo struct {
	name     v1.ResourceName
	requests []resource.Quantity
	levels   []int64
	// only applies to pod names returned from "heapster"
	podNames []string

	targetUtilization   int32
	expectedUtilization int32
	expectedValue       int64
}
type metricInfo struct {
	name         string
	levels       []int64
	singleObject *autoscalingv2.CrossVersionObjectReference
	selector     *metav1.LabelSelector

	targetUtilization       int64
	perPodTargetUtilization int64
	expectedUtilization     int64
}
type replicaCalcTestCase struct {
	currentReplicas  int32
	expectedReplicas int32
	expectedError    error

	timestamp time.Time

	resource *resourceInfo
	metric   *metricInfo

	podReadiness []v1.ConditionStatus
	podPhase     []v1.PodPhase
}
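// Illustrative example (not one of the original cases): a scale-up damped by
// one unready pod. Each pod has two containers, so a "1.0" request per
// container gives a 2000m pod request; the ready pods report 700m per
// container (70% utilization) against a 30% target:
//
//	replicaCalcTestCase{
//		currentReplicas:  3,
//		expectedReplicas: 5,
//		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
//		resource: &resourceInfo{
//			name:                v1.ResourceCPU,
//			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
//			levels:              []int64{100, 700, 700},
//			targetUtilization:   30,
//			expectedUtilization: 70,
//			expectedValue:       numContainersPerPod * 700,
//		},
//	}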
const (
	testNamespace       = "test-namespace"
	podNamePrefix       = "test-pod"
	numContainersPerPod = 2
)
func (tc *replicaCalcTestCase) prepareTestClient(t *testing.T) (*fake.Clientset, *metricsfake.Clientset, *cmfake.FakeCustomMetricsClient, *emfake.FakeExternalMetricsClient) {
	fakeClient := &fake.Clientset{}
	fakeClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		obj := &v1.PodList{}
		podsCount := int(tc.currentReplicas)
		// Failed pods are not included in tc.currentReplicas
		if tc.podPhase != nil && len(tc.podPhase) > podsCount {
			podsCount = len(tc.podPhase)
		}
		for i := 0; i < podsCount; i++ {
			podReadiness := v1.ConditionTrue
			if tc.podReadiness != nil && i < len(tc.podReadiness) {
				podReadiness = tc.podReadiness[i]
			}
			podPhase := v1.PodRunning
			if tc.podPhase != nil {
				podPhase = tc.podPhase[i]
			}
			podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
			pod := v1.Pod{
				Status: v1.PodStatus{
					Phase: podPhase,
					Conditions: []v1.PodCondition{
						{
							Type:   v1.PodReady,
							Status: podReadiness,
						},
					},
				},
				ObjectMeta: metav1.ObjectMeta{
					Name:      podName,
					Namespace: testNamespace,
					Labels: map[string]string{
						"name": podNamePrefix,
					},
				},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{}, {}},
				},
			}

			if tc.resource != nil && i < len(tc.resource.requests) {
				pod.Spec.Containers[0].Resources = v1.ResourceRequirements{
					Requests: v1.ResourceList{
						tc.resource.name: tc.resource.requests[i],
					},
				}
				pod.Spec.Containers[1].Resources = v1.ResourceRequirements{
					Requests: v1.ResourceList{
						tc.resource.name: tc.resource.requests[i],
					},
				}
			}
			obj.Items = append(obj.Items, pod)
		}
		return true, obj, nil
	})
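	// Illustrative usage (not in the original file): listing pods through the
	// fake client exercises the reactor above and yields the synthetic pods,
	// e.g.
	//
	//	pods, _ := fakeClient.Core().Pods(testNamespace).List(metav1.ListOptions{})
	//	// len(pods.Items) == podsCount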
	fakeMetricsClient := &metricsfake.Clientset{}
	// NB: we have to sound like Gollum due to gengo's inability to handle already-plural resource names
	fakeMetricsClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		if tc.resource != nil {
			metrics := &metricsapi.PodMetricsList{}
			for i, resValue := range tc.resource.levels {
				podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
				if len(tc.resource.podNames) > i {
					podName = tc.resource.podNames[i]
				}
				// NB: the list reactor actually does label selector filtering for us,
				// so we have to make sure our results match the label selector
				podMetric := metricsapi.PodMetrics{
					ObjectMeta: metav1.ObjectMeta{
						Name:      podName,
						Namespace: testNamespace,
						Labels:    map[string]string{"name": podNamePrefix},
					},
					Timestamp:  metav1.Time{Time: tc.timestamp},
					Containers: make([]metricsapi.ContainerMetrics, numContainersPerPod),
				}

				for i := 0; i < numContainersPerPod; i++ {
					podMetric.Containers[i] = metricsapi.ContainerMetrics{
						Name: fmt.Sprintf("container%v", i),
						Usage: v1.ResourceList{
							v1.ResourceName(tc.resource.name): *resource.NewMilliQuantity(
								int64(resValue),
								resource.DecimalSI),
						},
					}
				}
				metrics.Items = append(metrics.Items, podMetric)
			}
			return true, metrics, nil
		}

		return true, nil, fmt.Errorf("no pod resource metrics specified in test client")
	})
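	// NB: "PodMetricses" is the Gollum-style plural the comment above alludes
	// to; the REST metrics client reaches this reactor through calls of the
	// form testMetricsClient.MetricsV1beta1().PodMetricses(namespace).List(...)
	// (an illustrative pointer, not code from the original file).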
	fakeCMClient := &cmfake.FakeCustomMetricsClient{}
	fakeCMClient.AddReactor("get", "*", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		getForAction, wasGetFor := action.(cmfake.GetForAction)
		if !wasGetFor {
			return true, nil, fmt.Errorf("expected a get-for action, got %v instead", action)
		}

		if tc.metric == nil {
			return true, nil, fmt.Errorf("no custom metrics specified in test client")
		}

		assert.Equal(t, tc.metric.name, getForAction.GetMetricName(), "the metric requested should have matched the one specified")

		if getForAction.GetName() == "*" {
			metrics := cmapi.MetricValueList{}

			// multiple objects
			assert.Equal(t, "pods", getForAction.GetResource().Resource, "the type of object that we requested multiple metrics for should have been pods")

			for i, level := range tc.metric.levels {
				podMetric := cmapi.MetricValue{
					DescribedObject: v1.ObjectReference{
						Kind:      "Pod",
						Name:      fmt.Sprintf("%s-%d", podNamePrefix, i),
						Namespace: testNamespace,
					},
					Timestamp:  metav1.Time{Time: tc.timestamp},
					MetricName: tc.metric.name,
					Value:      *resource.NewMilliQuantity(level, resource.DecimalSI),
				}
				metrics.Items = append(metrics.Items, podMetric)
			}

			return true, &metrics, nil
		}

		name := getForAction.GetName()
		mapper := legacyscheme.Registry.RESTMapper()
		metrics := &cmapi.MetricValueList{}
		assert.NotNil(t, tc.metric.singleObject, "should have only requested a single-object metric when calling GetObjectMetricReplicas")
		gk := schema.FromAPIVersionAndKind(tc.metric.singleObject.APIVersion, tc.metric.singleObject.Kind).GroupKind()
		mapping, err := mapper.RESTMapping(gk)
		if err != nil {
			return true, nil, fmt.Errorf("unable to get mapping for %s: %v", gk.String(), err)
		}
		groupResource := schema.GroupResource{Group: mapping.GroupVersionKind.Group, Resource: mapping.Resource}

		assert.Equal(t, groupResource.String(), getForAction.GetResource().Resource, "should have requested metrics for the resource matching the GroupKind passed in")
		assert.Equal(t, tc.metric.singleObject.Name, name, "should have requested metrics for the object matching the name passed in")

		metrics.Items = []cmapi.MetricValue{
			{
				DescribedObject: v1.ObjectReference{
					Kind:       tc.metric.singleObject.Kind,
					APIVersion: tc.metric.singleObject.APIVersion,
					Name:       name,
				},
				Timestamp:  metav1.Time{Time: tc.timestamp},
				MetricName: tc.metric.name,
				Value:      *resource.NewMilliQuantity(int64(tc.metric.levels[0]), resource.DecimalSI),
			},
		}

		return true, metrics, nil
	})
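	// The single-object branch above backs ReplicaCalculator.GetObjectMetricReplicas
	// (named in the assert message): it returns one metric value for a named
	// object rather than one value per pod.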
	fakeEMClient := &emfake.FakeExternalMetricsClient{}
	fakeEMClient.AddReactor("list", "*", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		listAction, wasList := action.(core.ListAction)
		if !wasList {
			return true, nil, fmt.Errorf("expected a list-for action, got %v instead", action)
		}

		if tc.metric == nil {
			return true, nil, fmt.Errorf("no external metrics specified in test client")
		}

		assert.Equal(t, tc.metric.name, listAction.GetResource().Resource, "the metric requested should have matched the one specified")

		selector, err := metav1.LabelSelectorAsSelector(tc.metric.selector)
		if err != nil {
			return true, nil, fmt.Errorf("failed to convert label selector specified in test client")
		}
		assert.Equal(t, selector, listAction.GetListRestrictions().Labels, "the metric selector should have matched the one specified")

		metrics := emapi.ExternalMetricValueList{}
		for _, level := range tc.metric.levels {
			metric := emapi.ExternalMetricValue{
				Timestamp:  metav1.Time{Time: tc.timestamp},
				MetricName: tc.metric.name,
				Value:      *resource.NewMilliQuantity(level, resource.DecimalSI),
			}
			metrics.Items = append(metrics.Items, metric)
		}
		return true, &metrics, nil
	})

	return fakeClient, fakeMetricsClient, fakeCMClient, fakeEMClient
}

func (tc *replicaCalcTestCase) runTest(t *testing.T) {
	testClient, testMetricsClient, testCMClient, testEMClient := tc.prepareTestClient(t)
	metricsClient := metrics.NewRESTMetricsClient(testMetricsClient.MetricsV1beta1(), testCMClient, testEMClient)
	replicaCalc := &ReplicaCalculator{
		metricsClient: metricsClient,
		podsGetter:    testClient.Core(),
		tolerance:     defaultTestingTolerance,
	}
	selector, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
		MatchLabels: map[string]string{"name": podNamePrefix},
	})
	if err != nil {
		require.Nil(t, err, "something went horribly wrong...")
	}

	if tc.resource != nil {
		outReplicas, outUtilization, outRawValue, outTimestamp, err := replicaCalc.GetResourceReplicas(tc.currentReplicas, tc.resource.targetUtilization, tc.resource.name, testNamespace, selector)
		if tc.expectedError != nil {
			require.Error(t, err, "there should be an error calculating the replica count")
			assert.Contains(t, err.Error(), tc.expectedError.Error(), "the error message should have contained the expected error message")
			return
		}
		require.NoError(t, err, "there should not have been an error calculating the replica count")
		assert.Equal(t, tc.expectedReplicas, outReplicas, "replicas should be as expected")
		assert.Equal(t, tc.resource.expectedUtilization, outUtilization, "utilization should be as expected")
		assert.Equal(t, tc.resource.expectedValue, outRawValue, "raw value should be as expected")
		assert.True(t, tc.timestamp.Equal(outTimestamp), "timestamp should be as expected")
	} else {
		var outReplicas int32
		var outUtilization int64
		var outTimestamp time.Time
		var err error
		if tc.metric.singleObject != nil {
			outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetObjectMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, tc.metric.singleObject)
		} else if tc.metric.selector != nil {
			if tc.metric.targetUtilization > 0 {
				outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetExternalMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, tc.metric.selector)
			} else if tc.metric.perPodTargetUtilization > 0 {
				outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetExternalPerPodMetricReplicas(tc.currentReplicas, tc.metric.perPodTargetUtilization, tc.metric.name, testNamespace, tc.metric.selector)
			}
		} else {
			outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, selector)
		}
		if tc.expectedError != nil {
			require.Error(t, err, "there should be an error calculating the replica count")
			assert.Contains(t, err.Error(), tc.expectedError.Error(), "the error message should have contained the expected error message")
			return
		}
		require.NoError(t, err, "there should not have been an error calculating the replica count")
		assert.Equal(t, tc.expectedReplicas, outReplicas, "replicas should be as expected")
		assert.Equal(t, tc.metric.expectedUtilization, outUtilization, "utilization should be as expected")
		assert.True(t, tc.timestamp.Equal(outTimestamp), "timestamp should be as expected")
	}
}
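
// TestReplicaCalcDisjointResourcesMetrics verifies that a metrics response
// naming only pods the calculator does not know about is rejected with an
// error rather than used for a scaling decision.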
func TestReplicaCalcDisjointResourcesMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 1,
		expectedError:   fmt.Errorf("no metrics returned matched known pods"),
		resource: &resourceInfo{
			name:              v1.ResourceCPU,
			requests:          []resource.Quantity{resource.MustParse("1.0")},
			levels:            []int64{100},
			podNames:          []string{"an-older-pod-name"},
			targetUtilization: 100,
		},
	}
	tc.runTest(t)
}
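
// TestReplicaCalcScaleUp: three ready pods average 50% of their CPU request
// against a 30% target, so the calculator returns ceil(3 * 50/30) = 5.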
func TestReplicaCalcScaleUp(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 5,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:              []int64{300, 500, 700},
			targetUtilization:   30,
			expectedUtilization: 50,
			expectedValue:       numContainersPerPod * 500,
		},
	}
	tc.runTest(t)
}
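
// TestReplicaCalcScaleUpUnreadyLessScale: the unready pod's usage is dropped
// and the pod is assumed to contribute zero load. The two ready pods run at
// 60% against a 30% target, so the scale-up is capped at ceil(2 * 60/30) = 4
// rather than the 5 that counting all three pods would give.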
func TestReplicaCalcScaleUpUnreadyLessScale(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:              []int64{300, 500, 700},
			targetUtilization:   30,
			expectedUtilization: 60,
			expectedValue:       numContainersPerPod * 600,
		},
	}
	tc.runTest(t)
}
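
// TestReplicaCalcScaleUpUnreadyNoScale: the lone ready pod runs at 40%
// against a 30% target, but factoring the two unready pods in at zero usage
// would turn the result into a scale-down, so the count stays at 3.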
func TestReplicaCalcScaleUpUnreadyNoScale(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		podReadiness:     []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:              []int64{400, 500, 700},
			targetUtilization:   30,
			expectedUtilization: 40,
			expectedValue:       numContainersPerPod * 400,
		},
	}
	tc.runTest(t)
}
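
// TestReplicaCalcScaleUpIgnoresFailedPods: failed pods are excluded from the
// calculation entirely; the two running pods at 60% usage against a 30%
// target double the replica count from 2 to 4.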
func TestReplicaCalcScaleUpIgnoresFailedPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  2,
		expectedReplicas: 4,
		podReadiness:     []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		podPhase:         []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodFailed, v1.PodFailed},
		resource: &resourceInfo{
			name:                v1.ResourceCPU,
			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:              []int64{500, 700},
			targetUtilization:   30,
			expectedUtilization: 60,
			expectedValue:       numContainersPerPod * 600,
		},
	}
	tc.runTest(t)
}
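
// TestReplicaCalcScaleUpCM: the pods average (20000+10000+30000)/3 = 20000
// on the "qps" metric against a 15000 target, so ceil(3 * 20000/15000) = 4.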
func TestReplicaCalcScaleUpCM(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{20000, 10000, 30000},
			targetUtilization:   15000,
			expectedUtilization: 20000,
		},
	}
	tc.runTest(t)
}
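
// TestReplicaCalcScaleUpCMUnreadyLessScale: the unready pod's 30000 sample
// is dropped and the pod counted at zero load, so the two ready pods
// averaging 30000 against a 15000 target yield ceil(2 * 2) = 4 rather than 6.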
func TestReplicaCalcScaleUpCMUnreadyLessScale(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		podReadiness:     []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse},
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{50000, 10000, 30000},
			targetUtilization:   15000,
			expectedUtilization: 30000,
		},
	}
	tc.runTest(t)
}
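
// TestReplicaCalcScaleUpCMUnreadyNoScaleWouldScaleDown: the single ready pod
// sits exactly on the 15000 target, and counting the two unready pods at
// zero would imply a downscale, so no scale happens.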
func TestReplicaCalcScaleUpCMUnreadyNoScaleWouldScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionFalse},
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{50000, 15000, 30000},
			targetUtilization:   15000,
			expectedUtilization: 15000,
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcScaleUpCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{20000},
			targetUtilization:   15000,
			expectedUtilization: 20000,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind:       "Deployment",
				APIVersion: "extensions/v1beta1",
				Name:       "some-deployment",
			},
		},
	}
	tc.runTest(t)
}
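
// TestReplicaCalcScaleUpCMExternal: the external metric total of 8600
// against a 4400 target gives ceil(1 * 8600/4400) = 2.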
func TestReplicaCalcScaleUpCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  1,
		expectedReplicas: 2,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{8600},
			targetUtilization:   4400,
			expectedUtilization: 8600,
			selector:            &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
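
// TestReplicaCalcScaleUpCMExternalNoLabels: with no selector set, runTest
// falls through to the pods-metric path above; the single pod's 8600 against
// a 4400 target still yields 2.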
func TestReplicaCalcScaleUpCMExternalNoLabels(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  1,
		expectedReplicas: 2,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{8600},
			targetUtilization:   4400,
			expectedUtilization: 8600,
		},
	}
	tc.runTest(t)
}
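
// TestReplicaCalcScaleUpPerPodCMExternal: a per-pod external target divides
// the metric total across replicas: ceil(8600/2150) = 4, with a reported
// per-pod value of ceil(8600/3) = 2867.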
func TestReplicaCalcScaleUpPerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		metric: &metricInfo{
			name:                    "qps",
			levels:                  []int64{8600},
			perPodTargetUtilization: 2150,
			expectedUtilization:     2867,
			selector:                &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
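
// TestReplicaCalcScaleDown: average usage is 1400m/5 = 280m (28%) against a
// 50% target, so ceil(5 * 28/50) = 3.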
func TestReplicaCalcScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests:            []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:              []int64{100, 300, 500, 250, 250},
			targetUtilization:   50,
			expectedUtilization: 28,
			expectedValue:       numContainersPerPod * 280,
		},
	}
	tc.runTest(t)
}
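
// TestReplicaCalcScaleDownCM: every pod reports 12000 qps against a 20000
// target, so ceil(5 * 12000/20000) = 3.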
func TestReplicaCalcScaleDownCM(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{12000, 12000, 12000, 12000, 12000},
			targetUtilization:   20000,
			expectedUtilization: 12000,
		},
	}
	tc.runTest(t)
}
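
// TestReplicaCalcScaleDownCMObject: the object metric reads 12000 against a
// 20000 target, so ceil(5 * 12000/20000) = 3.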
func TestReplicaCalcScaleDownCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{12000},
			targetUtilization:   20000,
			expectedUtilization: 12000,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind:       "Deployment",
				APIVersion: "extensions/v1beta1",
				Name:       "some-deployment",
			},
		},
	}
	tc.runTest(t)
}
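
// TestReplicaCalcScaleDownCMExternal: the external total of 8600 against a
// 14334 target gives ceil(5 * 8600/14334) = 3.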
func TestReplicaCalcScaleDownCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{8600},
			targetUtilization:   14334,
			expectedUtilization: 8600,
			selector:            &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
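
// TestReplicaCalcScaleDownPerPodCMExternal: the per-pod view of the total is
// 8600/5 = 1720 against a per-pod target of 2867, so ceil(8600/2867) = 3.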
func TestReplicaCalcScaleDownPerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                    "qps",
			levels:                  []int64{8600},
			perPodTargetUtilization: 2867,
			expectedUtilization:     1720,
			selector:                &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
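
// TestReplicaCalcScaleDownIgnoresUnreadyPods: unready pods are ignored when
// scaling down; the three ready pods average 30% against a 50% target,
// giving ceil(3 * 30/50) = 2.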
func TestReplicaCalcScaleDownIgnoresUnreadyPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 2,
		podReadiness:     []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 300, 500, 250, 250},

			targetUtilization:   50,
			expectedUtilization: 30,
			expectedValue:       numContainersPerPod * 300,
		},
	}
	tc.runTest(t)
}
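// TestReplicaCalcScaleDownIgnoresFailedPods checks that failed pods are
// excluded from the calculation altogether: only the five running pods
// contribute metrics ((100+300+500+250+250)/5 = 280m, 28% of request),
// so the 0.56 usage ratio yields ceil(0.56*5) = 3 replicas.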
func TestReplicaCalcScaleDownIgnoresFailedPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  5,
		expectedReplicas: 3,
		podReadiness:     []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		podPhase:         []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodFailed, v1.PodFailed},
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 300, 500, 250, 250},

			targetUtilization:   50,
			expectedUtilization: 28,
			expectedValue:       numContainersPerPod * 280,
		},
	}
	tc.runTest(t)
}
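// TestReplicaCalcTolerance checks that a usage ratio within the default
// tolerance (102% observed against a 100% target, i.e. a ratio of 1.02)
// produces no replica change.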
func TestReplicaCalcTolerance(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("0.9"), resource.MustParse("1.0"), resource.MustParse("1.1")},
			levels:   []int64{1010, 1030, 1020},

			targetUtilization:   100,
			expectedUtilization: 102,
			expectedValue:       numContainersPerPod * 1020,
		},
	}
	tc.runTest(t)
}
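// TestReplicaCalcToleranceCM is the custom-metrics analogue: an average of
// 20666 qps against a 20000 qps target is within tolerance, so the replica
// count stays at 3.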
func TestReplicaCalcToleranceCM(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{20000, 21000, 21000},
			targetUtilization:   20000,
			expectedUtilization: 20666,
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcToleranceCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{20666},
			targetUtilization:   20000,
			expectedUtilization: 20666,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind:       "Deployment",
				APIVersion: "extensions/v1beta1",
				Name:       "some-deployment",
			},
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcToleranceCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                "qps",
			levels:              []int64{8600},
			targetUtilization:   8888,
			expectedUtilization: 8600,
			selector:            &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcTolerancePerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name:                    "qps",
			levels:                  []int64{8600},
			perPodTargetUtilization: 2900,
			expectedUtilization:     2867,
			selector:                &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
		},
	}
	tc.runTest(t)
}
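// TestReplicaCalcSuperfluousMetrics checks that extra CPU samples with no
// matching pod (the trailing 3200m and 2000m here) are discarded: the four
// matched pods average 5875m (587% of request), so the calculator still
// scales to ceil(5.875*4) = 24 replicas.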
func TestReplicaCalcSuperfluousMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  4,
		expectedReplicas: 24,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{4000, 9500, 3000, 7000, 3200, 2000},

			targetUtilization:   100,
			expectedUtilization: 587,
			expectedValue:       numContainersPerPod * 5875,
		},
	}
	tc.runTest(t)
}
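// TestReplicaCalcMissingMetrics checks the scale-down rebalance for pods with
// no metrics: the two reporting pods average 247.5m (24% of request), and the
// two missing pods are assumed to use 100% of their request, damping the
// scale-down to ceil(((400+95+1000+1000)/4000)*4) = 3 replicas.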
func TestReplicaCalcMissingMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  4,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{400, 95},

			targetUtilization:   100,
			expectedUtilization: 24,
			expectedValue:       495, // numContainersPerPod * 247, for sufficiently large values of 247
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcEmptyMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 4,
		expectedError:   fmt.Errorf("unable to get metrics for resource cpu: no metrics returned from resource metrics API"),
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{},

			targetUtilization: 100,
		},
	}
	tc.runTest(t)
}
func TestReplicaCalcEmptyCPURequest(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 1,
		expectedError:   fmt.Errorf("missing request for"),
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{},
			levels:   []int64{200},

			targetUtilization: 100,
		},
	}
	tc.runTest(t)
}
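// The next three tests pin down the no-change behavior when one of two pods
// is missing metrics. Here the reporting pod sits exactly at the target
// (1000m = 100% of request), so the replica count is unchanged.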
func TestReplicaCalcMissingMetricsNoChangeEq(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{1000},

			targetUtilization:   100,
			expectedUtilization: 100,
			expectedValue:       numContainersPerPod * 1000,
		},
	}
	tc.runTest(t)
}
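// With the reporting pod well above target (190%), the apparent scale-up is
// re-checked with the missing pod counted at 0% usage; the rebalanced average
// of (1900+0)/2 = 950m (95%) flips the direction, so no scaling occurs.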
func TestReplicaCalcMissingMetricsNoChangeGt(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{1900},

			targetUtilization:   100,
			expectedUtilization: 190,
			expectedValue:       numContainersPerPod * 1900,
		},
	}
	tc.runTest(t)
}
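// With the reporting pod below target (60%), the missing pod is counted at
// 100% of its request; the rebalanced average of (600+1000)/2 = 800m (80%)
// still rounds back up to ceil(0.8*2) = 2, so the count is unchanged.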
func TestReplicaCalcMissingMetricsNoChangeLt(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{600},

			targetUtilization:   100,
			expectedUtilization: 60,
			expectedValue:       numContainersPerPod * 600,
		},
	}
	tc.runTest(t)
}
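// TestReplicaCalcMissingMetricsUnreadyNoChange combines an unready pod with a
// missing metric: the sole ready, reporting pod reads 450m (45% against a 50%
// target), and filling the missing pod in at 100% of request would flip the
// direction of the scale, so the calculator holds at 3 replicas.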
func TestReplicaCalcMissingMetricsUnreadyNoChange(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 3,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 450},

			targetUtilization:   50,
			expectedUtilization: 45,
			expectedValue:       numContainersPerPod * 450,
		},
	}
	tc.runTest(t)
}
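// TestReplicaCalcMissingMetricsUnreadyScaleUp: the ready, reporting pod reads
// 2000m (200% against a 50% target). Counting the unready and missing pods at
// 0% reduces the average to 2000/3 ≈ 667m (a ratio of about 1.33), which
// still points up, so the calculator scales to ceil(1.33*3) = 4 replicas.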
func TestReplicaCalcMissingMetricsUnreadyScaleUp(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 2000},

			targetUtilization:   50,
			expectedUtilization: 200,
			expectedValue:       numContainersPerPod * 2000,
		},
	}
	tc.runTest(t)
}
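// TestReplicaCalcMissingMetricsUnreadyScaleDown: the unready pod is ignored,
// the two ready reporting pods average 100m (10% against a 50% target), and
// the missing pod is counted at 100% of request; the rebalanced average of
// (100+100+1000)/3 = 400m gives a ratio of 0.8, i.e. ceil(0.8*3) = 3 replicas.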
func TestReplicaCalcMissingMetricsUnreadyScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  4,
		expectedReplicas: 3,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 100, 100},

			targetUtilization:   50,
			expectedUtilization: 10,
			expectedValue:       numContainersPerPod * 100,
		},
	}
	tc.runTest(t)
}
// TestReplicaCalcComputedToleranceAlgImplementation is a regression test which
// back-calculates a minimal percentage for downscaling based on a small percentage
// increase in pod utilization which is calibrated against the tolerance value.
func TestReplicaCalcComputedToleranceAlgImplementation(t *testing.T) {
	startPods := int32(10)
	// 150 mCPU per pod.
	totalUsedCPUOfAllPods := int64(startPods * 150)
	// Each pod starts out asking for 2X what is really needed.
	// This means we will have a 50% ratio of used/requested.
	totalRequestedCPUOfAllPods := int32(2 * totalUsedCPUOfAllPods)
	requestedToUsed := float64(totalRequestedCPUOfAllPods) / float64(totalUsedCPUOfAllPods)
	// Spread the amount we ask over 10 pods. We can add some jitter later in reportedLevels.
	perPodRequested := totalRequestedCPUOfAllPods / startPods
	// Force a minimal scaling event by satisfying (tolerance < 1 - resourcesUsedRatio).
	target := math.Abs(1/(requestedToUsed*(1-defaultTestingTolerance))) + .01
	finalCPUPercentTarget := int32(target * 100)
	resourcesUsedRatio := float64(totalUsedCPUOfAllPods) / (float64(totalRequestedCPUOfAllPods) * target)
	// i.e. the scaled-down expectation: ceil(resourcesUsedRatio * startPods).
	finalPods := int32(math.Ceil(resourcesUsedRatio * float64(startPods)))
	// To breach the tolerance, the utilization ratio's distance from 1.0 must exceed the tolerance value.
	tc := replicaCalcTestCase{
		currentReplicas:  startPods,
		expectedReplicas: finalPods,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			levels: []int64{
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
			},
			requests: []resource.Quantity{
				resource.MustParse(fmt.Sprint(perPodRequested+100) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-100) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+10) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-10) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+2) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-2) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+1) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-1) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested) + "m"),
			},
			targetUtilization: finalCPUPercentTarget,
			expectedUtilization: int32(totalUsedCPUOfAllPods*100) / totalRequestedCPUOfAllPods,
			expectedValue:       numContainersPerPod * totalUsedCPUOfAllPods / 10,
		},
	}
	tc.runTest(t)

	// Reuse the data structure above, now testing "unscaling".
	// Check that no scaling happens when we sit within a very close margin of the tolerance.
	target = math.Abs(1/(requestedToUsed*(1-defaultTestingTolerance))) + .004
	finalCPUPercentTarget = int32(target * 100)
	tc.resource.targetUtilization = finalCPUPercentTarget
	tc.currentReplicas = startPods
	tc.expectedReplicas = startPods
	tc.runTest(t)
}
// TODO: add more tests
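// One candidate, sketched against the same harness (the test name and the
// expected values below are illustrative assumptions, not part of the existing
// suite): a scale-up driven by ready pods should be damped, but not reversed,
// by an unready pod counted at 0% usage. The ready pods average 1800m (180%
// against a 100% target); rebalancing with the unready pod at zero gives
// (0+1500+2100)/3 = 1200m, a ratio of 1.2, so we would still expect
// ceil(1.2*3) = 4 replicas.
func TestReplicaCalcScaleUpDampedByUnreadyPod(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 4,
		podReadiness:     []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{100, 1500, 2100},

			targetUtilization:   100,
			expectedUtilization: 180,
			expectedValue:       numContainersPerPod * 1800,
		},
	}
	tc.runTest(t)
}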