HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and ignores them when scaling down. If, when
scaling up, factoring the unready pods as having 0 CPU would cause a
downscale instead, we simply choose not to scale. Otherwise, we simply
scale up at the reduced amount calculated by factoring the pods in at
zero CPU usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA will make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics go to which pods if we get superfluous
metrics, so we abort the scale.
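To make the arithmetic concrete, here is a minimal standalone sketch of that adjustment (illustrative only -- desiredReplicas, its inputs, and the rounding are not the controller's actual code). Per-pod usage is expressed as a fraction of the pod's request, and the direction of the scale is decided by the ready pods alone:

// Illustrative sketch only -- not the HPA controller's implementation.
package main

import (
	"fmt"
	"math"
)

// desiredReplicas returns a replica count given the utilization of each ready
// pod (as a fraction of its request), the target utilization, and the number
// of unready pods and of pods with no metrics.
func desiredReplicas(usage []float64, target float64, unready, missing int) int {
	ready := len(usage)
	current := ready + unready + missing

	sum := 0.0
	for _, u := range usage {
		sum += u
	}
	// The ready pods alone decide the direction of the scale.
	rawRatio := sum / (float64(ready) * target)

	if unready == 0 && missing == 0 {
		return int(math.Ceil(rawRatio * float64(ready)))
	}

	var adjRatio float64
	var counted int
	if rawRatio > 1.0 {
		// Scaling up: unready and metric-less pods count as 0% usage.
		counted = ready + unready + missing
		adjRatio = sum / (float64(counted) * target)
	} else {
		// Scaling down: unready pods are ignored; metric-less pods count as
		// exactly 100% of the target, arguing against removing replicas.
		counted = ready + missing
		adjRatio = (sum + float64(missing)*target) / (float64(counted) * target)
	}

	// The adjustment may only make the scale more conservative; if it would
	// flip the direction, keep the current replica count.
	if (rawRatio > 1.0) != (adjRatio > 1.0) {
		return current
	}
	return int(math.Ceil(adjRatio * float64(counted)))
}

func main() {
	// A mild spike on 3 ready pods with 3 unready pods warming up: treating
	// the unready pods as 0% would flip the direction, so stay at 6 replicas.
	fmt.Println(desiredReplicas([]float64{1.2, 1.2, 1.2}, 1.0, 3, 0)) // 6

	// Low usage on 3 ready pods while 2 pods report no metrics: the downscale
	// is blunted to 4 replicas instead of the 2 the ready pods alone suggest.
	fmt.Println(desiredReplicas([]float64{0.5, 0.5, 0.5}, 1.0, 0, 2)) // 4
}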
/*
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package podautoscaler
import (
"fmt"
"math"
"testing"
"time"

	autoscalingv2 "k8s.io/api/autoscaling/v2beta2"
	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/meta/testrestmapper"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
	core "k8s.io/client-go/testing"
	"k8s.io/kubernetes/pkg/api/legacyscheme"
	"k8s.io/kubernetes/pkg/controller"
	metricsclient "k8s.io/kubernetes/pkg/controller/podautoscaler/metrics"
	cmapi "k8s.io/metrics/pkg/apis/custom_metrics/v1beta2"
	emapi "k8s.io/metrics/pkg/apis/external_metrics/v1beta1"
	metricsapi "k8s.io/metrics/pkg/apis/metrics/v1beta1"
	metricsfake "k8s.io/metrics/pkg/client/clientset/versioned/fake"
	cmfake "k8s.io/metrics/pkg/client/custom_metrics/fake"
	emfake "k8s.io/metrics/pkg/client/external_metrics/fake"

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type resourceInfo struct {
	name     v1.ResourceName
	requests []resource.Quantity
	levels   []int64
	// only applies to pod names returned from "heapster"
	podNames []string

	targetUtilization   int32
	expectedUtilization int32
	expectedValue       int64
}
type metricType int
const (
objectMetric metricType = iota
externalMetric
externalPerPodMetric
podMetric
)
type metricInfo struct {
	name         string
	levels       []int64
	singleObject *autoscalingv2.CrossVersionObjectReference
	selector     *metav1.LabelSelector
	metricType   metricType

	targetUtilization       int64
	perPodTargetUtilization int64
	expectedUtilization     int64
}
type replicaCalcTestCase struct {
	currentReplicas  int32
	expectedReplicas int32
	expectedError    error
	timestamp        time.Time

	resource            *resourceInfo
	metric              *metricInfo
	metricLabelSelector labels.Selector

	podReadiness         []v1.ConditionStatus
	podStartTime         []metav1.Time
	podPhase             []v1.PodPhase
	podDeletionTimestamp []bool
}
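
// Illustrative only, not part of the original test file: a hypothetical
// scale-up case in the shape the fixture above expects. Each pod has two
// containers, each requesting 1.0 CPU; per-container usage of 300m/500m/700m
// gives pod utilizations of 30%, 50%, and 70%, so the average is 50% against
// a 30% target and 3 replicas should become ceil(50/30*3) = 5. A case like
// this would be driven by the test runner presumably defined later in the file.
func exampleScaleUpCase() replicaCalcTestCase {
	return replicaCalcTestCase{
		currentReplicas:  3,
		expectedReplicas: 5,
		resource: &resourceInfo{
			name:     v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels:   []int64{300, 500, 700},

			targetUtilization:   30,
			expectedUtilization: 50,
			expectedValue:       numContainersPerPod * 500,
		},
	}
}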
const (
testNamespace = "test-namespace"
podNamePrefix = "test-pod"
numContainersPerPod = 2
)
func (tc *replicaCalcTestCase) prepareTestClientSet() *fake.Clientset {
	fakeClient := &fake.Clientset{}
	// The list reactor fabricates the pod list on demand from the test case's
	// readiness, start-time, phase, and deletion-timestamp slices.
	fakeClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		obj := &v1.PodList{}
		podsCount := int(tc.currentReplicas)
		// Failed pods are not included in tc.currentReplicas
		if tc.podPhase != nil && len(tc.podPhase) > podsCount {
			podsCount = len(tc.podPhase)
		}
		for i := 0; i < podsCount; i++ {
			podReadiness := v1.ConditionTrue
			if tc.podReadiness != nil && i < len(tc.podReadiness) {
				podReadiness = tc.podReadiness[i]
			}
			var podStartTime metav1.Time
			if tc.podStartTime != nil {
				podStartTime = tc.podStartTime[i]
			}
			podPhase := v1.PodRunning
			if tc.podPhase != nil {
				podPhase = tc.podPhase[i]
			}
			podDeletionTimestamp := false
			if tc.podDeletionTimestamp != nil {
				podDeletionTimestamp = tc.podDeletionTimestamp[i]
			}
			podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
			pod := v1.Pod{
				Status: v1.PodStatus{
					Phase:     podPhase,
					StartTime: &podStartTime,
					Conditions: []v1.PodCondition{
						{
							Type:   v1.PodReady,
							Status: podReadiness,
						},
					},
				},
				ObjectMeta: metav1.ObjectMeta{
					Name:      podName,
					Namespace: testNamespace,
					Labels: map[string]string{
						"name": podNamePrefix,
					},
				},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{}, {}},
				},
			}
			if podDeletionTimestamp {
				pod.DeletionTimestamp = &metav1.Time{Time: time.Now()}
			}

			if tc.resource != nil && i < len(tc.resource.requests) {
				// Both containers in the pod carry the same resource request.
				pod.Spec.Containers[0].Resources = v1.ResourceRequirements{
					Requests: v1.ResourceList{
						tc.resource.name: tc.resource.requests[i],
					},
				}
				pod.Spec.Containers[1].Resources = v1.ResourceRequirements{
					Requests: v1.ResourceList{
						tc.resource.name: tc.resource.requests[i],
					},
				}
			}
			obj.Items = append(obj.Items, pod)
		}
		return true, obj, nil
	})
	return fakeClient
}
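
// Illustrative only, not in the original file: listing pods through the fake
// clientset exercises the reactor above, yielding one fabricated pod per
// replica described by the test case. The context-free List signature of this
// client-go era is assumed here.
func examplePodList(tc *replicaCalcTestCase) (int, error) {
	clientset := tc.prepareTestClientSet()
	pods, err := clientset.CoreV1().Pods(testNamespace).List(metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(pods.Items), nil
}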
func (tc *replicaCalcTestCase) prepareTestMetricsClient() *metricsfake.Clientset {
	fakeMetricsClient := &metricsfake.Clientset{}
	// NB: we have to sound like Gollum due to gengo's inability to handle already-plural resource names
	fakeMetricsClient.AddReactor("list", "pods", func(action core.Action) (handled bool, ret runtime.Object, err error) {
		if tc.resource != nil {
			metrics := &metricsapi.PodMetricsList{}
			for i, resValue := range tc.resource.levels {
				podName := fmt.Sprintf("%s-%d", podNamePrefix, i)
				if len(tc.resource.podNames) > i {
					podName = tc.resource.podNames[i]
				}
				// NB: the list reactor actually does label selector filtering for us,
				// so we have to make sure our results match the label selector
				podMetric := metricsapi.PodMetrics{
					ObjectMeta: metav1.ObjectMeta{
						Name:      podName,
						Namespace: testNamespace,
						Labels:    map[string]string{"name": podNamePrefix},
					},
					Timestamp:  metav1.Time{Time: tc.timestamp},
					Window:     metav1.Duration{Duration: time.Minute},
					Containers: make([]metricsapi.ContainerMetrics, numContainersPerPod),
				}

				for i := 0; i < numContainersPerPod; i++ {
					podMetric.Containers[i] = metricsapi.ContainerMetrics{
						Name: fmt.Sprintf("container%v", i),
						Usage: v1.ResourceList{
							v1.ResourceName(tc.resource.name): *resource.NewMilliQuantity(
								int64(resValue),
								resource.DecimalSI),
						},
					}
				}
				metrics.Items = append(metrics.Items, podMetric)
			}
			return true, metrics, nil
		}

		return true, nil, fmt.Errorf("no pod resource metrics specified in test client")
	})
	return fakeMetricsClient
}
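
// Illustrative only, not in the original file: the "Gollum" plural mentioned
// above surfaces in the generated clientset, so pod metrics are listed via
// PodMetricses. The context-free List signature of this era is assumed here.
func examplePodMetricsList(tc *replicaCalcTestCase) (int, error) {
	metricsClient := tc.prepareTestMetricsClient()
	podMetrics, err := metricsClient.MetricsV1beta1().PodMetricses(testNamespace).List(metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(podMetrics.Items), nil
}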
func ( tc * replicaCalcTestCase ) prepareTestCMClient ( t * testing . T ) * cmfake . FakeCustomMetricsClient {
2017-02-20 06:17:16 +00:00
fakeCMClient := & cmfake . FakeCustomMetricsClient { }
fakeCMClient . AddReactor ( "get" , "*" , func ( action core . Action ) ( handled bool , ret runtime . Object , err error ) {
getForAction , wasGetFor := action . ( cmfake . GetForAction )
if ! wasGetFor {
return true , nil , fmt . Errorf ( "expected a get-for action, got %v instead" , action )
}
if tc . metric == nil {
return true , nil , fmt . Errorf ( "no custom metrics specified in test client" )
}
assert . Equal ( t , tc . metric . name , getForAction . GetMetricName ( ) , "the metric requested should have matched the one specified" )
if getForAction . GetName ( ) == "*" {
metrics := cmapi . MetricValueList { }
// multiple objects
assert . Equal ( t , "pods" , getForAction . GetResource ( ) . Resource , "the type of object that we requested multiple metrics for should have been pods" )
HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and ignores them when scaling down. If, when
scaling up, factoring the unready pods as having 0 CPU would cause a
downscale instead, we simply choose not to scale. Otherwise, we simply
scale up at the reduced amount caculated by factoring the pods in at
zero CPU usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA we make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics go to which pods if we get superfluous
metrics, so we abort the scale.
2016-09-27 18:47:52 +00:00
for i , level := range tc . metric . levels {
2017-02-20 06:17:16 +00:00
podMetric := cmapi . MetricValue {
2017-07-15 05:25:54 +00:00
DescribedObject : v1 . ObjectReference {
2017-02-20 06:17:16 +00:00
Kind : "Pod" ,
Name : fmt . Sprintf ( "%s-%d" , podNamePrefix , i ) ,
Namespace : testNamespace ,
} ,
2018-06-28 18:28:13 +00:00
Timestamp : metav1 . Time { Time : tc . timestamp } ,
Metric : cmapi . MetricIdentifier {
Name : tc . metric . name ,
} ,
Value : * resource . NewMilliQuantity ( level , resource . DecimalSI ) ,
HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage. However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.
This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and ignores them when scaling down. If, when
scaling up, factoring the unready pods as having 0 CPU would cause a
downscale instead, we simply choose not to scale. Otherwise, we simply
scale up at the reduced amount caculated by factoring the pods in at
zero CPU usage.
The effect is that unready pods cause the autoscaler to be a bit more
conservative -- large increases in CPU usage can still cause scales,
even with unready pods in the mix, but will not cause the scale factors
to be as large, in anticipation of the new pods later becoming ready and
handling load.
Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up. As above, this cannot change the
direction of the scale.
This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA we make scaling
decisions. Currently, this only works for CPU. For custom metrics, we
cannot identify which metrics go to which pods if we get superfluous
metrics, so we abort the scale.
2016-09-27 18:47:52 +00:00
}
2017-02-20 06:17:16 +00:00
metrics . Items = append ( metrics . Items , podMetric )
}
return true, &metrics, nil
}
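// Single-object branch: verify the get targets the object named in
// tc.metric.singleObject and return one metric value for it.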
name := getForAction.GetName()
mapper := testrestmapper.TestOnlyStaticRESTMapper(legacyscheme.Scheme)
metrics := &cmapi.MetricValueList{}
assert.NotNil(t, tc.metric.singleObject, "should have only requested a single-object metric when calling GetObjectMetricReplicas")
gk := schema.FromAPIVersionAndKind(tc.metric.singleObject.APIVersion, tc.metric.singleObject.Kind).GroupKind()
mapping, err := mapper.RESTMapping(gk)
if err != nil {
return true, nil, fmt.Errorf("unable to get mapping for %s: %v", gk.String(), err)
}
groupResource := mapping.Resource.GroupResource()

assert.Equal(t, groupResource.String(), getForAction.GetResource().Resource, "should have requested metrics for the resource matching the GroupKind passed in")
assert.Equal(t, tc.metric.singleObject.Name, name, "should have requested metrics for the object matching the name passed in")

metrics.Items = []cmapi.MetricValue{
{
DescribedObject: v1.ObjectReference{
Kind: tc.metric.singleObject.Kind,
APIVersion: tc.metric.singleObject.APIVersion,
Name: name,
},
Timestamp: metav1.Time{Time: tc.timestamp},
Metric: cmapi.MetricIdentifier{
Name: tc.metric.name,
},
Value: *resource.NewMilliQuantity(int64(tc.metric.levels[0]), resource.DecimalSI),
},
}

return true, metrics, nil
})
return fakeCMClient
}
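// prepareTestEMClient builds a fake external metrics client whose list reactor
// checks the requested metric name and label selector against the test case
// and returns the configured metric levels.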
func (tc *replicaCalcTestCase) prepareTestEMClient(t *testing.T) *emfake.FakeExternalMetricsClient {
fakeEMClient := &emfake.FakeExternalMetricsClient{}
fakeEMClient.AddReactor("list", "*", func(action core.Action) (handled bool, ret runtime.Object, err error) {
listAction, wasList := action.(core.ListAction)
if !wasList {
return true, nil, fmt.Errorf("expected a list-for action, got %v instead", action)
}
if tc.metric == nil {
return true, nil, fmt.Errorf("no external metrics specified in test client")
}
assert.Equal(t, tc.metric.name, listAction.GetResource().Resource, "the metric requested should have matched the one specified")
selector, err := metav1.LabelSelectorAsSelector(tc.metric.selector)
if err != nil {
return true, nil, fmt.Errorf("failed to convert label selector specified in test client")
}
assert.Equal(t, selector, listAction.GetListRestrictions().Labels, "the metric selector should have matched the one specified")
metrics := emapi.ExternalMetricValueList{}
for _, level := range tc.metric.levels {
metric := emapi.ExternalMetricValue{
Timestamp: metav1.Time{Time: tc.timestamp},
MetricName: tc.metric.name,
Value: *resource.NewMilliQuantity(level, resource.DecimalSI),
}
metrics.Items = append(metrics.Items, metric)
}
return true, &metrics, nil
})
return fakeEMClient
}
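// prepareTestClient wires up the four fake clients (core, resource metrics,
// custom metrics, external metrics) used by a test case.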
func (tc *replicaCalcTestCase) prepareTestClient(t *testing.T) (*fake.Clientset, *metricsfake.Clientset, *cmfake.FakeCustomMetricsClient, *emfake.FakeExternalMetricsClient) {
fakeClient := tc.prepareTestClientSet()
fakeMetricsClient := tc.prepareTestMetricsClient()
fakeCMClient := tc.prepareTestCMClient(t)
fakeEMClient := tc.prepareTestEMClient(t)
return fakeClient, fakeMetricsClient, fakeCMClient, fakeEMClient
}
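// runTest builds a ReplicaCalculator backed by the fake clients and a synced
// pod informer, runs the calculation matching the test case (resource, object,
// external, external-per-pod, or pod metric), and checks the expected outcome.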
func (tc *replicaCalcTestCase) runTest(t *testing.T) {
testClient, testMetricsClient, testCMClient, testEMClient := tc.prepareTestClient(t)
metricsClient := metricsclient.NewRESTMetricsClient(testMetricsClient.MetricsV1beta1(), testCMClient, testEMClient)
informerFactory := informers.NewSharedInformerFactory(testClient, controller.NoResyncPeriodFunc())
informer := informerFactory.Core().V1().Pods()
replicaCalc := NewReplicaCalculator(metricsClient, informer.Lister(), defaultTestingTolerance, defaultTestingCpuInitializationPeriod, defaultTestingDelayOfInitialReadinessStatus)
stop := make(chan struct{})
defer close(stop)
informerFactory.Start(stop)
if !controller.WaitForCacheSync("HPA", stop, informer.Informer().HasSynced) {
return
}
selector, err := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
MatchLabels: map[string]string{"name": podNamePrefix},
})
if err != nil {
require.Nil(t, err, "something went horribly wrong...")
}
if tc.resource != nil {
outReplicas, outUtilization, outRawValue, outTimestamp, err := replicaCalc.GetResourceReplicas(tc.currentReplicas, tc.resource.targetUtilization, tc.resource.name, testNamespace, selector)
if tc.expectedError != nil {
require.Error(t, err, "there should be an error calculating the replica count")
assert.Contains(t, err.Error(), tc.expectedError.Error(), "the error message should have contained the expected error message")
return
}
require.NoError(t, err, "there should not have been an error calculating the replica count")
assert.Equal(t, tc.expectedReplicas, outReplicas, "replicas should be as expected")
assert.Equal(t, tc.resource.expectedUtilization, outUtilization, "utilization should be as expected")
assert.Equal(t, tc.resource.expectedValue, outRawValue, "raw value should be as expected")
assert.True(t, tc.timestamp.Equal(outTimestamp), "timestamp should be as expected")
return
}
var outReplicas int32
var outUtilization int64
var outTimestamp time.Time
switch tc.metric.metricType {
case objectMetric:
if tc.metric.singleObject == nil {
t.Fatal("Metric specified as objectMetric but metric.singleObject is nil.")
}
outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetObjectMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, tc.metric.singleObject, selector, nil)
case externalMetric:
if tc.metric.selector == nil {
t.Fatal("Metric specified as externalMetric but metric.selector is nil.")
}
if tc.metric.targetUtilization <= 0 {
t.Fatalf("Metric specified as externalMetric but metric.targetUtilization is %d which is <=0.", tc.metric.targetUtilization)
}
outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetExternalMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, tc.metric.selector, selector)
case externalPerPodMetric:
if tc.metric.selector == nil {
t.Fatal("Metric specified as externalPerPodMetric but metric.selector is nil.")
}
if tc.metric.perPodTargetUtilization <= 0 {
t.Fatalf("Metric specified as externalPerPodMetric but metric.perPodTargetUtilization is %d which is <=0.", tc.metric.perPodTargetUtilization)
}
outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetExternalPerPodMetricReplicas(tc.currentReplicas, tc.metric.perPodTargetUtilization, tc.metric.name, testNamespace, tc.metric.selector)
case podMetric:
outReplicas, outUtilization, outTimestamp, err = replicaCalc.GetMetricReplicas(tc.currentReplicas, tc.metric.targetUtilization, tc.metric.name, testNamespace, selector, nil)
default:
t.Fatalf("Unknown metric type: %d", tc.metric.metricType)
}
if tc.expectedError != nil {
require.Error(t, err, "there should be an error calculating the replica count")
assert.Contains(t, err.Error(), tc.expectedError.Error(), "the error message should have contained the expected error message")
return
}
require.NoError(t, err, "there should not have been an error calculating the replica count")
assert.Equal(t, tc.expectedReplicas, outReplicas, "replicas should be as expected")
assert.Equal(t, tc.metric.expectedUtilization, outUtilization, "utilization should be as expected")
assert.True(t, tc.timestamp.Equal(outTimestamp), "timestamp should be as expected")
}
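// The metrics client only returns data for a pod name that does not match any
// known pod, so the calculator has nothing usable and should return an error.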
func TestReplicaCalcDisjointResourcesMetrics(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 1,
expectedError: fmt.Errorf("no metrics returned matched known pods"),
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0")},
levels: []int64{100},
podNames: []string{"an-older-pod-name"},
targetUtilization: 100,
},
}
tc.runTest(t)
}
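// Plain CPU scale-up: three pods each requesting 1 CPU use 300m/500m/700m,
// i.e. 50% average utilization against a 30% target, so we expect
// ceil(3 * 50/30) = 5 replicas.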
func TestReplicaCalcScaleUp(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 5,
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{300, 500, 700},
targetUtilization: 30,
expectedUtilization: 50,
expectedValue: numContainersPerPod * 500,
},
}
tc.runTest(t)
}
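// Same load, but the first pod is unready. Counting it as using zero CPU on
// the way up lowers the effective utilization, so the scale-up is smaller
// (4 instead of 5 replicas) while the ready pods report 60% utilization.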
func TestReplicaCalcScaleUpUnreadyLessScale(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 4,
podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{300, 500, 700},
targetUtilization: 30,
expectedUtilization: 60,
expectedValue: numContainersPerPod * 600,
},
}
tc.runTest(t)
}
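// Same idea, but the first pod is discounted because it started recently and
// is still in its CPU initialization window rather than because it is unready;
// the scale-up is again reduced to 4 replicas.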
func TestReplicaCalcScaleUpHotCpuLessScale(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 4,
podStartTime: []metav1.Time{hotCpuCreationTime(), coolCpuCreationTime(), coolCpuCreationTime()},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{300, 500, 700},
targetUtilization: 30,
expectedUtilization: 60,
expectedValue: numContainersPerPod * 600,
},
}
tc.runTest(t)
}
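// Only the first pod is ready, at 40% utilization against a 30% target.
// Treating the two unready pods as using zero CPU would suggest a scale-down
// instead, so the calculator stays at 3 replicas rather than scaling up.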
func TestReplicaCalcScaleUpUnreadyNoScale(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 3,
podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{400, 500, 700},
targetUtilization: 30,
expectedUtilization: 40,
expectedValue: numContainersPerPod * 400,
},
}
tc.runTest(t)
}
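// As above, but the unready pods are also within their CPU initialization
// window; the outcome is the same -- no scale-up from 3 replicas.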
func TestReplicaCalcScaleHotCpuNoScale(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 3,
podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
podStartTime: []metav1.Time{coolCpuCreationTime(), hotCpuCreationTime(), hotCpuCreationTime()},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{400, 500, 700},
targetUtilization: 30,
expectedUtilization: 40,
expectedValue: numContainersPerPod * 400,
},
}
tc.runTest(t)
}
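// Failed pods are dropped from the calculation entirely: only the two running
// pods (500m and 700m against 1 CPU requests, 60% vs. a 30% target) count,
// so 2 replicas scale to 4.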
func TestReplicaCalcScaleUpIgnoresFailedPods(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 2,
expectedReplicas: 4,
podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
podPhase: []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodFailed, v1.PodFailed},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{500, 700},
targetUtilization: 30,
expectedUtilization: 60,
expectedValue: numContainersPerPod * 600,
},
}
tc.runTest(t)
}
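// Pods carrying a deletion timestamp are treated like failed pods: they are
// excluded from the calculation, so the result matches the test above.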
func TestReplicaCalcScaleUpIgnoresDeletionPods(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 2,
expectedReplicas: 4,
podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
podPhase: []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning},
podDeletionTimestamp: []bool{false, false, true, true},
resource: &resourceInfo{
name: v1.ResourceCPU,
requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
levels: []int64{500, 700},
targetUtilization: 30,
expectedUtilization: 60,
expectedValue: numContainersPerPod * 600,
},
}
tc.runTest(t)
}
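// Custom pod metric ("qps") scale-up: the average level is 20000 against a
// 15000 target, so we expect ceil(3 * 20000/15000) = 4 replicas.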
func TestReplicaCalcScaleUpCM(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 4,
metric: &metricInfo{
name: "qps",
levels: []int64{20000, 10000, 30000},
targetUtilization: 15000,
expectedUtilization: 20000,
metricType: podMetric,
},
}
tc.runTest(t)
}
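// Unlike the CPU cases, the custom-metric path does not discount the unready
// or freshly started pod here: all three levels count (average 30000 vs. a
// 15000 target), giving the full scale-up to 6 replicas.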
func TestReplicaCalcScaleUpCMUnreadyHotCpuNoLessScale(t *testing.T) {
tc := replicaCalcTestCase{
currentReplicas: 3,
expectedReplicas: 6,
podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse},
podStartTime: []metav1.Time{coolCpuCreationTime(), coolCpuCreationTime(), hotCpuCreationTime()},
metric: &metricInfo{
name: "qps",
levels: []int64{50000, 10000, 30000},
targetUtilization: 15000,
expectedUtilization: 30000,
metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleUpCMUnreadyHotCpuScaleWouldScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 7,
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionFalse},
		podStartTime: []metav1.Time{hotCpuCreationTime(), coolCpuCreationTime(), hotCpuCreationTime()},
		metric: &metricInfo{
			name: "qps",
			levels: []int64{50000, 15000, 30000},
			targetUtilization: 15000,
			expectedUtilization: 31666,
			metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleUpCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 4,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{20000},
			targetUtilization: 15000,
			expectedUtilization: 20000,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
Kind : "Deployment" ,
APIVersion : "extensions/v1beta1" ,
Name : "some-deployment" ,
} ,
} ,
}
tc . runTest ( t )
}

// TestReplicaCalcScaleUpCMObjectIgnoresUnreadyPods exercises the change that
// made GetObjectMetricReplicas ignore unready pods. Previously the usage
// ratio was multiplied by the current replica count, which over-scaled when
// pods stayed unready for a long time: with pods A, B, and C, only A ready,
// and a usage ratio of 500%, the calculator would ask for 15 replicas even
// though only one pod was handling the load. The ratio is now multiplied by
// the number of ready pods, so the same example yields 5 replicas, matching
// the behavior of the other replica calculator methods; only
// GetExternalMetricReplicas and GetExternalPerPodMetricReplicas still let
// unready pods influence the desired replica count.
func TestReplicaCalcScaleUpCMObjectIgnoresUnreadyPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 5, // If we did not ignore unready pods, we'd expect 15 replicas.
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionFalse},
		metric: &metricInfo{
			name: "qps",
			levels: []int64{50000},
			targetUtilization: 10000,
			expectedUtilization: 50000,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind: "Deployment",
				APIVersion: "extensions/v1beta1",
				Name: "some-deployment",
			},
		},
	}
	tc.runTest(t)
}
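
// A minimal sketch, not the controller's actual code, of the ready-pod
// calculation described in the comment above: the usage ratio is the
// observed object metric divided by its target, multiplied by the number of
// ready pods rather than by the current replica count. For the test above,
// 50000/10000 = 5.0 with one ready pod gives 5 replicas, whereas multiplying
// by the 3 current replicas would have given 15. The helper name and
// signature are illustrative only.
func objectMetricReplicasSketch(usage, target int64, readyPods int32) int32 {
	usageRatio := float64(usage) / float64(target)
	return int32(math.Ceil(usageRatio * float64(readyPods)))
}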

func TestReplicaCalcScaleUpCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 1,
		expectedReplicas: 2,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			targetUtilization: 4400,
			expectedUtilization: 8600,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleUpCMExternalIgnoresUnreadyPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 2, // Would expect 6 if we didn't ignore unready pods
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionFalse},
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			targetUtilization: 4400,
			expectedUtilization: 8600,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleUpCMExternalNoLabels(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 1,
		expectedReplicas: 2,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			targetUtilization: 4400,
			expectedUtilization: 8600,
			metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleUpPerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 4,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			perPodTargetUtilization: 2150,
			expectedUtilization: 2867,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalPerPodMetric,
		},
	}
	tc.runTest(t)
}
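
// A minimal sketch inferred from the per-pod external-metric cases in this
// file, not taken from the replica calculator itself: the desired count is
// the total external metric divided by the per-pod target, rounded up, and
// the reported per-pod utilization spreads the total over the current
// replicas. For the test above, ceil(8600/2150) = 4 replicas and
// ceil(8600/3) = 2867. Names and signature are illustrative only.
func perPodExternalReplicasSketch(totalMetric, perPodTarget int64, currentReplicas int32) (int32, int64) {
	replicas := int32(math.Ceil(float64(totalMetric) / float64(perPodTarget)))
	perPodUtilization := int64(math.Ceil(float64(totalMetric) / float64(currentReplicas)))
	return replicas, perPodUtilization
}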

func TestReplicaCalcScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 300, 500, 250, 250},
			targetUtilization: 50,
			expectedUtilization: 28,
			expectedValue: numContainersPerPod * 280,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownCM(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{12000, 12000, 12000, 12000, 12000},
			targetUtilization: 20000,
			expectedUtilization: 12000,
			metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{12000},
			targetUtilization: 20000,
			expectedUtilization: 12000,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind: "Deployment",
				APIVersion: "extensions/v1beta1",
				Name: "some-deployment",
			},
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			targetUtilization: 14334,
			expectedUtilization: 8600,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownPerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			perPodTargetUtilization: 2867,
			expectedUtilization: 1720,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalPerPodMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownIncludeUnreadyPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 2,
		podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 300, 500, 250, 250},
			targetUtilization: 50,
			expectedUtilization: 30,
			expectedValue: numContainersPerPod * 300,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownIgnoreHotCpuPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 2,
		podStartTime: []metav1.Time{coolCpuCreationTime(), coolCpuCreationTime(), coolCpuCreationTime(), hotCpuCreationTime(), hotCpuCreationTime()},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 300, 500, 250, 250},
			targetUtilization: 50,
			expectedUtilization: 30,
			expectedValue: numContainersPerPod * 300,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownIgnoresFailedPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		podPhase: []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodFailed, v1.PodFailed},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 300, 500, 250, 250},
			targetUtilization: 50,
			expectedUtilization: 28,
			expectedValue: numContainersPerPod * 280,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcScaleDownIgnoresDeletionPods(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 5,
		expectedReplicas: 3,
		podReadiness: []v1.ConditionStatus{v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionFalse, v1.ConditionFalse},
		podPhase: []v1.PodPhase{v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning, v1.PodRunning},
		podDeletionTimestamp: []bool{false, false, false, false, false, true, true},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 300, 500, 250, 250},
			targetUtilization: 50,
			expectedUtilization: 28,
			expectedValue: numContainersPerPod * 280,
		},
	}
	tc.runTest(t)
}
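
// A minimal sketch of the filtering the two tests above rely on, assuming
// (as their expectations suggest) that failed pods and pods already marked
// for deletion are dropped before any utilization math; it is illustrative
// and not the calculator's actual helper.
func isCountedPodSketch(pod *v1.Pod) bool {
	return pod.Status.Phase != v1.PodFailed && pod.DeletionTimestamp == nil
}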

func TestReplicaCalcTolerance(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("0.9"), resource.MustParse("1.0"), resource.MustParse("1.1")},
			levels: []int64{1010, 1030, 1020},
			targetUtilization: 100,
			expectedUtilization: 102,
			expectedValue: numContainersPerPod * 1020,
		},
	}
	tc.runTest(t)
}
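
// A minimal sketch of the tolerance check the tolerance tests exercise,
// assuming the band is defined by defaultTestingTolerance as used later in
// this file: a usage ratio whose distance from 1.0 stays inside the
// tolerance leaves the replica count unchanged, which is why 102% usage
// against a 100% target above does not trigger a resize. Illustrative only.
func withinToleranceSketch(usageRatio, tolerance float64) bool {
	return math.Abs(usageRatio-1.0) <= tolerance
}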

func TestReplicaCalcToleranceCM(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{20000, 21000, 21000},
			targetUtilization: 20000,
			expectedUtilization: 20666,
			metricType: podMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcToleranceCMObject(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{20666},
			targetUtilization: 20000,
			expectedUtilization: 20666,
			singleObject: &autoscalingv2.CrossVersionObjectReference{
				Kind: "Deployment",
				APIVersion: "extensions/v1beta1",
				Name: "some-deployment",
			},
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcToleranceCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			targetUtilization: 8888,
			expectedUtilization: 8600,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcTolerancePerPodCMExternal(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		metric: &metricInfo{
			name: "qps",
			levels: []int64{8600},
			perPodTargetUtilization: 2900,
			expectedUtilization: 2867,
			selector: &metav1.LabelSelector{MatchLabels: map[string]string{"label": "value"}},
			metricType: externalPerPodMetric,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcSuperfluousMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 4,
		expectedReplicas: 24,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{4000, 9500, 3000, 7000, 3200, 2000},
			targetUtilization: 100,
			expectedUtilization: 587,
			expectedValue: numContainersPerPod * 5875,
		},
	}
	tc.runTest(t)
}
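
// A minimal sketch of the superfluous-metrics handling the test above
// exercises, under the assumption that metrics reported for pods the
// calculator does not know about are simply dropped before averaging; the
// helper name, map shape, and use of sets.String are illustrative only.
func dropSuperfluousMetricsSketch(metrics map[string]int64, knownPods sets.String) map[string]int64 {
	filtered := make(map[string]int64, len(metrics))
	for podName, value := range metrics {
		if knownPods.Has(podName) {
			filtered[podName] = value
		}
	}
	return filtered
}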

func TestReplicaCalcMissingMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 4,
		expectedReplicas: 3,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{400, 95},
			targetUtilization: 100,
			expectedUtilization: 24,
			expectedValue: 495, // numContainersPerPod * 247, for sufficiently large values of 247
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcEmptyMetrics(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 4,
		expectedError: fmt.Errorf("unable to get metrics for resource cpu: no metrics returned from resource metrics API"),
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{},
			targetUtilization: 100,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcEmptyCPURequest(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 1,
		expectedError: fmt.Errorf("missing request for"),
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{},
			levels: []int64{200},
			targetUtilization: 100,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsNoChangeEq(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{1000},
			targetUtilization: 100,
			expectedUtilization: 100,
			expectedValue: numContainersPerPod * 1000,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsNoChangeGt(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{1900},
			targetUtilization: 100,
			expectedUtilization: 190,
			expectedValue: numContainersPerPod * 1900,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsNoChangeLt(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 2,
		expectedReplicas: 2,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{600},
			targetUtilization: 100,
			expectedUtilization: 60,
			expectedValue: numContainersPerPod * 600,
		},
	}
	tc.runTest(t)
}
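
// A minimal sketch of the missing-metrics rebalancing that the three
// NoChange tests above encode, written from their expectations rather than
// from the calculator's source: a first usage ratio comes from the pods that
// reported metrics; pods without metrics are then filled in at zero usage if
// that ratio points up and at their full request if it points down, and when
// the refilled ratio lands back inside the tolerance band or flips direction,
// the current replica count is kept. Names and signature are illustrative.
func rebalancedReplicasSketch(current int32, usageRatio, refilledRatio, tolerance float64, podsCounted int) int32 {
	withinTolerance := math.Abs(1.0-refilledRatio) <= tolerance
	flippedDirection := (usageRatio < 1.0 && refilledRatio > 1.0) || (usageRatio > 1.0 && refilledRatio < 1.0)
	if withinTolerance || flippedDirection {
		return current
	}
	return int32(math.Ceil(refilledRatio * float64(podsCounted)))
}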

func TestReplicaCalcMissingMetricsUnreadyChange(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 450},
			targetUtilization: 50,
			expectedUtilization: 45,
			expectedValue: numContainersPerPod * 450,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsHotCpuNoChange(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 3,
		podStartTime: []metav1.Time{hotCpuCreationTime(), coolCpuCreationTime(), coolCpuCreationTime()},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 450},
			targetUtilization: 50,
			expectedUtilization: 45,
			expectedValue: numContainersPerPod * 450,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsUnreadyScaleUp(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 4,
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 2000},
			targetUtilization: 50,
			expectedUtilization: 200,
			expectedValue: numContainersPerPod * 2000,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsHotCpuScaleUp(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 3,
		expectedReplicas: 4,
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue},
		podStartTime: []metav1.Time{hotCpuCreationTime(), coolCpuCreationTime(), coolCpuCreationTime()},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 2000},
			targetUtilization: 50,
			expectedUtilization: 200,
			expectedValue: numContainersPerPod * 2000,
		},
	}
	tc.runTest(t)
}

func TestReplicaCalcMissingMetricsUnreadyScaleDown(t *testing.T) {
	tc := replicaCalcTestCase{
		currentReplicas: 4,
		expectedReplicas: 3,
		podReadiness: []v1.ConditionStatus{v1.ConditionFalse, v1.ConditionTrue, v1.ConditionTrue, v1.ConditionTrue},
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			requests: []resource.Quantity{resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0"), resource.MustParse("1.0")},
			levels: []int64{100, 100, 100},
			targetUtilization: 50,
			expectedUtilization: 10,
			expectedValue: numContainersPerPod * 100,
		},
	}
	tc.runTest(t)
}
// TestReplicaCalcComputedToleranceAlgImplementation is a regression test which
// back-calculates a minimal percentage for downscaling based on a small percentage
// increase in pod utilization which is calibrated against the tolerance value.
func TestReplicaCalcComputedToleranceAlgImplementation(t *testing.T) {

	startPods := int32(10)
	// 150 mCPU per pod.
	totalUsedCPUOfAllPods := int64(startPods * 150)
	// Each pod starts out asking for 2X what is really needed.
	// This means we will have a 50% ratio of used/requested.
	totalRequestedCPUOfAllPods := int32(2 * totalUsedCPUOfAllPods)
	requestedToUsed := float64(totalRequestedCPUOfAllPods / int32(totalUsedCPUOfAllPods))
	// Spread the amount we ask over 10 pods. We can add some jitter later in reportedLevels.
	perPodRequested := totalRequestedCPUOfAllPods / startPods

	// Force a minimal scaling event by satisfying (tolerance < 1 - resourcesUsedRatio).
	target := math.Abs(1/(requestedToUsed*(1-defaultTestingTolerance))) + .01
	finalCPUPercentTarget := int32(target * 100)
	resourcesUsedRatio := float64(totalUsedCPUOfAllPods) / float64(float64(totalRequestedCPUOfAllPods)*target)

	// resourcesUsedRatio * startPods, rounded up, gives the scaled-down expectation.
	finalPods := int32(math.Ceil(resourcesUsedRatio * float64(startPods)))

	// To breach the tolerance we create a utilization ratio that differs from 1.0
	// by more than the tolerance value.
	tc := replicaCalcTestCase{
		currentReplicas:  startPods,
		expectedReplicas: finalPods,
		resource: &resourceInfo{
			name: v1.ResourceCPU,
			levels: []int64{
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
				totalUsedCPUOfAllPods / 10,
			},
			requests: []resource.Quantity{
				resource.MustParse(fmt.Sprint(perPodRequested+100) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-100) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+10) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-10) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+2) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-2) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested+1) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested-1) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested) + "m"),
				resource.MustParse(fmt.Sprint(perPodRequested) + "m"),
			},
			targetUtilization: finalCPUPercentTarget,
			expectedUtilization: int32(totalUsedCPUOfAllPods*100) / totalRequestedCPUOfAllPods,
			expectedValue:       numContainersPerPod * totalUsedCPUOfAllPods / 10,
		},
	}

	tc.runTest(t)

	// Reuse the data structure above; this time test that no scaling happens
	// when we are within a very close margin of the tolerance.
	target = math.Abs(1/(requestedToUsed*(1-defaultTestingTolerance))) + .004
	finalCPUPercentTarget = int32(target * 100)
	tc.resource.targetUtilization = finalCPUPercentTarget
	tc.currentReplicas = startPods
	tc.expectedReplicas = startPods
	tc.runTest(t)
}
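
// A back-of-the-envelope check of the numbers above, assuming
// defaultTestingTolerance is 0.1: requestedToUsed is 2, so the first target is
// 1/(2*0.9) + 0.01 ≈ 0.57, giving resourcesUsedRatio ≈ 1500/(3000*0.57) ≈ 0.88
// and a scale-down from 10 to ceil(0.88*10) = 9 pods. The second run raises
// the target by only 0.004, which keeps the usage ratio within the tolerance,
// so the replica count stays at startPods.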
func TestGroupPods(t *testing.T) {
	tests := []struct {
		name                string
		pods                []*v1.Pod
		metrics             metricsclient.PodMetricsInfo
		resource            v1.ResourceName
		expectReadyPodCount int
		expectIgnoredPods   sets.String
		expectMissingPods   sets.String
	}{
		{
			"void",
			[]*v1.Pod{},
			metricsclient.PodMetricsInfo{},
			v1.ResourceCPU,
			0,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"count in a ready pod - memory",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "bentham",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"bentham": metricsclient.PodMetric{Value: 1, Timestamp: time.Now(), Window: time.Minute},
			},
			v1.ResourceMemory,
			1,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"ignore a pod without ready condition - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "lucretius",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now(),
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"lucretius": metricsclient.PodMetric{Value: 1},
			},
			v1.ResourceCPU,
			0,
			sets.NewString("lucretius"),
			sets.NewString(),
		},
		{
			"count in a ready pod with fresh metrics during initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "bentham",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-1 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-30 * time.Second)},
								Status:             v1.ConditionTrue,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"bentham": metricsclient.PodMetric{Value: 1, Timestamp: time.Now(), Window: 30 * time.Second},
			},
			v1.ResourceCPU,
			1,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"ignore a ready pod without fresh metrics during initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "bentham",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-1 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-30 * time.Second)},
								Status:             v1.ConditionTrue,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"bentham": metricsclient.PodMetric{Value: 1, Timestamp: time.Now(), Window: 60 * time.Second},
			},
			v1.ResourceCPU,
			0,
			sets.NewString("bentham"),
			sets.NewString(),
		},
		{
			"ignore an unready pod during initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "lucretius",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-10 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-9*time.Minute - 54*time.Second)},
								Status:             v1.ConditionFalse,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"lucretius": metricsclient.PodMetric{Value: 1},
			},
			v1.ResourceCPU,
			0,
			sets.NewString("lucretius"),
			sets.NewString(),
		},
		{
			"count in a ready pod without fresh metrics after initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "bentham",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-3 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-3 * time.Minute)},
								Status:             v1.ConditionTrue,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"bentham": metricsclient.PodMetric{Value: 1, Timestamp: time.Now().Add(-2 * time.Minute), Window: time.Minute},
			},
			v1.ResourceCPU,
			1,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"count in an unready pod that was ready after initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "lucretius",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-10 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-9 * time.Minute)},
								Status:             v1.ConditionFalse,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"lucretius": metricsclient.PodMetric{Value: 1},
			},
			v1.ResourceCPU,
			1,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"ignore pod that has never been ready after initialization period - CPU",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "lucretius",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-10 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-9*time.Minute - 50*time.Second)},
								Status:             v1.ConditionFalse,
							},
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"lucretius": metricsclient.PodMetric{Value: 1},
			},
			v1.ResourceCPU,
			1,
			sets.NewString(),
			sets.NewString(),
		},
		{
			"a missing pod",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "epicurus",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-3 * time.Minute),
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{},
			v1.ResourceCPU,
			0,
			sets.NewString(),
			sets.NewString("epicurus"),
		},
		{
			"several pods",
			[]*v1.Pod{
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "lucretius",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now(),
						},
					},
				},
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "niccolo",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-3 * time.Minute),
						},
						Conditions: []v1.PodCondition{
							{
								Type:               v1.PodReady,
								LastTransitionTime: metav1.Time{Time: time.Now().Add(-3 * time.Minute)},
								Status:             v1.ConditionTrue,
							},
						},
					},
				},
				{
					ObjectMeta: metav1.ObjectMeta{
						Name: "epicurus",
					},
					Status: v1.PodStatus{
						Phase: v1.PodSucceeded,
						StartTime: &metav1.Time{
							Time: time.Now().Add(-3 * time.Minute),
						},
					},
				},
			},
			metricsclient.PodMetricsInfo{
				"lucretius": metricsclient.PodMetric{Value: 1},
				"niccolo":   metricsclient.PodMetric{Value: 1},
			},
			v1.ResourceCPU,
			1,
			sets.NewString("lucretius"),
			sets.NewString("epicurus"),
		},
	}
	for _, tc := range tests {
		readyPodCount, ignoredPods, missingPods := groupPods(tc.pods, tc.metrics, tc.resource, defaultTestingCpuInitializationPeriod, defaultTestingDelayOfInitialReadinessStatus)
		if readyPodCount != tc.expectReadyPodCount {
			t.Errorf("%s got readyPodCount %d, expected %d", tc.name, readyPodCount, tc.expectReadyPodCount)
		}
		if !ignoredPods.Equal(tc.expectIgnoredPods) {
			t.Errorf("%s got ignoredPods %v, expected %v", tc.name, ignoredPods, tc.expectIgnoredPods)
		}
		if !missingPods.Equal(tc.expectMissingPods) {
			t.Errorf("%s got missingPods %v, expected %v", tc.name, missingPods, tc.expectMissingPods)
		}
	}
}
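
// The expectations above sketch the contract of groupPods as exercised here:
// readyPodCount counts pods whose sample can be used directly, ignoredPods
// collects pods whose CPU sample is excluded (for example a pod without a
// ready condition, or one that is unready or lacks a fresh sample while still
// inside its initialization window), and missingPods collects pods that have
// no metric at all. The memory case suggests the readiness checks are specific
// to CPU.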
// TODO: add more tests