Commit Graph

26 Commits (98b1a08d7e13f822f98c02ce8bf238f8a110d091)

Author SHA1 Message Date
Jeff Grafton a7f49c906d Use buildozer to delete licenses() rules except under third_party/ 2017-08-11 09:32:39 -07:00
Jeff Grafton 33276f06be Use buildozer to remove deprecated automanaged tags 2017-08-11 09:31:50 -07:00
Jacob Simpson 2c70e5df35 Manual changes. 2017-07-17 15:05:37 -07:00
Jacob Simpson 29c1b81d4c Scripted migration from clientset_generated to client-go. 2017-07-17 15:05:37 -07:00
Chao Xu 60604f8818 run hack/update-all 2017-06-22 11:31:03 -07:00
Chao Xu 36e2d0b4cb hack/update-bazel.sh, hack/update-godep-license 2017-05-15 13:51:39 -07:00
Chao Xu 14045d253d hack/update-bazel.sh 2017-05-11 15:59:04 -07:00
Chao Xu 958903509c bazel 2017-04-27 09:41:53 -07:00
Mike Danese a05c3c0efd autogenerated 2017-04-14 10:40:57 -07:00
Solly Ross d6fe1e8764 HPA Controller: Use Custom Metrics API
This commit switches over the HPA controller to use the custom metrics
API.  It also converts the HPA controller to use the generated client
in k8s.io/metrics for the resource metrics API.

In order to enable support, you must enable
`--horizontal-pod-autoscaler-use-rest-clients` on the
controller-manager, which will switch the HPA controller's MetricsClient
implementation over to use the standard rest clients for both custom
metrics and resource metrics.  This requires that at least the resource
metrics API is registered with kube-aggregator, and that the controller
manager is pointed at kube-aggregator.  For this to work, Heapster
must be serving the new-style API server (`--api-server=true`).
2017-03-01 10:21:50 -05:00
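
The flag-driven switch described above is essentially an interface seam inside the controller-manager. The Go sketch below is purely illustrative -- `MetricsClient`, `heapsterClient`, and `restMetricsClient` are made-up names standing in for the controller's real types; only the `--horizontal-pod-autoscaler-use-rest-clients` flag name comes from the commit message.

```go
package main

import (
	"flag"
	"fmt"
)

// MetricsClient abstracts how the HPA controller fetches metrics
// (hypothetical interface; the real seam in the controller differs).
type MetricsClient interface {
	GetResourceMetric(resource, namespace, selector string) (float64, error)
}

// heapsterClient stands in for the legacy Heapster-backed path.
type heapsterClient struct{}

func (heapsterClient) GetResourceMetric(resource, namespace, selector string) (float64, error) {
	return 0, fmt.Errorf("heapster path: not implemented in this sketch")
}

// restMetricsClient stands in for the REST-based resource/custom metrics
// path that the flag enables, served through kube-aggregator.
type restMetricsClient struct{}

func (restMetricsClient) GetResourceMetric(resource, namespace, selector string) (float64, error) {
	return 0, fmt.Errorf("rest path: not implemented in this sketch")
}

func main() {
	useREST := flag.Bool("horizontal-pod-autoscaler-use-rest-clients", false,
		"use standard REST clients for resource and custom metrics")
	flag.Parse()

	// One flag selects which implementation sits behind the same interface.
	var client MetricsClient
	if *useREST {
		client = restMetricsClient{}
	} else {
		client = heapsterClient{}
	}
	_, err := client.GetResourceMetric("cpu", "default", "app=web")
	fmt.Println(err)
}
```

The point is only the shape of the switch: the calling code in the controller is unchanged, while the flag decides whether metrics come from Heapster or from the aggregated metrics APIs.
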
Solly Ross 7846827fc0 Convert HPA controller to use autoscaling/v2alpha1
This commit converts the HPA controller over to using the new version of
the HorizontalPodAutoscaler object found in autoscaling/v2alpha1.  Note
that while the autoscaler will accept requests for object metrics, the
scale client will return an error on attempts to get object metrics
(since that requires the new custom metrics API, which is not yet
implemented).

This also enables the HPA object in v2alpha1 as a retrievable API
version by default.
2017-02-16 15:03:14 -05:00
Dr. Stefan Schimanski 44ea6b3f30 Update generated files 2017-01-29 21:41:45 +01:00
deads2k 9488e2ba30 move testing/core to client-go 2017-01-26 13:54:40 -05:00
Clayton Coleman 469df12038 refactor: move ListOptions references to metav1 2017-01-23 17:52:46 -05:00
deads2k ee6752ef20 find and replace 2017-01-20 08:04:53 -05:00
deads2k c587b8a21e re-run client-gen 2017-01-20 08:02:36 -05:00
Clayton Coleman 9a2a50cda7 refactor: use metav1.ObjectMeta in other types 2017-01-17 16:17:19 -05:00
deads2k f1176d9c5c mechanical repercussions 2017-01-13 08:27:14 -05:00
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Jeff Grafton 20d221f75c Enable auto-generating sources rules 2017-01-05 14:14:13 -08:00
Mike Danese 161c391f44 autogenerated 2016-12-29 13:04:10 -08:00
Chao Xu 03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Mike Danese c87de85347 autoupdate BUILD files 2016-12-12 13:30:07 -08:00
Chao Xu bcc783c594 run hack/update-all.sh 2016-11-23 15:53:09 -08:00
Solly Ross 2c66d47786 HPA: Consider unready pods and missing metrics
Currently, the HPA considers unready pods the same as ready pods when
looking at their CPU and custom metric usage.  However, pods frequently
use extra CPU during initialization, so we want to consider them
separately.

This commit causes the HPA to consider unready pods as having 0 CPU
usage when scaling up, and to ignore them when scaling down.  If, when
scaling up, factoring the unready pods in at 0 CPU would cause a
downscale instead, we simply choose not to scale.  Otherwise, we scale
up by the reduced amount calculated by factoring the pods in at zero
CPU usage.

The effect is that unready pods make the autoscaler somewhat more
conservative -- large increases in CPU usage can still trigger a
scale-up even with unready pods in the mix, but the resulting scale
factor will be smaller, in anticipation of the new pods later becoming
ready and handling load.

Similarly, if there are pods for which no metrics have been retrieved,
these pods are treated as having 100% of the requested metric when
scaling down, and 0% when scaling up.  As above, this cannot change the
direction of the scale.

This commit also changes the HPA to ignore superfluous metrics -- as
long as metrics for all ready pods are present, the HPA will make
scaling decisions.  Currently, this only works for CPU.  For custom
metrics, we cannot identify which metrics belong to which pods if we
receive superfluous metrics, so we abort the scale.
2016-11-08 00:59:23 -05:00
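
Read as pseudocode, the rules above amount to: compute a first estimate from the ready pods that have metrics, recompute with the conservative assumptions filled in for the remaining pods, and refuse to act if those assumptions would flip the direction. Below is a minimal Go sketch of that decision, with made-up types and numbers; the real replica calculator in the HPA controller is considerably more involved.

```go
package main

import "fmt"

// podSample is a hypothetical, simplified view of one pod's CPU usage.
type podSample struct {
	ready         bool
	hasMetric     bool
	usageFraction float64 // observed usage / requested, e.g. 0.9 == 90%
}

// desiredReplicas applies the conservative rules from the commit message:
// pods that are unready or missing metrics count as 0% usage when the first
// estimate says scale up; on scale down, unready pods are ignored and pods
// missing metrics count as 100% of requested.  If those assumptions flip
// the direction of the scale, the current replica count is kept.
func desiredReplicas(current int, target float64, pods []podSample) int {
	// First pass: average only over ready pods that have metrics.
	sum, readyCount := 0.0, 0
	for _, p := range pods {
		if p.ready && p.hasMetric {
			sum += p.usageFraction
			readyCount++
		}
	}
	if readyCount == 0 {
		return current // nothing to base a decision on
	}
	scaleUp := (sum/float64(readyCount))/target > 1.0

	// Second pass: re-average with assumed usage for the remaining pods.
	sum, count := 0.0, 0
	for _, p := range pods {
		switch {
		case p.ready && p.hasMetric:
			sum += p.usageFraction
		case scaleUp:
			sum += 0.0 // unready / metric-less pods assumed idle on scale-up
		case !p.ready:
			continue // unready pods are ignored on scale-down
		default:
			sum += 1.0 // missing metrics assumed at 100% of requested on scale-down
		}
		count++
	}
	newRatio := (sum / float64(count)) / target
	if (newRatio > 1.0) != scaleUp {
		return current // the assumptions would flip the direction: don't scale
	}
	desired := int(float64(current)*newRatio + 0.5)
	if desired < 1 {
		desired = 1
	}
	return desired
}

func main() {
	pods := []podSample{
		{ready: true, hasMetric: true, usageFraction: 1.0},
		{ready: true, hasMetric: true, usageFraction: 0.9},
		{ready: false, hasMetric: true, usageFraction: 1.5}, // still warming up
	}
	fmt.Println(desiredReplicas(3, 0.5, pods))
}
```

In this run the output is 4 rather than the 6 replicas the two ready pods alone would imply at a 50% target: the scale-up still happens, just less aggressively, which is the conservative behavior the commit describes.
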
Mike Danese 3b6a067afc autogenerated 2016-10-21 17:32:32 -07:00