Commit Graph

1178 Commits (ff998ab5667b19a07ccc83778b8160580e137f62)

Author SHA1 Message Date
Piotr Szczesniak 212b459817 Move internal types of hpa from pkg/apis/extensions to pkg/apis/autoscaling 2016-05-09 09:18:13 +02:00
k8s-merge-robot 3ee833ca3b Merge pull request #25006 from liggitt/third-party-root-scope
Automatic merge from submit-queue

Make ThirdPartyResource a root scoped object

ThirdPartyResource (the registration of a third-party type) belongs at the cluster scope. It results in resource handlers being installed in every namespace, and the same name in two namespaces collides (the namespace is ignored when determining group/kind).

ThirdPartyResourceData (an actual instance of that type) is still namespace-scoped.

This PR moves ThirdPartyResource to be a root scope object. Someone previously using ThirdPartyResource definitions in alpha should be able to move them from namespace to root scope like this:

setup (run on 1.2):
```
kubectl create ns ns1

echo '{"kind":"ThirdPartyResource","apiVersion":"extensions/v1beta1","metadata":{"name":"foo.example.com"},"versions":[{"name":"v8"}]}' | kubectl create -f - --namespace=ns1

echo '{"kind":"Foo","apiVersion":"example.com/v8","metadata":{"name":"MyFoo"},"testkey":"testvalue"}' | kubectl create -f - --namespace=ns1
```

export:
```
kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
```

remove namespaced kind registrations (this shouldn't remove the data of that type, which is another possible issue):
```
kubectl delete -f tprs.yaml
```

... upgrade ...

re-register the custom types at the root scope:
```
kubectl create -f tprs.yaml
```

Additionally, pre-1.3 clients that expect to read/write ThirdPartyResource at a namespace scope will not be compatible with 1.3+ servers, and 1.3+ clients that expect to read/write ThirdPartyResource at a root scope will not be compatible with pre-1.3 servers.
2016-05-06 20:50:35 -07:00
Clayton Coleman e0ebcf4216
Split the storage and negotiation parts of Codecs
The codec factory should support two distinct interfaces: negotiating
for a serializer with a client vs. reading or writing data to a storage
form (etcd, disk, etc.). Make the EncodeForVersion and DecodeToVersion
methods take only Encoder and Decoder, with slight refactoring elsewhere.

In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
2016-05-05 12:08:23 -04:00
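
The "write one preferred format, read anything" idea described in that commit can be sketched in a few lines. The following is a minimal, self-contained illustration; the type names and the byte-level "binary" format are stand-ins, not the actual Kubernetes CodecFactory API.
```go
// Sketch: storage always encodes with one preferred serializer, but decodes
// with a "universal" decoder that tries every known format, so older JSON
// objects remain readable. Names here are illustrative only.
package main

import (
	"encoding/json"
	"fmt"
)

type pod struct {
	Name string `json:"name"`
}

type encodeFn func(pod) ([]byte, error)
type decodeFn func([]byte) (pod, error)

// universalDecode tries each decoder in turn, mirroring how a universal
// deserializer lets storage read objects written in older formats.
func universalDecode(data []byte, decoders ...decodeFn) (pod, error) {
	var lastErr error
	for _, d := range decoders {
		p, err := d(data)
		if err == nil {
			return p, nil
		}
		lastErr = err
	}
	return pod{}, fmt.Errorf("no decoder accepted the data: %v", lastErr)
}

func main() {
	jsonDecode := func(b []byte) (p pod, err error) { err = json.Unmarshal(b, &p); return }
	// Pretend this tagged "binary" encoding stands in for protobuf.
	binEncode := encodeFn(func(p pod) ([]byte, error) { return append([]byte{0x01}, p.Name...), nil })
	binDecode := func(b []byte) (pod, error) {
		if len(b) == 0 || b[0] != 0x01 {
			return pod{}, fmt.Errorf("not binary-framed")
		}
		return pod{Name: string(b[1:])}, nil
	}

	old := []byte(`{"name":"legacy"}`) // object written by an older server in JSON
	p, err := universalDecode(old, binDecode, jsonDecode)
	if err != nil {
		panic(err)
	}
	out, _ := binEncode(p) // new writes always use the preferred storage format
	fmt.Println(p.Name, out)
}
```
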
k8s-merge-robot f5e1e9a227 Merge pull request #24912 from bprashanth/petset_controller
Automatic merge from submit-queue

Petset controller

Took longer than I expected. The main parts of this PR are:
1. Identity generation based on petset spec (volumes are mapped per discussion in #18016)
2. Ensure that we create/delete pets in sequence
3. Ensure that we create, wait for healthy, then create; or delete, wait for terminationGrace, then delete
4. Controller that watches apiserver and drives actual -> desired

PVCs are not deleted, yet.
2016-05-05 08:58:23 -07:00
k8s-merge-robot 3faf214506 Merge pull request #24924 from mqliang/pv-prepare-update
Automatic merge from submit-queue

fix PrepareForUpdate bug for PV and PVC
2016-05-05 01:46:21 -07:00
Prashanth Balasubramanian 6bc3052551 PetSet alpha controller 2016-05-04 18:39:17 -07:00
mqliang 0109c08b9b fix PrepareForUpdate bug for HPA 2016-05-05 09:39:03 +08:00
Jordan Liggitt e41d504739 Move ThirdPartyResource to root scoped object 2016-04-30 01:06:07 -04:00
Clayton Coleman fdb110c859
Fix the rest of the code 2016-04-29 17:12:10 -04:00
Jordan Liggitt 1e5815872e Validate deletion timestamp doesn't change on update 2016-04-28 11:50:48 -04:00
mqliang 3bcd5b1648 fix PrepareForUpdate bug for PV and PVC 2016-04-28 19:13:02 +08:00
k8s-merge-robot d0b887e4e0 Merge pull request #24595 from zhouhaibing089/httpserverclose
Automatic merge from submit-queue

Uncomment the code that was commented out due to #19254

Fix https://github.com/kubernetes/kubernetes/issues/24546.

@lavalamp
2016-04-28 01:41:16 -07:00
k8s-merge-robot 28bc4b32c2 Merge pull request #24532 from rsc/master
Automatic merge from submit-queue

apiserver latency reductions

Combined effect of these two commits on the latency observed by the 1000-node kubemark benchmark:

```
name                               old ms/op  new ms/op   delta
LIST_nodes_p50                      127 ±16%    121 ± 9%   -4.58%  (p=0.000 n=29+27)
LIST_nodes_p90                      326 ±12%    266 ±12%  -18.48%  (p=0.000 n=29+27)
LIST_nodes_p99                      453 ±11%    400 ±14%  -11.79%  (p=0.000 n=29+28)
LIST_replicationcontrollers_p50    29.4 ±49%   26.2 ±54%     ~     (p=0.085 n=30+29)
LIST_replicationcontrollers_p90    83.0 ±78%   68.6 ±59%  -17.33%  (p=0.013 n=30+28)
LIST_replicationcontrollers_p99     216 ±43%    177 ±49%  -17.68%  (p=0.000 n=29+29)
DELETE_pods_p50                    24.5 ±14%   24.3 ±13%     ~     (p=0.562 n=30+29)
DELETE_pods_p90                    30.7 ± 1%   30.7 ± 1%   -0.30%  (p=0.011 n=29+29)
DELETE_pods_p99                    77.2 ±34%   54.2 ±23%  -29.76%  (p=0.000 n=30+27)
PUT_replicationcontrollers_p50     5.86 ±26%   5.94 ±32%     ~     (p=0.734 n=29+29)
PUT_replicationcontrollers_p90     15.8 ± 7%   15.5 ± 6%   -2.06%  (p=0.010 n=29+29)
PUT_replicationcontrollers_p99     57.8 ±35%   39.5 ±55%  -31.60%  (p=0.000 n=29+29)
PUT_nodes_p50                      14.9 ± 2%   14.8 ± 2%   -0.68%  (p=0.012 n=30+27)
PUT_nodes_p90                      16.5 ± 1%   16.3 ± 2%   -0.90%  (p=0.000 n=27+28)
PUT_nodes_p99                      57.9 ±47%   41.3 ±35%  -28.61%  (p=0.000 n=30+28)
POST_replicationcontrollers_p50    6.35 ±29%   6.34 ±20%     ~     (p=0.944 n=30+28)
POST_replicationcontrollers_p90    15.4 ± 5%   15.0 ± 5%   -2.18%  (p=0.001 n=29+29)
POST_replicationcontrollers_p99    52.2 ±71%   32.9 ±46%  -36.99%  (p=0.000 n=29+27)
POST_pods_p50                      8.99 ±13%   8.95 ±16%     ~     (p=0.903 n=30+29)
POST_pods_p90                      16.2 ± 4%   16.1 ± 4%     ~     (p=0.287 n=29+29)
POST_pods_p99                      30.9 ±21%   26.4 ±12%  -14.73%  (p=0.000 n=28+28)
POST_bindings_p50                  9.34 ±12%   8.92 ±15%   -4.54%  (p=0.013 n=30+28)
POST_bindings_p90                  16.6 ± 1%   16.5 ± 3%   -0.73%  (p=0.017 n=28+29)
POST_bindings_p99                  23.5 ± 9%   21.1 ± 4%  -10.09%  (p=0.000 n=27+28)
PUT_pods_p50                       10.8 ±11%   10.2 ± 5%   -5.47%  (p=0.000 n=30+27)
PUT_pods_p90                       16.1 ± 1%   16.0 ± 1%   -0.64%  (p=0.000 n=29+28)
PUT_pods_p99                       23.4 ± 9%   20.9 ± 9%  -10.93%  (p=0.000 n=28+27)
DELETE_replicationcontrollers_p50  2.42 ±16%   2.50 ±13%     ~     (p=0.054 n=29+28)
DELETE_replicationcontrollers_p90  11.5 ±12%   11.8 ±13%     ~     (p=0.141 n=30+28)
DELETE_replicationcontrollers_p99  19.5 ±21%   19.1 ±21%     ~     (p=0.397 n=29+29)
GET_nodes_p50                      0.77 ±10%   0.76 ±10%     ~     (p=0.317 n=28+28)
GET_nodes_p90                      1.20 ±16%   1.14 ±24%   -4.66%  (p=0.036 n=28+29)
GET_nodes_p99                      11.4 ±48%    7.5 ±46%  -34.28%  (p=0.000 n=28+29)
GET_replicationcontrollers_p50     0.74 ±17%   0.73 ±17%     ~     (p=0.222 n=30+28)
GET_replicationcontrollers_p90     1.04 ±25%   1.01 ±27%     ~     (p=0.231 n=30+29)
GET_replicationcontrollers_p99     12.1 ±81%  10.0 ±145%     ~     (p=0.063 n=28+29)
GET_pods_p50                       0.78 ±12%   0.77 ±10%     ~     (p=0.178 n=30+28)
GET_pods_p90                       1.06 ±19%   1.02 ±19%     ~     (p=0.120 n=29+28)
GET_pods_p99                       3.92 ±43%   2.45 ±38%  -37.55%  (p=0.000 n=27+25)
LIST_services_p50                  0.20 ±13%   0.20 ±16%     ~     (p=0.854 n=28+29)
LIST_services_p90                  0.28 ±15%   0.27 ±14%     ~     (p=0.219 n=29+28)
LIST_services_p99                  0.49 ±20%   0.47 ±24%     ~     (p=0.140 n=29+29)
LIST_endpoints_p50                 0.19 ±14%   0.19 ±15%     ~     (p=0.709 n=29+29)
LIST_endpoints_p90                 0.26 ±16%   0.26 ±13%     ~     (p=0.274 n=29+28)
LIST_endpoints_p99                 0.46 ±24%   0.44 ±21%     ~     (p=0.111 n=29+29)
LIST_horizontalpodautoscalers_p50  0.16 ±15%   0.15 ±13%     ~     (p=0.253 n=30+27)
LIST_horizontalpodautoscalers_p90  0.22 ±24%   0.21 ±16%     ~     (p=0.152 n=30+28)
LIST_horizontalpodautoscalers_p99  0.31 ±33%   0.31 ±38%     ~     (p=0.817 n=28+29)
LIST_daemonsets_p50                0.16 ±20%   0.15 ±11%     ~     (p=0.135 n=30+27)
LIST_daemonsets_p90                0.22 ±18%   0.21 ±25%     ~     (p=0.135 n=29+28)
LIST_daemonsets_p99                0.29 ±28%   0.29 ±32%     ~     (p=0.606 n=28+28)
LIST_jobs_p50                      0.16 ±16%   0.15 ±12%     ~     (p=0.375 n=29+28)
LIST_jobs_p90                      0.22 ±18%   0.21 ±16%     ~     (p=0.090 n=29+26)
LIST_jobs_p99                      0.31 ±28%   0.28 ±35%  -10.29%  (p=0.005 n=29+27)
LIST_deployments_p50               0.15 ±16%   0.15 ±13%     ~     (p=0.565 n=29+28)
LIST_deployments_p90               0.22 ±22%   0.21 ±19%     ~     (p=0.107 n=30+28)
LIST_deployments_p99               0.31 ±27%   0.29 ±34%     ~     (p=0.068 n=29+28)
LIST_namespaces_p50                0.21 ±25%   0.21 ±26%     ~     (p=0.768 n=29+27)
LIST_namespaces_p90                0.28 ±29%   0.26 ±25%     ~     (p=0.101 n=30+28)
LIST_namespaces_p99                0.30 ±48%   0.29 ±42%     ~     (p=0.339 n=30+29)
LIST_replicasets_p50               0.15 ±18%   0.15 ±16%     ~     (p=0.612 n=30+28)
LIST_replicasets_p90               0.22 ±19%   0.21 ±18%   -5.13%  (p=0.011 n=28+27)
LIST_replicasets_p99               0.31 ±39%   0.28 ±29%     ~     (p=0.066 n=29+28)
LIST_persistentvolumes_p50         0.16 ±23%   0.15 ±21%     ~     (p=0.124 n=30+29)
LIST_persistentvolumes_p90         0.21 ±23%   0.20 ±23%     ~     (p=0.092 n=30+25)
LIST_persistentvolumes_p99         0.21 ±24%   0.20 ±23%     ~     (p=0.053 n=30+25)
LIST_resourcequotas_p50            0.16 ±12%   0.16 ±13%     ~     (p=0.175 n=27+28)
LIST_resourcequotas_p90            0.20 ±22%   0.20 ±24%     ~     (p=0.388 n=30+28)
LIST_resourcequotas_p99            0.22 ±24%   0.22 ±23%     ~     (p=0.575 n=30+28)
LIST_persistentvolumeclaims_p50    0.15 ±21%   0.15 ±29%     ~     (p=0.079 n=30+28)
LIST_persistentvolumeclaims_p90    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_persistentvolumeclaims_p99    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_pods_p50                      68.0 ±16%   56.3 ± 9%  -17.19%  (p=0.000 n=29+28)
LIST_pods_p90                       119 ±19%     93 ± 8%  -21.88%  (p=0.000 n=28+28)
LIST_pods_p99                       230 ±18%    202 ±14%  -12.13%  (p=0.000 n=27+28)
```
2016-04-27 08:32:18 -07:00
Timothy St. Clair 24b4286960 In preparation for new storage backends renaming generic registry store 2016-04-26 08:32:13 -05:00
k8s-merge-robot 293b0d0815 Merge pull request #23493 from soltysh/move_job_internals
Automatic merge from submit-queue

Move internal types of job from pkg/apis/extensions to pkg/apis/batch

This addresses the Job part of #23216; it is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.

@lavalamp @erictune fyi
2016-04-25 20:58:49 -07:00
zhouhaibing089 bf1a3f99c0 Uncomment the code that was commented out due to #19254 2016-04-25 23:21:31 +08:00
Maciej Szulik a3b4447305 Move internal types of job from pkg/apis/extensions to pkg/apis/batch 2016-04-25 11:03:54 +02:00
Clayton Coleman 3111985564 Handle streaming serializers more consistently
Add tests to watch behavior in both protocols (http and websocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
2016-04-22 11:07:24 -04:00
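
To see what the new media type means for a client, here is a rough sketch of a raw watch request; the server URL, resource path, and token are placeholders, and only the Accept value comes from the commit message above.
```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholder server address and resource path.
	req, err := http.NewRequest("GET",
		"https://apiserver.example:6443/api/v1/pods?watch=true", nil)
	if err != nil {
		panic(err)
	}
	// Ask for protobuf-framed watch events rather than a plain object/Status body.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf;stream=watch")
	req.Header.Set("Authorization", "Bearer <token>") // placeholder credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // expected unless a real apiserver is reachable
		return
	}
	defer resp.Body.Close()

	// The body is a sequence of framed, protobuf-encoded watch events.
	buf := make([]byte, 4096)
	for {
		n, err := resp.Body.Read(buf)
		if n > 0 {
			fmt.Printf("read %d bytes of watch frame data\n", n)
		}
		if err != nil {
			if err != io.EOF {
				fmt.Println("stream ended:", err)
			}
			return
		}
	}
}
```
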
Russ Cox 58629a28e4 pkg/registry/pod: avoid allocation in common pod search
PodToSelectableFields creates a map of field attributes
for a particular pod filter query to use. If the result
of the query does not depend on the fields at all, avoid
creating the map.

This is the source of about half the allocated memory
(by byte volume) during the kubemark benchmark, and it
is in turn the main driver of CPU usage during the benchmark,
because of the many background pod watches going on,
as well as the occasional list pods.
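
A self-contained sketch of that fast path follows; the field names and matcher shape are illustrative, not the real pkg/registry/pod code.
```go
// Only build the per-pod field map when the query actually selects on fields.
package main

import "fmt"

type pod struct {
	Name, Namespace, NodeName string
}

// fieldSelector is a stand-in for fields.Selector; empty means "match everything".
type fieldSelector map[string]string

func (s fieldSelector) empty() bool { return len(s) == 0 }

// matches builds the selectable-fields map lazily: for the common case of a
// watch or list with no field selector, no map is allocated at all.
func matches(p pod, sel fieldSelector) bool {
	if sel.empty() {
		return true // fast path: no per-pod map allocation
	}
	selectable := map[string]string{ // stand-in for PodToSelectableFields
		"metadata.name":      p.Name,
		"metadata.namespace": p.Namespace,
		"spec.nodeName":      p.NodeName,
	}
	for k, v := range sel {
		if selectable[k] != v {
			return false
		}
	}
	return true
}

func main() {
	p := pod{Name: "web-1", Namespace: "default", NodeName: "node-a"}
	fmt.Println(matches(p, nil))                                 // true, no allocation
	fmt.Println(matches(p, fieldSelector{"spec.nodeName": "x"})) // false
}
```
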

These benchmarks for 1000-node kubemark show the difference
from my previous CL (caching timers) to this CL:

name                               old ms/op   new ms/op   delta
LIST_nodes_p50                       124 ±13%    121 ± 9%     ~     (p=0.136 n=29+27)
LIST_nodes_p90                       278 ±15%    266 ±12%   -4.26%  (p=0.031 n=29+27)
LIST_nodes_p99                       405 ±19%    400 ±14%     ~     (p=0.864 n=28+28)
LIST_pods_p50                       65.3 ±13%   56.3 ± 9%  -13.75%  (p=0.000 n=29+28)
LIST_pods_p90                        115 ±12%     93 ± 8%  -18.75%  (p=0.000 n=27+28)
LIST_pods_p99                        226 ±21%    202 ±14%  -10.52%  (p=0.000 n=28+28)
LIST_replicationcontrollers_p50     26.6 ±43%   26.2 ±54%     ~     (p=0.487 n=29+29)
LIST_replicationcontrollers_p90     68.7 ±63%   68.6 ±59%     ~     (p=0.931 n=29+28)
LIST_replicationcontrollers_p99      173 ±41%    177 ±49%     ~     (p=0.618 n=28+29)
PUT_replicationcontrollers_p50      5.83 ±36%   5.94 ±32%     ~     (p=0.818 n=28+29)
PUT_replicationcontrollers_p90      15.9 ± 6%   15.5 ± 6%   -2.23%  (p=0.019 n=28+29)
PUT_replicationcontrollers_p99      56.7 ±41%   39.5 ±55%  -30.29%  (p=0.000 n=28+29)
DELETE_pods_p50                     24.3 ±17%   24.3 ±13%     ~     (p=0.855 n=28+29)
DELETE_pods_p90                     30.6 ± 0%   30.7 ± 1%     ~     (p=0.140 n=28+29)
DELETE_pods_p99                     56.3 ±27%   54.2 ±23%     ~     (p=0.188 n=28+27)
PUT_nodes_p50                       14.9 ± 1%   14.8 ± 2%     ~     (p=0.781 n=28+27)
PUT_nodes_p90                       16.4 ± 2%   16.3 ± 2%     ~     (p=0.321 n=28+28)
PUT_nodes_p99                       44.6 ±42%   41.3 ±35%     ~     (p=0.361 n=29+28)
POST_replicationcontrollers_p50     6.33 ±23%   6.34 ±20%     ~     (p=0.993 n=28+28)
POST_replicationcontrollers_p90     15.2 ± 6%   15.0 ± 5%     ~     (p=0.106 n=28+29)
POST_replicationcontrollers_p99     53.4 ±52%   32.9 ±46%  -38.41%  (p=0.000 n=27+27)
POST_pods_p50                       9.33 ±13%   8.95 ±16%     ~     (p=0.069 n=29+29)
POST_pods_p90                       16.3 ± 4%   16.1 ± 4%   -1.43%  (p=0.044 n=29+29)
POST_pods_p99                       28.4 ±23%   26.4 ±12%   -7.05%  (p=0.004 n=29+28)
DELETE_replicationcontrollers_p50   2.50 ±13%   2.50 ±13%     ~     (p=0.649 n=29+28)
DELETE_replicationcontrollers_p90   11.7 ±10%   11.8 ±13%     ~     (p=0.863 n=28+28)
DELETE_replicationcontrollers_p99   19.0 ±22%   19.1 ±21%     ~     (p=0.818 n=28+29)
PUT_pods_p50                        10.3 ± 5%   10.2 ± 5%     ~     (p=0.235 n=28+27)
PUT_pods_p90                        16.0 ± 1%   16.0 ± 1%     ~     (p=0.380 n=29+28)
PUT_pods_p99                        21.6 ±14%   20.9 ± 9%   -3.15%  (p=0.010 n=28+27)
POST_bindings_p50                   8.98 ±17%   8.92 ±15%     ~     (p=0.666 n=29+28)
POST_bindings_p90                   16.5 ± 2%   16.5 ± 3%     ~     (p=0.840 n=26+29)
POST_bindings_p99                   21.4 ± 5%   21.1 ± 4%   -1.21%  (p=0.049 n=27+28)
GET_nodes_p90                       1.18 ±19%   1.14 ±24%     ~     (p=0.137 n=29+29)
GET_nodes_p99                       8.29 ±40%   7.50 ±46%     ~     (p=0.106 n=28+29)
GET_replicationcontrollers_p90      1.03 ±21%   1.01 ±27%     ~     (p=0.489 n=29+29)
GET_replicationcontrollers_p99     10.0 ±123%  10.0 ±145%     ~     (p=0.794 n=28+29)
GET_pods_p90                        1.08 ±21%   1.02 ±19%     ~     (p=0.083 n=29+28)
GET_pods_p99                        2.81 ±39%   2.45 ±38%  -12.78%  (p=0.021 n=28+25)

Overall the two CLs combined have this effect:

name                               old ms/op  new ms/op   delta
LIST_nodes_p50                      127 ±16%    121 ± 9%   -4.58%  (p=0.000 n=29+27)
LIST_nodes_p90                      326 ±12%    266 ±12%  -18.48%  (p=0.000 n=29+27)
LIST_nodes_p99                      453 ±11%    400 ±14%  -11.79%  (p=0.000 n=29+28)
LIST_replicationcontrollers_p50    29.4 ±49%   26.2 ±54%     ~     (p=0.085 n=30+29)
LIST_replicationcontrollers_p90    83.0 ±78%   68.6 ±59%  -17.33%  (p=0.013 n=30+28)
LIST_replicationcontrollers_p99     216 ±43%    177 ±49%  -17.68%  (p=0.000 n=29+29)
DELETE_pods_p50                    24.5 ±14%   24.3 ±13%     ~     (p=0.562 n=30+29)
DELETE_pods_p90                    30.7 ± 1%   30.7 ± 1%   -0.30%  (p=0.011 n=29+29)
DELETE_pods_p99                    77.2 ±34%   54.2 ±23%  -29.76%  (p=0.000 n=30+27)
PUT_replicationcontrollers_p50     5.86 ±26%   5.94 ±32%     ~     (p=0.734 n=29+29)
PUT_replicationcontrollers_p90     15.8 ± 7%   15.5 ± 6%   -2.06%  (p=0.010 n=29+29)
PUT_replicationcontrollers_p99     57.8 ±35%   39.5 ±55%  -31.60%  (p=0.000 n=29+29)
PUT_nodes_p50                      14.9 ± 2%   14.8 ± 2%   -0.68%  (p=0.012 n=30+27)
PUT_nodes_p90                      16.5 ± 1%   16.3 ± 2%   -0.90%  (p=0.000 n=27+28)
PUT_nodes_p99                      57.9 ±47%   41.3 ±35%  -28.61%  (p=0.000 n=30+28)
POST_replicationcontrollers_p50    6.35 ±29%   6.34 ±20%     ~     (p=0.944 n=30+28)
POST_replicationcontrollers_p90    15.4 ± 5%   15.0 ± 5%   -2.18%  (p=0.001 n=29+29)
POST_replicationcontrollers_p99    52.2 ±71%   32.9 ±46%  -36.99%  (p=0.000 n=29+27)
POST_pods_p50                      8.99 ±13%   8.95 ±16%     ~     (p=0.903 n=30+29)
POST_pods_p90                      16.2 ± 4%   16.1 ± 4%     ~     (p=0.287 n=29+29)
POST_pods_p99                      30.9 ±21%   26.4 ±12%  -14.73%  (p=0.000 n=28+28)
POST_bindings_p50                  9.34 ±12%   8.92 ±15%   -4.54%  (p=0.013 n=30+28)
POST_bindings_p90                  16.6 ± 1%   16.5 ± 3%   -0.73%  (p=0.017 n=28+29)
POST_bindings_p99                  23.5 ± 9%   21.1 ± 4%  -10.09%  (p=0.000 n=27+28)
PUT_pods_p50                       10.8 ±11%   10.2 ± 5%   -5.47%  (p=0.000 n=30+27)
PUT_pods_p90                       16.1 ± 1%   16.0 ± 1%   -0.64%  (p=0.000 n=29+28)
PUT_pods_p99                       23.4 ± 9%   20.9 ± 9%  -10.93%  (p=0.000 n=28+27)
DELETE_replicationcontrollers_p50  2.42 ±16%   2.50 ±13%     ~     (p=0.054 n=29+28)
DELETE_replicationcontrollers_p90  11.5 ±12%   11.8 ±13%     ~     (p=0.141 n=30+28)
DELETE_replicationcontrollers_p99  19.5 ±21%   19.1 ±21%     ~     (p=0.397 n=29+29)
GET_nodes_p50                      0.77 ±10%   0.76 ±10%     ~     (p=0.317 n=28+28)
GET_nodes_p90                      1.20 ±16%   1.14 ±24%   -4.66%  (p=0.036 n=28+29)
GET_nodes_p99                      11.4 ±48%    7.5 ±46%  -34.28%  (p=0.000 n=28+29)
GET_replicationcontrollers_p50     0.74 ±17%   0.73 ±17%     ~     (p=0.222 n=30+28)
GET_replicationcontrollers_p90     1.04 ±25%   1.01 ±27%     ~     (p=0.231 n=30+29)
GET_replicationcontrollers_p99     12.1 ±81%  10.0 ±145%     ~     (p=0.063 n=28+29)
GET_pods_p50                       0.78 ±12%   0.77 ±10%     ~     (p=0.178 n=30+28)
GET_pods_p90                       1.06 ±19%   1.02 ±19%     ~     (p=0.120 n=29+28)
GET_pods_p99                       3.92 ±43%   2.45 ±38%  -37.55%  (p=0.000 n=27+25)
LIST_services_p50                  0.20 ±13%   0.20 ±16%     ~     (p=0.854 n=28+29)
LIST_services_p90                  0.28 ±15%   0.27 ±14%     ~     (p=0.219 n=29+28)
LIST_services_p99                  0.49 ±20%   0.47 ±24%     ~     (p=0.140 n=29+29)
LIST_endpoints_p50                 0.19 ±14%   0.19 ±15%     ~     (p=0.709 n=29+29)
LIST_endpoints_p90                 0.26 ±16%   0.26 ±13%     ~     (p=0.274 n=29+28)
LIST_endpoints_p99                 0.46 ±24%   0.44 ±21%     ~     (p=0.111 n=29+29)
LIST_horizontalpodautoscalers_p50  0.16 ±15%   0.15 ±13%     ~     (p=0.253 n=30+27)
LIST_horizontalpodautoscalers_p90  0.22 ±24%   0.21 ±16%     ~     (p=0.152 n=30+28)
LIST_horizontalpodautoscalers_p99  0.31 ±33%   0.31 ±38%     ~     (p=0.817 n=28+29)
LIST_daemonsets_p50                0.16 ±20%   0.15 ±11%     ~     (p=0.135 n=30+27)
LIST_daemonsets_p90                0.22 ±18%   0.21 ±25%     ~     (p=0.135 n=29+28)
LIST_daemonsets_p99                0.29 ±28%   0.29 ±32%     ~     (p=0.606 n=28+28)
LIST_jobs_p50                      0.16 ±16%   0.15 ±12%     ~     (p=0.375 n=29+28)
LIST_jobs_p90                      0.22 ±18%   0.21 ±16%     ~     (p=0.090 n=29+26)
LIST_jobs_p99                      0.31 ±28%   0.28 ±35%  -10.29%  (p=0.005 n=29+27)
LIST_deployments_p50               0.15 ±16%   0.15 ±13%     ~     (p=0.565 n=29+28)
LIST_deployments_p90               0.22 ±22%   0.21 ±19%     ~     (p=0.107 n=30+28)
LIST_deployments_p99               0.31 ±27%   0.29 ±34%     ~     (p=0.068 n=29+28)
LIST_namespaces_p50                0.21 ±25%   0.21 ±26%     ~     (p=0.768 n=29+27)
LIST_namespaces_p90                0.28 ±29%   0.26 ±25%     ~     (p=0.101 n=30+28)
LIST_namespaces_p99                0.30 ±48%   0.29 ±42%     ~     (p=0.339 n=30+29)
LIST_replicasets_p50               0.15 ±18%   0.15 ±16%     ~     (p=0.612 n=30+28)
LIST_replicasets_p90               0.22 ±19%   0.21 ±18%   -5.13%  (p=0.011 n=28+27)
LIST_replicasets_p99               0.31 ±39%   0.28 ±29%     ~     (p=0.066 n=29+28)
LIST_persistentvolumes_p50         0.16 ±23%   0.15 ±21%     ~     (p=0.124 n=30+29)
LIST_persistentvolumes_p90         0.21 ±23%   0.20 ±23%     ~     (p=0.092 n=30+25)
LIST_persistentvolumes_p99         0.21 ±24%   0.20 ±23%     ~     (p=0.053 n=30+25)
LIST_resourcequotas_p50            0.16 ±12%   0.16 ±13%     ~     (p=0.175 n=27+28)
LIST_resourcequotas_p90            0.20 ±22%   0.20 ±24%     ~     (p=0.388 n=30+28)
LIST_resourcequotas_p99            0.22 ±24%   0.22 ±23%     ~     (p=0.575 n=30+28)
LIST_persistentvolumeclaims_p50    0.15 ±21%   0.15 ±29%     ~     (p=0.079 n=30+28)
LIST_persistentvolumeclaims_p90    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_persistentvolumeclaims_p99    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_pods_p50                      68.0 ±16%   56.3 ± 9%  -17.19%  (p=0.000 n=29+28)
LIST_pods_p90                       119 ±19%     93 ± 8%  -21.88%  (p=0.000 n=28+28)
LIST_pods_p99                       230 ±18%    202 ±14%  -12.13%  (p=0.000 n=27+28)
2016-04-21 15:53:47 -04:00
Prashanth Balasubramanian 0ac10c6cc2 PetSet type, apps apigroup 2016-04-20 18:49:31 -07:00
Clayton Coleman a5ff573263 ThirdPartyResourceCodec should implement streaming.Framer
Wrappers must proxy NewFrameReader|Writer for now (until we potentially
refactor the codec factory to separate them).
2016-04-18 21:24:26 -04:00
k8s-merge-robot 2bf52175f9 Merge pull request #23923 from hongchaodeng/exp
Automatic merge from submit-queue

Decouple etcd node.expiration logic from DeletionTimestamp

ref: https://github.com/kubernetes/kubernetes/issues/23902
2016-04-17 04:12:26 -07:00
k8s-merge-robot a275a045d1 Merge pull request #23914 from sky-uk/make-etcd-cache-size-configurable
Automatic merge from submit-queue

Make etcd cache size configurable

Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.

I'm not sure what a sensible default is here. I'm still experimenting on my own clusters. 50 gives me a 270MB max footprint. 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.

There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- 1.2 etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists on the node status.

This is provided as a fix for #23323
2016-04-17 00:06:31 -07:00
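
The caching pitfalls listed in that PR can be shown with a toy model; the structure below is illustrative only, not the real apiserver cache.
```go
// Entries are keyed by the object's modifiedIndex, so an update adds a new
// entry instead of replacing the old one; without LRU eviction or a sensible
// size bound, old versions simply accumulate.
package main

import "fmt"

type cached struct {
	key           string
	modifiedIndex uint64
	value         string
}

type indexCache struct {
	max     int
	entries map[uint64]cached // keyed by modifiedIndex, not by object key
}

func (c *indexCache) put(e cached) {
	if len(c.entries) >= c.max {
		return // full: nothing is evicted, new entries are simply not cached
	}
	c.entries[e.modifiedIndex] = e
}

func main() {
	c := &indexCache{max: 50000, entries: map[uint64]cached{}}
	// Every status update of the same node lands under a new modifiedIndex,
	// so all of its historical versions stay resident.
	for i := uint64(1); i <= 5; i++ {
		c.put(cached{key: "node-1", modifiedIndex: i, value: fmt.Sprintf("status-%d", i)})
	}
	fmt.Println("entries held for node-1:", len(c.entries)) // 5, not 1
}
```
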
Hongchao Deng b9745999c9 Decouple etcd node.expiration logic from DeleitonTimestamp 2016-04-13 15:11:53 -07:00
Daniel Smith 4c539bf082 Merge pull request #23490 from wojtek-t/remove_set_from_storage_interface
Remove Set() from storage.Interface.
2016-04-13 14:22:05 -07:00
k8s-merge-robot f5e8e7453b Merge pull request #23806 from smarterclayton/streaming_watch
Automatic merge from submit-queue

Implement a streaming serializer for watch

Changeover watch to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).

@wojtek-t @lavalamp
2016-04-13 05:18:17 -07:00
k8s-merge-robot acf9492cb1 Merge pull request #23660 from goltermann/vetclean
Automatic merge from submit-queue

Additional go vet fixes

Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
2016-04-12 06:22:16 -07:00
James Ravn 5bb0595260 Make deserialization cache size configurable
Instead of the default 50K entries, allow users to specify more sensible
sizes for their cluster.
2016-04-12 13:42:27 +01:00
Clayton Coleman 3474911736 Implement a streaming serializer for watch
Changeover watch to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
2016-04-11 11:22:05 -04:00
goltermann 696423e044 Vet fixes, mostly pass lock by value errors. 2016-04-06 11:27:40 -07:00
Wojciech Tyczynski 53f433f019 Remove Set() from storage.Interface. 2016-04-04 17:54:18 +02:00
k8s-merge-robot f5c93c8ddc Merge pull request #23472 from wojtek-t/fix_object_meta_for
Automatic merge from submit-queue

Switch from api.ObjectMetaFor to meta.Accessor in most of places

Fix #23278

@smarterclayton @lavalamp
2016-04-02 02:33:40 -07:00
Brendan Burns be6c5b332b Add third party support to kubectl 2016-03-31 10:53:32 -07:00
Wojciech Tyczynski 2699be2e7e Switch api.ObjectMetaFor to meta.Accessor 2016-03-31 17:52:31 +02:00
Tommy Murphy 4d22c2fd6a IngressTLS: allow secretName to be blank for SNI routing 2016-03-28 21:25:54 -04:00
k8s-merge-robot 95e09e303f Merge pull request #22965 from caesarxuchao/delete-UID-precondition
Auto commit by PR queue bot
2016-03-26 09:36:28 -07:00
goltermann 32d569d6c7 Fixing all the "composite literal uses unkeyed fields" Vet errors. 2016-03-25 15:25:09 -07:00
Chao Xu 31b425b3a1 add delete precondition 2016-03-25 11:21:39 -07:00
k8s-merge-robot 4e4ad61260 Merge pull request #23366 from goltermann/vet
Auto commit by PR queue bot
2016-03-24 21:50:56 -07:00
k8s-merge-robot 2777cd7e75 Merge pull request #23295 from hongchaodeng/error
Auto commit by PR queue bot
2016-03-23 02:27:36 -07:00
goltermann 34d4eaea08 Fixing several (but not all) go vet errors. Most are around string formatting or unreachable code. 2016-03-22 17:26:50 -07:00
Hongchao Deng 189ce6e397 storage: add custom storage error 2016-03-22 08:19:16 -07:00
harry b0900bf0d4 Refactor diff into sub pkg 2016-03-21 20:21:39 +08:00
k8s-merge-robot 782ba437f1 Merge pull request #23003 from deads2k/no-proxy-cidr
Auto commit by PR queue bot
2016-03-17 14:16:11 -07:00
deads2k ab03317d96 support CIDRs in NO_PROXY 2016-03-16 16:22:54 -04:00
Timothy St. Clair d3da93c174 Renaming api/errors/etcd to api/errors/storage as it no longer
has any etcd-specific dependencies. Reference issue #17546
2016-03-15 20:23:47 -05:00
Jordan Liggitt a1c2267f20 Decrease parallelism in deletecollection test, lengthen test etcd certs 2016-03-12 18:30:12 -05:00
k8s-merge-robot 5f5ac27996 Merge pull request #22502 from caesarxuchao/ignore-notfound-etcd
Auto commit by PR queue bot
2016-03-11 15:53:51 -08:00
k8s-merge-robot 5db0feb202 Merge pull request #22017 from caesarxuchao/fix-21955
Auto commit by PR queue bot
2016-03-10 14:37:43 -08:00
Andy Goldstein cdd339505e Merge pull request #22758 from madhusudancs/replicaset-nonpointer-template
ReplicaSetSpec.Template shouldn't be a pointer.
2016-03-10 15:35:04 -05:00
Madhusudan.C.S db48dcf583 ReplicaSetSpec.Template shouldn't be a pointer.
PodTemplateSpec should be consistent for all the types in extensions/v1beta1.

See PR #19510.
2016-03-09 21:24:16 -08:00
Madhusudan.C.S e8ee3eda2a Pass ResourceVersion in Scale object back to RC before updating RC so that it can be used to check for conflicts. 2016-03-09 19:44:21 -08:00
Madhusudan.C.S fe26381c90 Support for both map-based and set-based selectors in extensions/v1beta1.Scale
Here are a list of changes along with an explanation of how they work:
1. Add a new string field called TargetSelector to the external version of
   extensions Scale type (extensions/v1beta1.Scale). This is a serialized
   version of either the map-based selector (in case of ReplicationControllers)
   or the unversioned.LabelSelector struct (in case of Deployments and
   ReplicaSets).
2. Change the selector field in the internal Scale type (extensions.Scale) to
   unversioned.LabelSelector.
3. Add conversion functions to convert from two external selector fields to a
   single internal selector field. The rules for conversion are as follows:
   i.   If the target resource that this scale targets supports LabelSelector
        (Deployments and ReplicaSets), then serialize the LabelSelector and
        store the string in the TargetSelector field in the external version
        and leave the map-based Selector field as nil.
   ii.  If the target resource only supports a map-based selector
        (ReplicationControllers), then still serialize that selector and
	store the serialized string in the TargetSelector field. Also,
	set the Selector map field in the external Scale type.
   iii. When converting from external to internal version, parse the
        TargetSelector string into LabelSelector struct if the string isn't
	empty. If it is empty, then check if the Selector map is set and just
	assign that map to the MatchLabels component of the LabelSelector.
   iv.  When converting from internal to external version, serialize the
        LabelSelector and store it in the TargetSelector field. If only
	the MatchLabel component is set, then also copy that value to
	the Selector map field in the external version.
4. HPA now just converts the LabelSelector field to a Selector interface
   type to list the pods.
5. Scale Get and Update etcd methods for Deployments and ReplicaSets now
   return extensions.Scale instead of autoscaling.Scale.
6. Consequently, the SubresourceGroupVersion override and the "is autoscaling
   enabled" check are now removed from pkg/master/master.go.
7. Other small changes to labels package, fuzzer and LabelSelector
   helpers to piece this all together.
8. Add unit tests to HPA targeting Deployments and ReplicaSets.
9. Add an e2e test to HPA targeting ReplicaSets.
2016-03-09 17:54:17 -08:00
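
A hedged sketch of conversion rule (iii) above follows; the structs and the naive "k=v,k=v" parser are illustrative, not the real conversion functions.
```go
// External Scale to internal Scale: prefer the serialized TargetSelector when
// present, otherwise fall back to the legacy map-based Selector as MatchLabels.
package main

import (
	"fmt"
	"strings"
)

type externalScaleStatus struct {
	Selector       map[string]string // legacy map-based selector (ReplicationControllers)
	TargetSelector string            // serialized label selector (Deployments, ReplicaSets)
}

type labelSelector struct {
	MatchLabels map[string]string
}

func convertToInternal(ext externalScaleStatus) labelSelector {
	if ext.TargetSelector != "" {
		// Parse the serialized selector; a real implementation would use the
		// unversioned.LabelSelector parser and support match expressions.
		out := labelSelector{MatchLabels: map[string]string{}}
		for _, pair := range strings.Split(ext.TargetSelector, ",") {
			if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 {
				out.MatchLabels[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
			}
		}
		return out
	}
	if ext.Selector != nil {
		return labelSelector{MatchLabels: ext.Selector}
	}
	return labelSelector{}
}

func main() {
	fmt.Println(convertToInternal(externalScaleStatus{TargetSelector: "app=web,tier=frontend"}))
	fmt.Println(convertToInternal(externalScaleStatus{Selector: map[string]string{"app": "db"}}))
}
```
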
k8s-merge-robot 89c9c24987 Merge pull request #21964 from caesarxuchao/fix-thirdparty-parameter
Auto commit by PR queue bot
2016-03-07 00:59:17 -08:00
Chao Xu ff446ece57 adding a test to make sure the ignore NotFound error patch is working 2016-03-05 22:32:58 -08:00
AdoHe 5fdfc4bde3 fix "cannot export service" bug 2016-03-05 11:23:50 -05:00
k8s-merge-robot b198c820cd Merge pull request #22402 from erictune/psp-simplify
Auto commit by PR queue bot
2016-03-05 07:55:19 -08:00
k8s-merge-robot a435537e27 Merge pull request #21966 from madhusudancs/scale-deployment-replicaset
Auto commit by PR queue bot
2016-03-04 14:40:10 -08:00
Madhusudan.C.S fa0794098f Define etcd storage methods for replicationcontrollers/scale subresource.
Also register replicationcontrollers/scale subresource. Along with
registering the resource, also specify the cross-group override for the
   subresource since Scale belongs to autoscaling/v1 but
ReplicationController belongs to api/v1.
2016-03-04 11:02:37 -08:00
Madhusudan.C.S 9e99f9fa0e Register scale subresource for Deployments and ReplicaSets.
Define etcd registry methods for scale subresource in Deployments and
ReplicaSets. Register them with the API server.
2016-03-04 11:01:36 -08:00
Chao Xu a3a6130f44 ignore NotFound error in etcd 2016-03-03 13:26:40 -08:00
Eric Tune 4d090bfb09 Rename PodSecurityPolicy fields
In podSecurityPolicy:
1. Rename .seLinuxContext to .seLinux
2. Rename .seLinux.type to .seLinux.rule
3. Rename .runAsUser.type to .runAsUser.rule
4. Rename .seLinux.SELinuxOptions

1,2,3 as suggested by thockin in #22159.
I added 3 for consistency with 2.
2016-03-03 11:49:48 -08:00
k8s-merge-robot d81d823ca5 Merge pull request #22393 from eparis/blunderbuss
Auto commit by PR queue bot
2016-03-02 18:51:56 -08:00
Eric Paris 5e5a823294 Move blunderbuss assignees into tree 2016-03-02 20:46:32 -05:00
Chao Xu 8566056d18 revert 20202 2016-02-28 19:03:22 -08:00
k8s-merge-robot 43792754d8 Merge pull request #21469 from wojtek-t/parallel_namespace_deletion
Auto commit by PR queue bot
2016-02-27 07:26:49 -08:00
Fabio Yeon 375b4ca8d6 Revert "Revert 20202. Use other measures to prevent race in test-cmd.sh" 2016-02-26 19:25:08 -08:00
k8s-merge-robot a5ceafc48a Merge pull request #21175 from caesarxuchao/revert-20202
Auto commit by PR queue bot
2016-02-26 18:33:42 -08:00
Chao Xu 56bddbcae5 fix thirdparty discovery, add test 2016-02-25 14:13:28 -08:00
Eric Tune 875755f992 Added Selector Generation to Job.
Added selector generation to Job's
strategy.Validate, right before validation.
Can't do it in defaulting since the UID is not known.

Added a validation to Job to ensure that the generated
labels and selector are correct when generation was requested.
This happens right after generation, but validation is in a better
place to return an error.

Adds "manualSelector" field to batch/v1 Job to control selector generation.
Adds same field to extensions/__internal.  Conversion between those two
is automatic.

Adds "autoSelector" field to extensions/v1beta1 Job.  Used for storing batch/v1 Jobs
    - Default for v1 is to do generation.
    - Default for v1beta1 is to not do it.
    - In both cases, unset == false == do the default thing.

Release notes:
Added batch/v1 group, which contains just Job, and which is the next
version of extensions/v1beta1 Job.

The changes from the previous version are:
- Users no longer need to ensure labels on their pod template are unique to the enclosing
  job (but may add labels as needed for categorization).
- In v1beta1, job.spec.selector was defaulted from pod labels, with the user responsible for uniqueness.
  In v1, a unique label is generated and added to the pod template, and used as the selector (other
  labels added by user stay on pod template, but need not be used by selector).
- a new field called "manualSelector" exists to control whether the new behavior is used,
  versus a more error-prone but more flexible "manual" (not generated) selector.  Most users
  will not need to use this field and should leave it unset.

Users who are creating extensions.Job Go objects and then posting them using the Go client
will see a change in the default behavior.  They need to either stop providing a selector (relying on
selector generation) or else specify "spec.manualSelector" until they are ready to do the former.
2016-02-25 09:28:07 -08:00
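
A minimal sketch of the generation behavior described in that commit, assuming an illustrative generated label key ("controller-uid"); the struct below is a local stand-in, not the real batch/v1 types.
```go
// When manualSelector is unset/false, a unique label derived from the Job's UID
// is added to the pod template and used as the selector; in manual mode the
// user-provided selector and template labels are left alone.
package main

import "fmt"

type jobSpec struct {
	ManualSelector bool
	Selector       map[string]string // user-provided selector (manual mode)
	TemplateLabels map[string]string // labels on the pod template
}

// defaultSelector fills in a generated selector unless the user asked for manual mode.
func defaultSelector(uid string, spec *jobSpec) {
	if spec.ManualSelector {
		return // user owns the selector and pod-template labels
	}
	if spec.TemplateLabels == nil {
		spec.TemplateLabels = map[string]string{}
	}
	generated := map[string]string{"controller-uid": uid} // assumed key name, for illustration
	for k, v := range generated {
		spec.TemplateLabels[k] = v // user labels stay; the unique label is added
	}
	spec.Selector = generated
}

func main() {
	s := &jobSpec{TemplateLabels: map[string]string{"app": "myjob"}}
	defaultSelector("1234-abcd", s)
	fmt.Println(s.Selector, s.TemplateLabels)
}
```
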
Wojciech Tyczynski 506899008f Parallelization of namespace deletion 2016-02-25 16:33:25 +01:00
Chao Xu 5b108acb38 fix ParameterCodec for thirdparty resource 2016-02-24 21:45:30 -08:00
Dawn Chen a8c0ac88fc Merge pull request #21754 from kargakis/use-generation-for-deployments
Use generation for deployments
2016-02-23 16:42:16 -08:00
kargakis 418d79cb78 extensions: add observedGeneration for deployments 2016-02-23 18:47:40 +01:00
k8s-merge-robot d0ce85a6d1 Merge pull request #21687 from kargakis/generation-updates-for-label-annotation-changes
Auto commit by PR queue bot
2016-02-23 07:51:48 -08:00
kargakis 69cd75c6a8 registry: remove todos about rc/rs label/annotation updates 2016-02-23 10:24:02 +01:00
Wojciech Tyczynski 4eadc5e97b Introduce RESTOptions to configure per-resource storage 2016-02-22 16:28:28 +01:00
k8s-merge-robot 9b9d63ac5e Merge pull request #21340 from liggitt/delete-collection-not-found
Auto commit by PR queue bot
2016-02-22 03:37:42 -08:00
Chao Xu 4342a8c4f0 revert 20202 2016-02-19 11:37:54 -08:00
feihujiang ac9f890238 Support the subresource of service proxy 2016-02-18 15:16:05 +08:00
k8s-merge-robot ef505d8fa3 Merge pull request #19771 from derekwaynecarr/field_selector
Auto commit by PR queue bot
2016-02-17 09:20:40 -08:00
Jordan Liggitt d575a954fa Tolerate individual NotFound errors in DeleteCollection 2016-02-17 00:06:44 -05:00
derekwaynecarr 76f2cc6a11 Add field selector for pod.spec.restartPolicy 2016-02-16 16:51:42 -05:00
feihujiang e85253916f Support the subresource of node proxy 2016-02-16 17:02:45 +08:00
deads2k 9901a386c3 remove ResourceIsValid 2016-02-15 07:49:48 -05:00
k8s-merge-robot 957ce699af Merge pull request #20906 from kargakis/status-updates-in-deployments
Auto commit by PR queue bot
2016-02-13 18:24:36 -08:00
k8s-merge-robot dee94c5355 Merge pull request #20897 from wojtek-t/fix_identical_writes
Auto commit by PR queue bot
2016-02-13 06:24:34 -08:00
Wojciech Tyczynski 2e97793840 Don't store no-op updates in etcd. 2016-02-12 09:23:28 +01:00
Madhusudan.C.S 8c558088ee Allow a DaemonSet pod template to be updated in storage.
This should allow users to update DaemonSet pods by manually deleting
the corresponding running pods. Users can use this mechanism for
DaemonSet updates until we implement Deployment style rolling update
for DaemonSet.
2016-02-11 16:07:32 -08:00
Michail Kargakis 47a94fd051 registry: reject new labels on deployment status updates 2016-02-11 11:33:25 +01:00
Madhusudan.C.S ad9ba23995 Comment out DaemonSet update type fields and remove the code that depends on it.
Leaving the type fields as comments for reference and as a reminder, but
deleting the conversion, defaulting and validation code. That code can
always be brought back from the previous PR once the types are
introduced. Because builds break without it anyway, that serves as a
reminder, so there is no need to leave it commented out.
2016-02-10 15:44:01 -08:00
k8s-merge-robot 41a98b43e4 Merge pull request #19840 from madhusudancs/replicaset-deployment
Auto commit by PR queue bot
2016-02-09 18:57:42 -08:00
Madhusudan.C.S e7a9f30936 Address review comments. 2016-02-09 15:50:01 -08:00
k8s-merge-robot b98d9a21a1 Merge pull request #20818 from deads2k/remove-mixed-case
Auto commit by PR queue bot
2016-02-09 05:06:45 -08:00
Madhusudan.C.S ed7ad6dcf3 Make deployments work. 2016-02-08 21:27:49 -08:00
Madhusudan.C.S 518f08aa7c Move Deployments to ReplicaSets and switch the Deployment selector to the new LabelSelector.
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.

Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
2016-02-08 21:27:38 -08:00
k8s-merge-robot e5009bfdcd Merge pull request #19474 from endocode/container_names_in_kubectl_logs
Auto commit by PR queue bot
2016-02-08 14:22:22 -08:00
deads2k 6d71421ae1 eliminate mixed case from RESTMapper 2016-02-08 15:33:31 -05:00
Jan Chaloupka 4389b3f0d6 Rewrite util.* -> wait.* wherever reasonable 2016-02-07 12:02:20 +01:00
k8s-merge-robot b45a94bc78 Merge pull request #20765 from janetkuo/remove-podtemplate-key
Auto commit by PR queue bot
2016-02-06 00:44:47 -08:00