Commit Graph

1136 Commits (839f4f8dd22799e36efa54a209f743d336ed331a)

Author SHA1 Message Date
Tim Hockin 152c86ab06 Make name validators return string slices 2016-05-18 00:48:01 -07:00
k8s-merge-robot 272674b2a6 Merge pull request #25007 from liggitt/third-party-resource-validation
Automatic merge from submit-queue

validate third party resources

addresses validation portion of https://github.com/kubernetes/kubernetes/issues/22768

* ThirdPartyResource: validates name (3-segment DNS subdomain) and version names (single-segment DNS label); see the validation sketch after this entry
* ThirdPartyResourceData: validates objectmeta (name is validated as a DNS label)
* removes the ability to use GenerateName with thirdpartyresources (kind and api group should not be randomized, in my opinion)

test improvements:
* updates resttest to clean up after create tests (so the same valid object can be used)
* updates resttest to take a name generator (in case "foo1" isn't a valid name for the object under test)

action required for alpha thirdpartyresource users:
* existing thirdpartyresource objects that do not match these validation rules will need to be removed/updated (after removing thirdpartyresourcedata objects stored under the disallowed versions, kind, or group names)
* existing thirdpartyresourcedata objects that do not match the name validation rule will not be able to be updated, but can be removed
2016-05-15 03:38:29 -07:00
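The DNS-style name rules referenced in the two entries above (name validators returning string slices, and the ThirdPartyResource name/version validation) boil down to RFC 1123 label and subdomain checks. Below is a minimal, self-contained Go sketch of that kind of check; the function names and error strings are illustrative stand-ins, not the actual Kubernetes validation helpers.

```
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// RFC 1123 DNS label: lowercase alphanumerics and '-', starting and ending with an alphanumeric.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// validateDNSLabel is an illustrative stand-in; returning a slice of error
// strings mirrors the "make name validators return string slices" change above.
func validateDNSLabel(value string) []string {
	var errs []string
	if len(value) > 63 {
		errs = append(errs, "must be no more than 63 characters")
	}
	if !dnsLabel.MatchString(value) {
		errs = append(errs, "must consist of lower case alphanumeric characters or '-', and must start and end with an alphanumeric character")
	}
	return errs
}

// validateDNSSubdomain treats a subdomain as dot-separated DNS labels.
func validateDNSSubdomain(value string) []string {
	var errs []string
	if len(value) > 253 {
		errs = append(errs, "must be no more than 253 characters")
	}
	for _, label := range strings.Split(value, ".") {
		errs = append(errs, validateDNSLabel(label)...)
	}
	return errs
}

func main() {
	fmt.Println(validateDNSSubdomain("foo.example.com")) // no errors
	fmt.Println(validateDNSLabel("v8"))                  // no errors
	fmt.Println(validateDNSLabel("Not_A_Label"))         // validation errors
}
```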
Matt Liggett f5e8d41431 Finish implementing policy API.
Registry implementation and addition to the master.
2016-05-13 17:27:58 -07:00
Clayton Coleman 51b624103f
Change ConvertToVersion to use GroupVersion
Long-delayed refactor; avoids a few more allocations.
2016-05-12 10:10:35 -04:00
k8s-merge-robot 132ebb091a Merge pull request #24459 from fgrzadkowski/unschedulable_pod
Automatic merge from submit-queue

Add pod condition PodScheduled to detect the situation where the scheduler tried to schedule a Pod but failed

Set the `PodScheduled` condition to `ConditionFalse` in `scheduleOne()` if scheduling failed, and to `ConditionTrue` in the `/bind` subresource (a minimal sketch of the condition update follows this entry).

Ref #24404

@mml (as it seems to be related to "why pending" effort)

2016-05-12 05:54:06 -07:00
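A minimal sketch of the condition handling described in the entry above, using simplified local types rather than the real `api` package; the field names and update flow here are illustrative only.

```
package main

import "fmt"

// Simplified stand-ins for the real API types.
type ConditionStatus string

const (
	ConditionTrue  ConditionStatus = "True"
	ConditionFalse ConditionStatus = "False"
)

type PodCondition struct {
	Type   string
	Status ConditionStatus
	Reason string
}

type PodStatus struct {
	Conditions []PodCondition
}

// setPodScheduledCondition records whether the scheduler managed to place the pod,
// updating an existing PodScheduled condition in place or appending a new one.
func setPodScheduledCondition(status *PodStatus, scheduled bool, reason string) {
	cond := PodCondition{Type: "PodScheduled", Status: ConditionTrue}
	if !scheduled {
		cond.Status = ConditionFalse
		cond.Reason = reason
	}
	for i := range status.Conditions {
		if status.Conditions[i].Type == "PodScheduled" {
			status.Conditions[i] = cond
			return
		}
	}
	status.Conditions = append(status.Conditions, cond)
}

func main() {
	var st PodStatus
	setPodScheduledCondition(&st, false, "Unschedulable") // scheduleOne() failed
	setPodScheduledCondition(&st, true, "")               // later, /bind succeeded
	fmt.Printf("%+v\n", st.Conditions)
}
```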
Filip Grzadkowski a80b1798c4 Add pod condition PodScheduled to detect the situation
where the scheduler tried to schedule a Pod but failed.

Ref #24404
2016-05-12 10:21:21 +02:00
Paul Weil 56193b7140 PSP types 2016-05-11 18:07:35 -04:00
Jordan Liggitt 6c323a4f72 Remove name generation from thirdpartyresource 2016-05-09 09:27:54 -04:00
Piotr Szczesniak 212b459817 Move internal types of hpa from pkg/apis/extensions to pkg/apis/autoscaling 2016-05-09 09:18:13 +02:00
k8s-merge-robot 3ee833ca3b Merge pull request #25006 from liggitt/third-party-root-scope
Automatic merge from submit-queue

Make ThirdPartyResource a root scoped object

ThirdPartyResource (the registration of a third party type) belongs at the cluster scope. It results in resource handlers installed in every namespace, and the same name in two namespaces collides (namespace is ignored when determining group/kind).

ThirdPartyResourceData (an actual instance of that type) is still namespace-scoped.

This PR moves ThirdPartyResource to be a root scope object. Someone previously using ThirdPartyResource definitions in alpha should be able to move them from namespace to root scope like this:

setup (run on 1.2):
```
kubectl create ns ns1

echo '{"kind":"ThirdPartyResource","apiVersion":"extensions/v1beta1","metadata":{"name":"foo.example.com"},"versions":[{"name":"v8"}]}' | kubectl create -f - --namespace=ns1

echo '{"kind":"Foo","apiVersion":"example.com/v8","metadata":{"name":"MyFoo"},"testkey":"testvalue"}' | kubectl create -f - --namespace=ns1
```

export:
```
kubectl get thirdpartyresource --all-namespaces -o yaml > tprs.yaml
```

remove namespaced kind registrations (this shouldn't remove the data of that type, which is another possible issue):
```
kubectl delete -f tprs.yaml
```

... upgrade ...

re-register the custom types at the root scope:
```
kubectl create -f tprs.yaml
```

Additionally, pre-1.3 clients that expect to read/write ThirdPartyResource at a namespace scope will not be compatible with 1.3+ servers, and 1.3+ clients that expect to read/write ThirdPartyResource at a root scope will not be compatible with pre-1.3 servers.
2016-05-06 20:50:35 -07:00
Clayton Coleman e0ebcf4216
Split the storage and negotiation parts of Codecs
The codec factory should support two distinct interfaces - negotiating
for a serializer with a client, vs reading or writing data to a storage
form (etcd, disk, etc). Make the EncodeForVersion and DecodeToVersion
methods only take Encoder and Decoder, and slight refactoring elsewhere.

In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
2016-05-05 12:08:23 -04:00
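The split described above separates "what format do I negotiate with a client" from "what format do I write to storage". The sketch below illustrates only the storage-read half of that idea: a decoder that tries several formats so objects stored in older formats (e.g. JSON) stay readable even if new writes use another format. The interfaces and types are invented for the example and are not the real codec factory API.

```
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// Decoder is an illustrative interface; the real codec factory is considerably richer.
type Decoder interface {
	Decode(data []byte, into interface{}) error
}

type jsonDecoder struct{}

func (jsonDecoder) Decode(data []byte, into interface{}) error {
	return json.Unmarshal(data, into)
}

// universalDecoder tries each registered decoder in turn, so storage can read
// objects written in any known format regardless of what it writes today.
type universalDecoder struct{ decoders []Decoder }

func (u universalDecoder) Decode(data []byte, into interface{}) error {
	var lastErr error
	for _, d := range u.decoders {
		err := d.Decode(data, into)
		if err == nil {
			return nil
		}
		lastErr = err
	}
	if lastErr == nil {
		lastErr = errors.New("no decoders registered")
	}
	return lastErr
}

func main() {
	dec := universalDecoder{decoders: []Decoder{jsonDecoder{}}}
	var obj map[string]string
	_ = dec.Decode([]byte(`{"kind":"Pod"}`), &obj)
	fmt.Println(obj["kind"])
}
```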
k8s-merge-robot f5e1e9a227 Merge pull request #24912 from bprashanth/petset_controller
Automatic merge from submit-queue

Petset controller

Took longer than I expected. The main parts of this PR are:
1. Identity generation based on petset spec (volumes are mapped per discussion in #18016)
2. Ensure that we create/delete pets in sequence
3. Ensure that we create, wait for healthy, then create the next; or delete, wait for terminationGrace, then delete the next (see the sketch after this entry)
4. Controller that watches apiserver and drives actual -> desired

PVCs are not deleted yet.
2016-05-05 08:58:23 -07:00
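A very rough sketch of point 3 above (create, wait for healthy, create), under the assumption of hypothetical `create` and `healthy` callbacks; the real controller is watch-driven and requeues rather than sleeping in a loop.

```
package main

import (
	"fmt"
	"time"
)

// Pet is a hypothetical stand-in; its identity is the pet set name plus an ordinal, e.g. "web-0".
type Pet struct {
	Name string
}

// createInOrder creates pets one at a time and waits for each to report healthy
// before creating the next, mirroring the "create, wait for healthy, create" rule.
// The create and healthy callbacks are assumptions standing in for apiserver calls.
func createInOrder(setName string, replicas int, create func(Pet) error, healthy func(string) bool) error {
	for ordinal := 0; ordinal < replicas; ordinal++ {
		pet := Pet{Name: fmt.Sprintf("%s-%d", setName, ordinal)}
		if err := create(pet); err != nil {
			return err
		}
		for !healthy(pet.Name) { // a real controller would requeue on watch events instead of polling
			time.Sleep(time.Second)
		}
	}
	return nil
}

func main() {
	created := map[string]bool{}
	_ = createInOrder("web", 3,
		func(p Pet) error { created[p.Name] = true; return nil },
		func(name string) bool { return created[name] },
	)
	fmt.Println(len(created), "pets created in order")
}
```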
k8s-merge-robot 3faf214506 Merge pull request #24924 from mqliang/pv-prepare-update
Automatic merge from submit-queue

fix PrepareForUpdate bug for PV and PVC
2016-05-05 01:46:21 -07:00
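The PrepareForUpdate fixes referenced above and below follow the usual registry-strategy pattern: an update through the main resource endpoint must not mutate status, so the incoming object's status is reset to the stored one. The types below are simplified stand-ins, not the real `api.PersistentVolume`.

```
package main

import "fmt"

// Illustrative types; the real strategy operates on the API objects.
type PersistentVolumeStatus struct{ Phase string }

type PersistentVolume struct {
	SpecCapacity string
	Status       PersistentVolumeStatus
}

// prepareForUpdate overwrites the incoming object's status with the stored one,
// so spec updates cannot smuggle in status changes.
func prepareForUpdate(obj, old *PersistentVolume) {
	obj.Status = old.Status
}

func main() {
	stored := &PersistentVolume{SpecCapacity: "10Gi", Status: PersistentVolumeStatus{Phase: "Bound"}}
	update := &PersistentVolume{SpecCapacity: "20Gi", Status: PersistentVolumeStatus{Phase: "Available"}}
	prepareForUpdate(update, stored)
	fmt.Println(update.SpecCapacity, update.Status.Phase) // 20Gi Bound
}
```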
Prashanth Balasubramanian 6bc3052551 PetSet alpha controller 2016-05-04 18:39:17 -07:00
mqliang 0109c08b9b fix PrepareForUpdate bug for HPA 2016-05-05 09:39:03 +08:00
Jordan Liggitt e41d504739 Move ThirdPartyResource to root scoped object 2016-04-30 01:06:07 -04:00
Clayton Coleman fdb110c859
Fix the rest of the code 2016-04-29 17:12:10 -04:00
Jordan Liggitt 1e5815872e Validate deletion timestamp doesn't change on update 2016-04-28 11:50:48 -04:00
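A small sketch of the update-validation rule in the commit above, using an illustrative `ObjectMeta` stand-in: the deletion timestamp may not be set, changed, or cleared by a client update.

```
package main

import (
	"fmt"
	"time"
)

// ObjectMeta is an illustrative stand-in for the real metadata type.
type ObjectMeta struct {
	Name              string
	DeletionTimestamp *time.Time
}

// validateDeletionTimestampUnchanged sketches the rule: on update, the
// deletion timestamp must be identical to the stored one.
func validateDeletionTimestampUnchanged(newMeta, oldMeta ObjectMeta) []string {
	var errs []string
	switch {
	case oldMeta.DeletionTimestamp == nil && newMeta.DeletionTimestamp == nil:
		// ok: not being deleted
	case oldMeta.DeletionTimestamp != nil && newMeta.DeletionTimestamp != nil &&
		oldMeta.DeletionTimestamp.Equal(*newMeta.DeletionTimestamp):
		// ok: unchanged
	default:
		errs = append(errs, "metadata.deletionTimestamp: field is immutable")
	}
	return errs
}

func main() {
	now := time.Now()
	old := ObjectMeta{Name: "foo", DeletionTimestamp: &now}
	fmt.Println(validateDeletionTimestampUnchanged(ObjectMeta{Name: "foo"}, old))
}
```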
mqliang 3bcd5b1648 fix PrepareForUpdate bug for PV and PVC 2016-04-28 19:13:02 +08:00
k8s-merge-robot d0b887e4e0 Merge pull request #24595 from zhouhaibing089/httpserverclose
Automatic merge from submit-queue

Uncomment the code that was commented out because of #19254

Fix https://github.com/kubernetes/kubernetes/issues/24546.

@lavalamp
2016-04-28 01:41:16 -07:00
k8s-merge-robot 28bc4b32c2 Merge pull request #24532 from rsc/master
Automatic merge from submit-queue

apiserver latency reductions

Combined effect of these two commits on the latency observed by the 1000-node kubemark benchmark:

```
name                               old ms/op  new ms/op   delta
LIST_nodes_p50                      127 ±16%    121 ± 9%   -4.58%  (p=0.000 n=29+27)
LIST_nodes_p90                      326 ±12%    266 ±12%  -18.48%  (p=0.000 n=29+27)
LIST_nodes_p99                      453 ±11%    400 ±14%  -11.79%  (p=0.000 n=29+28)
LIST_replicationcontrollers_p50    29.4 ±49%   26.2 ±54%     ~     (p=0.085 n=30+29)
LIST_replicationcontrollers_p90    83.0 ±78%   68.6 ±59%  -17.33%  (p=0.013 n=30+28)
LIST_replicationcontrollers_p99     216 ±43%    177 ±49%  -17.68%  (p=0.000 n=29+29)
DELETE_pods_p50                    24.5 ±14%   24.3 ±13%     ~     (p=0.562 n=30+29)
DELETE_pods_p90                    30.7 ± 1%   30.7 ± 1%   -0.30%  (p=0.011 n=29+29)
DELETE_pods_p99                    77.2 ±34%   54.2 ±23%  -29.76%  (p=0.000 n=30+27)
PUT_replicationcontrollers_p50     5.86 ±26%   5.94 ±32%     ~     (p=0.734 n=29+29)
PUT_replicationcontrollers_p90     15.8 ± 7%   15.5 ± 6%   -2.06%  (p=0.010 n=29+29)
PUT_replicationcontrollers_p99     57.8 ±35%   39.5 ±55%  -31.60%  (p=0.000 n=29+29)
PUT_nodes_p50                      14.9 ± 2%   14.8 ± 2%   -0.68%  (p=0.012 n=30+27)
PUT_nodes_p90                      16.5 ± 1%   16.3 ± 2%   -0.90%  (p=0.000 n=27+28)
PUT_nodes_p99                      57.9 ±47%   41.3 ±35%  -28.61%  (p=0.000 n=30+28)
POST_replicationcontrollers_p50    6.35 ±29%   6.34 ±20%     ~     (p=0.944 n=30+28)
POST_replicationcontrollers_p90    15.4 ± 5%   15.0 ± 5%   -2.18%  (p=0.001 n=29+29)
POST_replicationcontrollers_p99    52.2 ±71%   32.9 ±46%  -36.99%  (p=0.000 n=29+27)
POST_pods_p50                      8.99 ±13%   8.95 ±16%     ~     (p=0.903 n=30+29)
POST_pods_p90                      16.2 ± 4%   16.1 ± 4%     ~     (p=0.287 n=29+29)
POST_pods_p99                      30.9 ±21%   26.4 ±12%  -14.73%  (p=0.000 n=28+28)
POST_bindings_p50                  9.34 ±12%   8.92 ±15%   -4.54%  (p=0.013 n=30+28)
POST_bindings_p90                  16.6 ± 1%   16.5 ± 3%   -0.73%  (p=0.017 n=28+29)
POST_bindings_p99                  23.5 ± 9%   21.1 ± 4%  -10.09%  (p=0.000 n=27+28)
PUT_pods_p50                       10.8 ±11%   10.2 ± 5%   -5.47%  (p=0.000 n=30+27)
PUT_pods_p90                       16.1 ± 1%   16.0 ± 1%   -0.64%  (p=0.000 n=29+28)
PUT_pods_p99                       23.4 ± 9%   20.9 ± 9%  -10.93%  (p=0.000 n=28+27)
DELETE_replicationcontrollers_p50  2.42 ±16%   2.50 ±13%     ~     (p=0.054 n=29+28)
DELETE_replicationcontrollers_p90  11.5 ±12%   11.8 ±13%     ~     (p=0.141 n=30+28)
DELETE_replicationcontrollers_p99  19.5 ±21%   19.1 ±21%     ~     (p=0.397 n=29+29)
GET_nodes_p50                      0.77 ±10%   0.76 ±10%     ~     (p=0.317 n=28+28)
GET_nodes_p90                      1.20 ±16%   1.14 ±24%   -4.66%  (p=0.036 n=28+29)
GET_nodes_p99                      11.4 ±48%    7.5 ±46%  -34.28%  (p=0.000 n=28+29)
GET_replicationcontrollers_p50     0.74 ±17%   0.73 ±17%     ~     (p=0.222 n=30+28)
GET_replicationcontrollers_p90     1.04 ±25%   1.01 ±27%     ~     (p=0.231 n=30+29)
GET_replicationcontrollers_p99     12.1 ±81%  10.0 ±145%     ~     (p=0.063 n=28+29)
GET_pods_p50                       0.78 ±12%   0.77 ±10%     ~     (p=0.178 n=30+28)
GET_pods_p90                       1.06 ±19%   1.02 ±19%     ~     (p=0.120 n=29+28)
GET_pods_p99                       3.92 ±43%   2.45 ±38%  -37.55%  (p=0.000 n=27+25)
LIST_services_p50                  0.20 ±13%   0.20 ±16%     ~     (p=0.854 n=28+29)
LIST_services_p90                  0.28 ±15%   0.27 ±14%     ~     (p=0.219 n=29+28)
LIST_services_p99                  0.49 ±20%   0.47 ±24%     ~     (p=0.140 n=29+29)
LIST_endpoints_p50                 0.19 ±14%   0.19 ±15%     ~     (p=0.709 n=29+29)
LIST_endpoints_p90                 0.26 ±16%   0.26 ±13%     ~     (p=0.274 n=29+28)
LIST_endpoints_p99                 0.46 ±24%   0.44 ±21%     ~     (p=0.111 n=29+29)
LIST_horizontalpodautoscalers_p50  0.16 ±15%   0.15 ±13%     ~     (p=0.253 n=30+27)
LIST_horizontalpodautoscalers_p90  0.22 ±24%   0.21 ±16%     ~     (p=0.152 n=30+28)
LIST_horizontalpodautoscalers_p99  0.31 ±33%   0.31 ±38%     ~     (p=0.817 n=28+29)
LIST_daemonsets_p50                0.16 ±20%   0.15 ±11%     ~     (p=0.135 n=30+27)
LIST_daemonsets_p90                0.22 ±18%   0.21 ±25%     ~     (p=0.135 n=29+28)
LIST_daemonsets_p99                0.29 ±28%   0.29 ±32%     ~     (p=0.606 n=28+28)
LIST_jobs_p50                      0.16 ±16%   0.15 ±12%     ~     (p=0.375 n=29+28)
LIST_jobs_p90                      0.22 ±18%   0.21 ±16%     ~     (p=0.090 n=29+26)
LIST_jobs_p99                      0.31 ±28%   0.28 ±35%  -10.29%  (p=0.005 n=29+27)
LIST_deployments_p50               0.15 ±16%   0.15 ±13%     ~     (p=0.565 n=29+28)
LIST_deployments_p90               0.22 ±22%   0.21 ±19%     ~     (p=0.107 n=30+28)
LIST_deployments_p99               0.31 ±27%   0.29 ±34%     ~     (p=0.068 n=29+28)
LIST_namespaces_p50                0.21 ±25%   0.21 ±26%     ~     (p=0.768 n=29+27)
LIST_namespaces_p90                0.28 ±29%   0.26 ±25%     ~     (p=0.101 n=30+28)
LIST_namespaces_p99                0.30 ±48%   0.29 ±42%     ~     (p=0.339 n=30+29)
LIST_replicasets_p50               0.15 ±18%   0.15 ±16%     ~     (p=0.612 n=30+28)
LIST_replicasets_p90               0.22 ±19%   0.21 ±18%   -5.13%  (p=0.011 n=28+27)
LIST_replicasets_p99               0.31 ±39%   0.28 ±29%     ~     (p=0.066 n=29+28)
LIST_persistentvolumes_p50         0.16 ±23%   0.15 ±21%     ~     (p=0.124 n=30+29)
LIST_persistentvolumes_p90         0.21 ±23%   0.20 ±23%     ~     (p=0.092 n=30+25)
LIST_persistentvolumes_p99         0.21 ±24%   0.20 ±23%     ~     (p=0.053 n=30+25)
LIST_resourcequotas_p50            0.16 ±12%   0.16 ±13%     ~     (p=0.175 n=27+28)
LIST_resourcequotas_p90            0.20 ±22%   0.20 ±24%     ~     (p=0.388 n=30+28)
LIST_resourcequotas_p99            0.22 ±24%   0.22 ±23%     ~     (p=0.575 n=30+28)
LIST_persistentvolumeclaims_p50    0.15 ±21%   0.15 ±29%     ~     (p=0.079 n=30+28)
LIST_persistentvolumeclaims_p90    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_persistentvolumeclaims_p99    0.19 ±26%   0.18 ±34%     ~     (p=0.446 n=29+29)
LIST_pods_p50                      68.0 ±16%   56.3 ± 9%  -17.19%  (p=0.000 n=29+28)
LIST_pods_p90                       119 ±19%     93 ± 8%  -21.88%  (p=0.000 n=28+28)
LIST_pods_p99                       230 ±18%    202 ±14%  -12.13%  (p=0.000 n=27+28)
```
2016-04-27 08:32:18 -07:00
Timothy St. Clair 24b4286960 In preparation for new storage backends, rename the generic registry store 2016-04-26 08:32:13 -05:00
k8s-merge-robot 293b0d0815 Merge pull request #23493 from soltysh/move_job_internals
Automatic merge from submit-queue

Move internal types of job from pkg/apis/extensions to pkg/apis/batch

This addresses the job part of #23216; it is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.

@lavalamp @erictune fyi
2016-04-25 20:58:49 -07:00
zhouhaibing089 bf1a3f99c0 Uncomment the code that was commented out because of #19254 2016-04-25 23:21:31 +08:00
Maciej Szulik a3b4447305 Move internal types of job from pkg/apis/extensions to pkg/apis/batch 2016-04-25 11:03:54 +02:00
Clayton Coleman 3111985564 Handle streaming serializers more consistently
Add tests to watch behavior in both protocols (http and websocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
2016-04-22 11:07:24 -04:00
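A toy illustration of the JSON half of the framing mentioned above: each watch event is written as one newline-delimited JSON document, and the reader splits on newlines to recover events. The real implementation also length-prefixes protobuf frames and negotiates the `application/vnd.kubernetes.protobuf;stream=watch` media type; the `event` type here is an invented stand-in for the versioned watch event.

```
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
)

// event is an illustrative stand-in for a versioned watch event.
type event struct {
	Type   string          `json:"type"`
	Object json.RawMessage `json:"object"`
}

func main() {
	// Writer side: each event is one JSON document; the trailing newline is the frame boundary.
	var stream bytes.Buffer
	enc := json.NewEncoder(&stream) // Encode appends '\n' after every document
	enc.Encode(event{Type: "ADDED", Object: json.RawMessage(`{"kind":"Pod","metadata":{"name":"a"}}`)})
	enc.Encode(event{Type: "MODIFIED", Object: json.RawMessage(`{"kind":"Pod","metadata":{"name":"a"}}`)})

	// Reader side: split on newlines to recover one event per frame.
	scanner := bufio.NewScanner(&stream)
	for scanner.Scan() {
		var ev event
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			fmt.Println("bad frame:", err)
			continue
		}
		fmt.Println(ev.Type)
	}
}
```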
Russ Cox 58629a28e4 pkg/registry/pod: avoid allocation in common pod search
PodToSelectableFields creates a map of field attributes
for a particular pod filter query to use. If the result
of the query does not depend on the fields at all, avoid
creating the map.

This is the source of about half the allocated memory
(by byte volume) during the kubemark benchmark, and it
is in turn the main driver of CPU usage during the benchmark,
because of the many background pod watches going on,
as well as the occasional list pods.

These benchmarks for 1000-node kubemark show the difference
from my previous CL (caching timers) to this CL:

name                               old ms/op   new ms/op   delta
LIST_nodes_p50                       124 ±13%    121 ± 9%     ~     (p=0.136 n=29+27)
LIST_nodes_p90                       278 ±15%    266 ±12%   -4.26%  (p=0.031 n=29+27)
LIST_nodes_p99                       405 ±19%    400 ±14%     ~     (p=0.864 n=28+28)
LIST_pods_p50                       65.3 ±13%   56.3 ± 9%  -13.75%  (p=0.000 n=29+28)
LIST_pods_p90                        115 ±12%     93 ± 8%  -18.75%  (p=0.000 n=27+28)
LIST_pods_p99                        226 ±21%    202 ±14%  -10.52%  (p=0.000 n=28+28)
LIST_replicationcontrollers_p50     26.6 ±43%   26.2 ±54%     ~     (p=0.487 n=29+29)
LIST_replicationcontrollers_p90     68.7 ±63%   68.6 ±59%     ~     (p=0.931 n=29+28)
LIST_replicationcontrollers_p99      173 ±41%    177 ±49%     ~     (p=0.618 n=28+29)
PUT_replicationcontrollers_p50      5.83 ±36%   5.94 ±32%     ~     (p=0.818 n=28+29)
PUT_replicationcontrollers_p90      15.9 ± 6%   15.5 ± 6%   -2.23%  (p=0.019 n=28+29)
PUT_replicationcontrollers_p99      56.7 ±41%   39.5 ±55%  -30.29%  (p=0.000 n=28+29)
DELETE_pods_p50                     24.3 ±17%   24.3 ±13%     ~     (p=0.855 n=28+29)
DELETE_pods_p90                     30.6 ± 0%   30.7 ± 1%     ~     (p=0.140 n=28+29)
DELETE_pods_p99                     56.3 ±27%   54.2 ±23%     ~     (p=0.188 n=28+27)
PUT_nodes_p50                       14.9 ± 1%   14.8 ± 2%     ~     (p=0.781 n=28+27)
PUT_nodes_p90                       16.4 ± 2%   16.3 ± 2%     ~     (p=0.321 n=28+28)
PUT_nodes_p99                       44.6 ±42%   41.3 ±35%     ~     (p=0.361 n=29+28)
POST_replicationcontrollers_p50     6.33 ±23%   6.34 ±20%     ~     (p=0.993 n=28+28)
POST_replicationcontrollers_p90     15.2 ± 6%   15.0 ± 5%     ~     (p=0.106 n=28+29)
POST_replicationcontrollers_p99     53.4 ±52%   32.9 ±46%  -38.41%  (p=0.000 n=27+27)
POST_pods_p50                       9.33 ±13%   8.95 ±16%     ~     (p=0.069 n=29+29)
POST_pods_p90                       16.3 ± 4%   16.1 ± 4%   -1.43%  (p=0.044 n=29+29)
POST_pods_p99                       28.4 ±23%   26.4 ±12%   -7.05%  (p=0.004 n=29+28)
DELETE_replicationcontrollers_p50   2.50 ±13%   2.50 ±13%     ~     (p=0.649 n=29+28)
DELETE_replicationcontrollers_p90   11.7 ±10%   11.8 ±13%     ~     (p=0.863 n=28+28)
DELETE_replicationcontrollers_p99   19.0 ±22%   19.1 ±21%     ~     (p=0.818 n=28+29)
PUT_pods_p50                        10.3 ± 5%   10.2 ± 5%     ~     (p=0.235 n=28+27)
PUT_pods_p90                        16.0 ± 1%   16.0 ± 1%     ~     (p=0.380 n=29+28)
PUT_pods_p99                        21.6 ±14%   20.9 ± 9%   -3.15%  (p=0.010 n=28+27)
POST_bindings_p50                   8.98 ±17%   8.92 ±15%     ~     (p=0.666 n=29+28)
POST_bindings_p90                   16.5 ± 2%   16.5 ± 3%     ~     (p=0.840 n=26+29)
POST_bindings_p99                   21.4 ± 5%   21.1 ± 4%   -1.21%  (p=0.049 n=27+28)
GET_nodes_p90                       1.18 ±19%   1.14 ±24%     ~     (p=0.137 n=29+29)
GET_nodes_p99                       8.29 ±40%   7.50 ±46%     ~     (p=0.106 n=28+29)
GET_replicationcontrollers_p90      1.03 ±21%   1.01 ±27%     ~     (p=0.489 n=29+29)
GET_replicationcontrollers_p99     10.0 ±123%  10.0 ±145%     ~     (p=0.794 n=28+29)
GET_pods_p90                        1.08 ±21%   1.02 ±19%     ~     (p=0.083 n=29+28)
GET_pods_p99                        2.81 ±39%   2.45 ±38%  -12.78%  (p=0.021 n=28+25)

Overall, the two CLs combined have the effect shown in the benchmark table quoted above in the merge commit for PR #24532.
2016-04-21 15:53:47 -04:00
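The optimization described in the commit above amounts to short-circuiting before the attribute map is built when the selector matches everything. A simplified sketch, with stand-in types instead of the real field selector and pod types:

```
package main

import "fmt"

// fieldSelector is a minimal stand-in; an empty selector means "match everything".
type fieldSelector map[string]string

type pod struct {
	Name, Namespace, NodeName, Phase string
}

// podToSelectableFields builds the attribute map a field selector is matched against.
func podToSelectableFields(p *pod) map[string]string {
	return map[string]string{
		"metadata.name":      p.Name,
		"metadata.namespace": p.Namespace,
		"spec.nodeName":      p.NodeName,
		"status.phase":       p.Phase,
	}
}

// matchesPod avoids allocating the field map entirely when the selector is empty,
// which is the common case for watches and lists without field selectors.
func matchesPod(sel fieldSelector, p *pod) bool {
	if len(sel) == 0 {
		return true // no fields consulted, no map allocated
	}
	attrs := podToSelectableFields(p)
	for k, v := range sel {
		if attrs[k] != v {
			return false
		}
	}
	return true
}

func main() {
	p := &pod{Name: "a", Namespace: "default", NodeName: "node-1", Phase: "Running"}
	fmt.Println(matchesPod(fieldSelector{}, p))                     // true, no allocation
	fmt.Println(matchesPod(fieldSelector{"spec.nodeName": "x"}, p)) // false
}
```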
Prashanth Balasubramanian 0ac10c6cc2 PetSet type, apps apigroup 2016-04-20 18:49:31 -07:00
Clayton Coleman a5ff573263 ThirdPartyResourceCodec should implement streaming.Framer
Wrappers must proxy NewFrameReader|Writer for now (until we potentially
refactor the codec factory to separate them).
2016-04-18 21:24:26 -04:00
k8s-merge-robot 2bf52175f9 Merge pull request #23923 from hongchaodeng/exp
Automatic merge from submit-queue

Decouple etcd node.expiration logic from DeletionTimestamp

ref: https://github.com/kubernetes/kubernetes/issues/23902
2016-04-17 04:12:26 -07:00
k8s-merge-robot a275a045d1 Merge pull request #23914 from sky-uk/make-etcd-cache-size-configurable
Automatic merge from submit-queue

Make etcd cache size configurable

Instead of the prior 50K limit, allow users to specify a more sensible size for their cluster.

I'm not sure what a sensible default is here. I'm still experimenting on my own clusters. 50 gives me a 270MB max footprint. 50K caused my apiserver to run out of memory as it exceeded 2GB. I believe that number is far too large for most people's use cases.

There are some other fundamental issues that I'm not addressing here:
- Old etcd items are cached and potentially never removed (it stores using modifiedIndex, and doesn't remove the old object when it gets updated)
- Cache isn't LRU, so there's no guarantee the cache remains hot. This makes its performance difficult to predict. More of an issue with a smaller cache size.
- 1.2 etcd entries seem to have a larger memory footprint (I never had an issue in 1.1, even though this cache existed there). I suspect that's due to image lists on the node status.

This is provided as a fix for #23323
2016-04-17 00:06:31 -07:00
Hongchao Deng b9745999c9 Decouple etcd node.expiration logic from DeletionTimestamp 2016-04-13 15:11:53 -07:00
Daniel Smith 4c539bf082 Merge pull request #23490 from wojtek-t/remove_set_from_storage_interface
Remove Set() from storage.Interface.
2016-04-13 14:22:05 -07:00
k8s-merge-robot f5e8e7453b Merge pull request #23806 from smarterclayton/streaming_watch
Automatic merge from submit-queue

Implement a streaming serializer for watch

Changeover watch to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).

@wojtek-t @lavalamp
2016-04-13 05:18:17 -07:00
k8s-merge-robot acf9492cb1 Merge pull request #23660 from goltermann/vetclean
Automatic merge from submit-queue

Additional go vet fixes

Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
2016-04-12 06:22:16 -07:00
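For context on the "pass lock by value" class of vet errors fixed in the entry above: copying a struct that embeds a `sync.Mutex` copies the lock, so the copies no longer synchronize with each other. A minimal example of the pattern and its fix:

```
package main

import (
	"fmt"
	"sync"
)

// counter embeds a mutex, so copying a counter copies the lock.
type counter struct {
	mu sync.Mutex
	n  int
}

// Bad: a value receiver copies c (and its mutex) on every call; `go vet`
// flags this as "passes lock by value", and the lock protects nothing.
// func (c counter) IncBad() { c.mu.Lock(); c.n++; c.mu.Unlock() }

// Good: pointer receiver, so all callers share the same mutex.
func (c *counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.Inc() }()
	}
	wg.Wait()
	fmt.Println(c.n) // 100
}
```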
James Ravn 5bb0595260 Make deserialization cache size configurable
Instead of the default 50K entries, allow users to specify more sensible
sizes for their cluster.
2016-04-12 13:42:27 +01:00
Clayton Coleman 3474911736 Implement a streaming serializer for watch
Changeover watch to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
2016-04-11 11:22:05 -04:00
goltermann 696423e044 Vet fixes, mostly pass lock by value errors. 2016-04-06 11:27:40 -07:00
Wojciech Tyczynski 53f433f019 Remove Set() from storage.Interface. 2016-04-04 17:54:18 +02:00
k8s-merge-robot f5c93c8ddc Merge pull request #23472 from wojtek-t/fix_object_meta_for
Automatic merge from submit-queue

Switch from api.ObjectMetaFor to meta.Accessor in most of places

Fix #23278

@smarterclayton @lavalamp
2016-04-02 02:33:40 -07:00
Brendan Burns be6c5b332b Add third party support to kubectl 2016-03-31 10:53:32 -07:00
Wojciech Tyczynski 2699be2e7e Switch api.ObjectMetaFor to meta.Accessor 2016-03-31 17:52:31 +02:00
Tommy Murphy 4d22c2fd6a IngressTLS: allow secretName to be blank for SNI routing 2016-03-28 21:25:54 -04:00
k8s-merge-robot 95e09e303f Merge pull request #22965 from caesarxuchao/delete-UID-precondition
Auto commit by PR queue bot
2016-03-26 09:36:28 -07:00
goltermann 32d569d6c7 Fixing all the "composite literal uses unkeyed fields" Vet errors. 2016-03-25 15:25:09 -07:00
Chao Xu 31b425b3a1 add delete precondition 2016-03-25 11:21:39 -07:00
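A sketch of what a UID delete precondition does, using invented types rather than the real storage interfaces: if the caller supplies a UID, the delete only proceeds when it matches the stored object's UID, which prevents accidentally deleting a recreated object of the same name.

```
package main

import "fmt"

// Illustrative types; the real preconditions live in the storage layer and
// surface as a conflict error through the API machinery.
type object struct {
	Name string
	UID  string
}

type preconditions struct {
	UID *string // if set, the delete only proceeds when the stored UID matches
}

func checkDeletePreconditions(stored object, pre *preconditions) error {
	if pre == nil || pre.UID == nil {
		return nil
	}
	if stored.UID != *pre.UID {
		return fmt.Errorf("precondition failed: UID in precondition %q, UID in object meta %q", *pre.UID, stored.UID)
	}
	return nil
}

func main() {
	stored := object{Name: "foo", UID: "abc-123"}
	staleUID := "old-456"
	if err := checkDeletePreconditions(stored, &preconditions{UID: &staleUID}); err != nil {
		fmt.Println(err)
	}
}
```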
k8s-merge-robot 4e4ad61260 Merge pull request #23366 from goltermann/vet
Auto commit by PR queue bot
2016-03-24 21:50:56 -07:00
k8s-merge-robot 2777cd7e75 Merge pull request #23295 from hongchaodeng/error
Auto commit by PR queue bot
2016-03-23 02:27:36 -07:00
goltermann 34d4eaea08 Fixing several (but not all) go vet errors. Most are around string formatting or unreachable code. 2016-03-22 17:26:50 -07:00
Hongchao Deng 189ce6e397 storage: add custom storage error 2016-03-22 08:19:16 -07:00