Commit Graph

2603 Commits (4383c7d6482b11f3befc5393134bc6cb120bf2f5)

Author SHA1 Message Date
Jan Chaloupka 3cc15363bc Run make update 2018-06-06 00:12:40 +02:00
Jan Chaloupka ab616a88b9 Promote sysctl annotations to API fields 2018-06-05 23:17:00 +02:00
Kubernetes Submit Queue c178c7fd65
Merge pull request #62005 from mikedanese/svcacctproj
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

implement ServiceAccountTokenProjection

design here: https://github.com/kubernetes/community/pull/1973

part of https://github.com/kubernetes/kubernetes/pull/61858

```release-note
Add a volume projection that is able to project service account tokens.
```

part of https://github.com/kubernetes/kubernetes/issues/48408

@kubernetes/sig-auth-pr-reviews @kubernetes/sig-storage-pr-reviews
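
For reviewers trying this out, a minimal sketch of a pod consuming the new projection (pod, volume, audience, and expiration values below are illustrative, not from this PR):

```
apiVersion: v1
kind: Pod
metadata:
  name: token-client
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token           # file name under the mount path (illustrative)
          audience: vault          # intended token audience (illustrative)
          expirationSeconds: 7200  # requested token lifetime (illustrative)
```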
2018-06-05 09:30:56 -07:00
Kubernetes Submit Queue 0647cff9ff
Merge pull request #64386 from andyzhangx/azuredisk-sizegrow
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add azuredisk PV size grow feature

**What this PR does / why we need it**:
According to kubernetes/features#284, add the size grow feature for azure disk

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #56463

**Special notes for your reviewer**:
 - This feature is only for Azure managed disks; if the disk is already attached to a running VM, the resize will fail as follows:
```
$ kubectl describe pvc pvc-azuredisk
Events:
  Type     Reason              Age               From           Message
  ----     ------              ----              ----           -------
  Warning  VolumeResizeFailed  51s (x3 over 3m)  volume_expand  Error expanding volume "default/pvc-azuredisk" of plugin kubernetes.io/azure-disk : disk.DisksClient#CreateOrUpdate: Failure responding to request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=409 Code="OperationNotAllowed" Message="Cannot resize disk andy-mg1102-dynamic-pvc-d2d00dd9-6185-11e8-a6c3-000d3a0643a8 while it is attached to running VM /subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/k8s-agentpool-17607330-0."
```

**How to use this feature**
 - Run `kubectl edit pvc pvc-azuredisk` to change the azuredisk PVC size from 6Gi to 10Gi:
```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
...
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
  creationTimestamp: 2018-05-27T08:13:23Z
  finalizers:
  - kubernetes.io/pvc-protection
  name: pvc-azuredisk
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: hdd
  volumeMode: Filesystem
  volumeName: pvc-d2d00dd9-6185-11e8-a6c3-000d3a0643a8
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 6Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-05-27T08:14:34Z
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound
```

 - After the resize, `/mnt/disk` still shows the original 6GB size:
```
$ kubectl exec -it nginx-azuredisk -- bash
# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdf        5.8G   15M  5.5G   1% /mnt/disk
...
```

 - After the user runs `sudo resize2fs /dev/sdf` on the agent node, `/mnt/disk` becomes 10GB:
```
$ kubectl exec -it nginx-azuredisk -- bash
# df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/sdf        9.8G   16M  9.3G   1% /mnt/disk
...
```
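
Note that the resize request is only accepted when the PVC's StorageClass allows expansion; a minimal sketch of such a class (class name and parameters are illustrative):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hdd
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS
  kind: Managed               # this feature only applies to managed disks
allowVolumeExpansion: true    # required for PVC size increases to be honored
```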

**Release note**:

```
Add azuredisk size grow feature
```

/sig azure
/assign @feiskyer @karataliu @gnufied 
cc @khenidak
2018-06-05 00:02:34 -07:00
Mike Danese 91feb345aa implement service account token projection 2018-06-04 17:22:08 -07:00
Kubernetes Submit Queue 46d2b47156
Merge pull request #57963 from vikaschoudhary16/priorityclass
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Introduce priority class in the resource quota

**What this PR does / why we need it**:
Implements https://github.com/kubernetes/community/pull/933
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #48648 

**Special notes for your reviewer**:
Test cases are still to be covered; opening this PR now to make discussion convenient with code references.
Test cases will be updated once the design PR has merged.

**Release note**:

```release-note
Ability to quota resources by priority
```
/kind feature
/priority important-soon
/sig scheduling
/sig node
/cc @resouer @derekwaynecarr @sjenning @bsalamat @timstclair @aveshagarwal @ravisantoshgudimetla
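
As a sketch of how quota-by-priority is expected to surface (names, limits, and the exact field shape are assumptions based on the design proposal, not this PR's final API):

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: high-priority-quota
spec:
  hard:
    pods: "10"                  # illustrative limit
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]          # illustrative priority class name
```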
2018-06-04 15:11:00 -07:00
vikaschoudhary16 3cfe6412c7 Introduce priority class in the resource quota 2018-06-04 16:14:54 -04:00
Cao Shufeng 241422879d Log policy name from pod security policy 2018-06-04 19:24:25 +08:00
andyzhangx 880b7a3bda azuredisk size grow feature
fix comments

fix comments
2018-06-03 13:55:49 +00:00
Kubernetes Submit Queue a1c8d3f5f3
Merge pull request #64403 from jsafrane/aws-read-only-attach
Automatic merge from submit-queue (batch tested with PRs 57082, 64325, 64016, 64443, 64403). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Allow AWS EBS volumes to be attached as ReadOnly.

**Which issue(s) this PR fixes**
Fixes #64402

**Special notes for your reviewer**:
This follows logic e.g. in Cinder volume plugin.

**Release note**:

```release-note
AWS EBS volumes can be now used as ReadOnly in pods.
```

/sig storage
/sig aws
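
A minimal sketch of a pod mounting an EBS volume read-only after this change (volume ID and names are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: ebs-reader
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0  # illustrative volume ID
      fsType: ext4
      readOnly: true                   # attach the volume as ReadOnly
```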
2018-05-30 18:49:23 -07:00
Minhan Xia 9fe2c53624 include patch permission for kubelets 2018-05-30 11:15:47 -07:00
Jan Safranek 8ff0fff065 Allow AWS EBS volumes to be attached as ReadOnly. 2018-05-28 16:24:19 +02:00
Kubernetes Submit Queue 9872a0502b
Merge pull request #64288 from gnufied/take-volume-resize-beta
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Move volume resize feature to beta

Move volume resizing feature to beta. 

xref https://github.com/kubernetes/features/issues/284

```release-note
Move Volume expansion to Beta
```
2018-05-26 01:34:17 -07:00
Hemant Kumar 0dd6e75567 Move volume resizing to beta
Update bootstrap policies
2018-05-25 15:32:38 -04:00
Kubernetes Submit Queue a8cf18c0ae
Merge pull request #63232 from lichuqiang/provision_plumbing
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Volume topology aware dynamic provisioning: basic plumbing

**What this PR does / why we need it**:

Split PR https://github.com/kubernetes/kubernetes/pull/63193 for better review
part 1: basic scheduler and controller plumbing

Next: https://github.com/kubernetes/kubernetes/pull/63233

**Which issue(s) this PR fixes** 
Feature: https://github.com/kubernetes/features/issues/561
Design: https://github.com/kubernetes/community/issues/2168

**Special notes for your reviewer**:
/sig storage
/sig scheduling
/assign @msau42 @jsafrane @saad-ali @bsalamat


**Release note**:

```release-note
Basic plumbing for volume topology aware dynamic provisioning
```
2018-05-25 07:58:53 -07:00
lichuqiang 95b530366a Add dynamic provisioning process 2018-05-24 17:12:38 +08:00
xuzhonghu 5caf141650 resourcequota return StatusError when timeout 2018-05-24 16:35:19 +08:00
David Eads 092714ea0f switch rbac to external 2018-05-22 08:17:05 -04:00
Kubernetes Submit Queue f86ec3f764
Merge pull request #63992 from mikedanese/owners
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add mikedanese as an approver in various auth related directories

matching the [subprojects](https://docs.google.com/document/d/1RJvnSPOJ3JC61gerCpCpaCtzQjRcsZ2tXkcyokr6sLY/edit) I work on.



```release-note
NONE
```
2018-05-17 15:47:33 -07:00
Mike Danese f39ec8b333 add myself as an approver in various auth related directories
matching the subprojects I work on:

https://docs.google.com/document/d/1RJvnSPOJ3JC61gerCpCpaCtzQjRcsZ2tXkcyokr6sLY/edit
2018-05-17 11:32:37 -07:00
Kubernetes Submit Queue b3837d004a
Merge pull request #63469 from wojtek-t/allow_list_and_watch_secrets
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Allow for listing & watching individual secrets from nodes

This PR:
- propagates value of `metadata.name` field from fieldSelector to `name` field in RequestInfo (for list and watch requests)
- authorizes list/watch for requests for single secrets/configmaps coming from nodes

As an example:
```
/api/v1/namespaces/ns/secrets?fieldSelector=metadata.name=foo =>
  requestInfo.Name = "foo",
  requestInfo.Verb = "list"
/api/v1/namespaces/ns/secrets?fieldSelector=metadata.name=foo&watch=true =>
  requestInfo.Name = "foo",
  requestInfo.Verb = "watch"
```
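
From the client side, the equivalent single-object watch can be issued with a field selector; a hedged kubectl equivalent (namespace and secret name are illustrative):

```
$ kubectl get secrets -n ns --field-selector metadata.name=foo --watch
```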

```release-note
list/watch API requests with a fieldSelector that specifies `metadata.name` can now be authorized as requests for an individual named resource
```
2018-05-17 07:09:43 -07:00
Jordan Liggitt 15bcfd5e00
Prevent nodes from updating taints 2018-05-15 13:54:33 -04:00
wojtekt b2500d41e9 Fix bootstrap roles to allow list/watch secrets/configmaps from nodes 2018-05-15 14:19:21 +02:00
wojtekt f344c5c062 Requires single name for list and watch 2018-05-15 14:19:21 +02:00
Jordan Liggitt 736f5e2349
Revert "authz: nodes should not be able to delete themselves"
This reverts commit 35de82094a.
2018-05-11 09:37:21 -04:00
Jordan Liggitt 8161033be4
Make node restriction admission pod lookups use an informer 2018-05-10 07:53:46 -04:00
Kubernetes Submit Queue b2fe2a0a6d
Merge pull request #59847 from mtaufen/dkcfg-explicit-keys
Automatic merge from submit-queue (batch tested with PRs 63624, 59847). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

explicit kubelet config key in Node.Spec.ConfigSource.ConfigMap

This makes the Kubelet config key in the ConfigMap an explicit part of
the API, so we can stop using magic key names.
    
As part of this change, we are retiring ConfigMapRef for ConfigMap.
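
A minimal sketch of the resulting Node spec stanza (ConfigMap name and namespace are illustrative):

```
spec:
  configSource:
    configMap:
      name: my-node-config       # illustrative ConfigMap name
      namespace: kube-system
      kubeletConfigKey: kubelet  # key in the ConfigMap that holds the config file
```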


```release-note
You must now specify Node.Spec.ConfigSource.ConfigMap.KubeletConfigKey when using dynamic Kubelet config to tell the Kubelet which key of the ConfigMap identifies its config file.
```
2018-05-09 17:55:13 -07:00
Michael Taufen c41cf55a2c explicit kubelet config key in Node.Spec.ConfigSource.ConfigMap
This makes the Kubelet config key in the ConfigMap an explicit part of
the API, so we can stop using magic key names.

As part of this change, we are retiring ConfigMapRef for ConfigMap.
2018-05-08 15:37:26 -07:00
David Eads c5445d3c56 simplify api registration 2018-05-08 18:33:50 -04:00
David Eads 7b4f97aca3 generated 2018-05-08 18:32:44 -04:00
Slava Semushin f49a0fbd5f Replace UserIDRange/GroupIDRange with IDRange in the internal type to reduce the difference from the external type.
We had IDRange in both types prior to commit 9440a68744, which split it
into UserIDRange/GroupIDRange. Later, in commit c91a12d205, we had to
revert those changes because they broke backward compatibility, but the
UserIDRange/GroupIDRange structs were left in the internal type.

This commit removes these leftovers and reduces the differences
between internal and external types.
2018-05-04 18:31:42 +02:00
David Eads 1f4f22f72d don't block creation on lack of delete powers 2018-05-03 12:04:04 -04:00
Kubernetes Submit Queue b5f61ac129
Merge pull request #62657 from matthyx/master
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update all script shebangs to use /usr/bin/env interpreter instead of /bin/interpreter

This is required to support systems where bash doesn't reside in /bin (such as NixOS, or the *BSD family) and allows users to select a different interpreter version through $PATH manipulation.
https://www.cyberciti.biz/tips/finding-bash-perl-python-portably-using-env.html
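The change itself is mechanical; for example:
```
#!/bin/bash          # before: assumes bash lives in /bin
#!/usr/bin/env bash  # after: resolves bash via $PATH
```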
```release-note
Use /usr/bin/env in all script shebangs to increase portability.
```
2018-05-02 19:44:32 -07:00
Jordan Liggitt ff8cdabfd4
Maintain index of high-cardinality edges in node authorizer graph 2018-05-02 16:05:28 -04:00
Jordan Liggitt ad7d5505b9
clean up vertex/edge deletion 2018-05-02 15:39:50 -04:00
David Eads 9a48066749 update restmapping to indicate fully qualified resource 2018-05-01 16:34:49 -04:00
David Eads ef0d1ab819 remove incorrect static restmapper 2018-05-01 07:51:17 -04:00
Kubernetes Submit Queue 2716de27b1
Merge pull request #56568 from zouyee/sync
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

sync code from copy destination

**What this PR does / why we need it**:
sync code from copy destination

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:


**Special notes for your reviewer**:

**Release note**:

```
NONE

```
2018-04-28 18:26:38 -07:00
Kubernetes Submit Queue 55f17933f5
Merge pull request #60741 from zlabjp/optional-subjects
Automatic merge from submit-queue (batch tested with PRs 60890, 63244, 60741, 63254). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Indicate clusterrolebinding, rolebinding subjects are optional fields

**What this PR does / why we need it**: With this PR, clusterrolebinding and rolebinding subjects are marked optional instead of required. Currently we cannot create a clusterrolebinding or rolebinding whose subjects are empty using `kubectl create/apply/replace -f`.

```
$ kubectl create rolebinding test --clusterrole view
rolebinding "test" created
$ kubectl get rolebinding test -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: 2018-03-02T06:58:16Z
  name: test
  namespace: default
  resourceVersion: "5606612"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/default/rolebindings/test
  uid: 155c5c29-1de7-11e8-9f6f-fa163ec89f2a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects: null
$ kubectl get rolebinding test -o yaml | kubectl replace -f -
error: error validating "STDIN": error validating data: ValidationError(RoleBinding): missing required field "subjects" in io.k8s.api.rbac.v1.RoleBinding; if you choose to ignore these errors, turn validation off with --validate=false
```
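
With subjects optional, a manifest like the following (binding name is illustrative) should round-trip through `kubectl replace -f` cleanly:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
# subjects intentionally omitted; they can be filled in later
```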

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**: This is a same issue with https://github.com/kubernetes/kubernetes/issues/59403. /cc @liggitt 

**Release note**:

```release-note
NONE
```
2018-04-27 17:43:11 -07:00
Kubernetes Submit Queue dd5f030b02
Merge pull request #63165 from deads2k/api-08-kubeapiversion
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove KUBE_API_VERSIONS

Fixes https://github.com/kubernetes/kubernetes/issues/63102

KUBE_API_VERSIONS is an attempt to control the available serialization of types. It pre-dates the idea that we'll have separate schemes, so it's not a thing that makes sense anymore.

Server-side, the logs have carried a very clear warning about breakage for a year: "KUBE_API_VERSIONS is only for testing. Things will break."

Client-side it became progressively more broken as we moved to generic types for CRUD more than a year ago. What is registered doesn't matter when everything is unstructured.

We should remove this piece of legacy since it doesn't behave predictably server-side or client-side.

@smarterclayton @lavalamp
@kubernetes/sig-api-machinery-bugs 

```release-note
KUBE_API_VERSIONS is no longer respected.  It was used for testing, but runtime-config is the proper flag to set.
```
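
For anyone relying on the old variable, the supported path is the apiserver's `--runtime-config` flag; a hedged example (the group/version chosen is illustrative):

```
$ kube-apiserver --runtime-config=batch/v2alpha1=true ...
```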
2018-04-26 08:22:36 -07:00
David Eads a68c57155e remove KUBE_API_VERSIONS 2018-04-26 08:27:49 -04:00
Kubernetes Submit Queue becee4c12e
Merge pull request #59367 from colemickens/ptr-flake
Automatic merge from submit-queue (batch tested with PRs 59367, 60007). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

podtolerationrestriction: fix informer race in test

**What this PR does / why we need it**: This fixes test flakes in the PodTolerationRestriction admission controller unit tests. They seem to pass most of the time currently, but modifications I was making for #58818 changed timing and caused them to break constantly.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: n/a

**Special notes for your reviewer**: n/a

Sending this as a one-off because the changes for both of the admission controllers in #58818 require additional discussion. Thanks to @ericchiang for finding it and authoring the commit; I just rebased and sent the PR.

```release-note
NONE
```
2018-04-26 00:43:07 -07:00
David Eads e931158128 generated 2018-04-25 09:02:32 -04:00
David Eads e7fbbe0e3c eliminate indirection from type registration 2018-04-25 09:02:31 -04:00
Kubernetes Submit Queue 15b61bc006
Merge pull request #62818 from mikedanese/selfdelete
Automatic merge from submit-queue (batch tested with PRs 62590, 62818, 63015, 62922, 63000). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

authz: nodes should not be able to delete themselves

@kubernetes/sig-auth-pr-reviews 

```release-note
kubelets are no longer allowed to delete their own Node API object. Prior to 1.11, in rare circumstances related to cloudprovider node ID changes, kubelets would attempt to delete/recreate their Node object at startup. If a legacy kubelet encounters this situation, a cluster admin can remove the Node object:
* `kubectl delete node/<nodeName>`
or grant self-deletion permission explicitly:
* `kubectl create clusterrole self-deleting-nodes --verb=delete --resource=nodes`
* `kubectl create clusterrolebinding self-deleting-nodes --clusterrole=self-deleting-nodes --group=system:nodes`
```
2018-04-24 14:22:13 -07:00
Kubernetes Submit Queue f0b207df2d
Merge pull request #62856 from liggitt/node-authorizer-contention-benchmark
Automatic merge from submit-queue (batch tested with PRs 62409, 62856). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add node authorizer contention benchmark

* Makes the node authorization benchmark run in parallel
* Runs the tests a second time with a background goroutine pushing graph modifications at a rate of 100 per second (to test authorization performance with contention on the graph lock).

Graph modifications come from the informers watching objects relevant to node authorization, and only fire when a relevant change is made (for example, most node updates do not trigger a graph modification, only ones which change the node's config source configmap reference; most pod updates do not trigger a graph modification, only ones that set the pod's nodeName or uid)

The results do not indicate bottlenecks in the authorizer, even under higher-than-expected write contention.

```
$ go test ./plugin/pkg/auth/authorizer/node/ -run foo -bench 'Authorization' -benchmem -v
goos: darwin
goarch: amd64
pkg: k8s.io/kubernetes/plugin/pkg/auth/authorizer/node
BenchmarkAuthorization/allowed_node_configmap-8                                 596 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_configmap-8                                      609 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_secret_via_pod-8                                 586 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_shared_secret_via_pod-8                        18202 ns/op   542 B/op   11 allocs/op     100000
BenchmarkAuthorization/disallowed_node_configmap-8                              900 ns/op   691 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_configmap-8                                   868 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_secret_via_pod-8                              875 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_shared_secret_via_pvc-8                      1215 ns/op   948 B/op   22 allocs/op    1000000
BenchmarkAuthorization/disallowed_pvc-8                                         912 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_pv-8                                         1137 ns/op   834 B/op   19 allocs/op    2000000
BenchmarkAuthorization/disallowed_attachment_-_no_relationship-8                892 ns/op   677 B/op   16 allocs/op    2000000
BenchmarkAuthorization/disallowed_attachment_-_feature_disabled-8               236 ns/op   208 B/op    2 allocs/op   10000000
BenchmarkAuthorization/allowed_attachment_-_feature_enabled-8                   723 ns/op   593 B/op   12 allocs/op    2000000

BenchmarkAuthorization/contentious_allowed_node_configmap-8                     726 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_configmap-8                          698 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_secret_via_pod-8                     778 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_shared_secret_via_pod-8            21406 ns/op   638 B/op   13 allocs/op     100000
BenchmarkAuthorization/contentious_disallowed_node_configmap-8                 1135 ns/op   692 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_configmap-8                      1239 ns/op   691 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_secret_via_pod-8                 1043 ns/op   692 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_shared_secret_via_pvc-8          1404 ns/op   950 B/op   22 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_pvc-8                            1177 ns/op   693 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_pv-8                             1295 ns/op   834 B/op   19 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_attachment_-_no_relationship-8   1170 ns/op   676 B/op   16 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_attachment_-_feature_disabled-8   262 ns/op   208 B/op    2 allocs/op   10000000
BenchmarkAuthorization/contentious_allowed_attachment_-_feature_enabled-8       790 ns/op   593 B/op   12 allocs/op    2000000

--- BENCH: BenchmarkAuthorization
   node_authorizer_test.go:592: graph modifications during non-contention test: 0
   node_authorizer_test.go:589: graph modifications during contention test: 6301
   node_authorizer_test.go:590: <1ms=5507, <10ms=128, <25ms=43, <50ms=65, <100ms=135, <250ms=328, <500ms=93, <1000ms=2, >1000ms=0
PASS
ok     k8s.io/kubernetes/plugin/pkg/auth/authorizer/node   112.616s
```

```release-note
NONE
```
2018-04-23 01:35:14 -07:00
Pavel Pospisil d3ddf7eb8b Always Start pvc-protection-controller and pv-protection-controller
After K8s 1.10 is upgraded to K8s 1.11, the [kubernetes.io/pvc-protection] finalizer is added to PVCs
because the StorageObjectInUseProtection feature will be GA in K8s 1.11.
However, when K8s 1.11 is downgraded to K8s 1.10 and the StorageObjectInUseProtection feature is disabled,
the finalizers remain on the PVCs; since the pvc-protection-controller is not started in K8s 1.10, the finalizers
are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system
but remain in the Terminating phase.
The same applies to the pv-protection-controller and the [kubernetes.io/pv-protection] finalizer on PVs.

That's why the pvc-protection-controller is now always started: it removes finalizers
from PVCs automatically when a PVC is not in active use by a pod.
Likewise, the pv-protection-controller is always started to remove finalizers from PVs automatically when a PV is not
Bound to a PVC.

Related issue: https://github.com/kubernetes/kubernetes/issues/60764
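
To check whether a Terminating claim is blocked on the finalizer (claim name is illustrative):

```
$ kubectl get pvc my-claim -o jsonpath='{.metadata.finalizers}'
```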
2018-04-20 19:54:50 +02:00
Mike Danese 35de82094a authz: nodes should not be able to delete themselves 2018-04-20 10:22:07 -07:00
Kubernetes Submit Queue fc7527537f
Merge pull request #62336 from deads2k/rbac-05-scale
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add statefulset scaling permission to admins, editors, and viewers

StatefulSets are missing scale permissions, so users can't scale them.


```release-note
fix permissions to allow statefulset scaling for admins, editors, and viewers
```
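
After this fix, namespace admins and editors can scale stateful sets directly (name and replica count are illustrative):

```
$ kubectl scale statefulset web --replicas=5
```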
2018-04-20 05:31:11 -07:00
Jordan Liggitt 1c6998a2f3
Add node authorizer contention benchmark 2018-04-19 23:11:54 -04:00