Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)
Switch scheduler to use generated listers/informers
Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
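For reference, a minimal sketch of the generated shared informer/lister pattern this moves toward, written against today's client-go import paths (at the time of this PR the generated code lived in-tree):
```go
package main

import (
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startPodLister wires up a shared informer factory and reads pods from its
// in-memory cache instead of hitting the API server directly.
func startPodLister(client kubernetes.Interface, stopCh <-chan struct{}) error {
	factory := informers.NewSharedInformerFactory(client, 0)
	podLister := factory.Core().V1().Pods().Lister()

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)

	// Served from the shared cache; many consumers can share one watch.
	_, err := podLister.Pods("kube-system").List(labels.Everything())
	return err
}
```
For the one-off cases mentioned above, a hand-rolled list/watch with a field selector (e.g. `cache.NewListWatchFromClient`) remains the natural fit.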
I think this can wait until master is open for 1.7 pulls, given that we're close to the 1.6 freeze.
After this and #41482 go in, the only code left that references legacylisters will be federation, and 1 bit in a stateful set unit test (which I'll clean up in a follow-up).
@resouer I imagine this will conflict with your equivalence class work, so one of us will be doing some rebasing 😄
cc @wojtek-t @gmarek @timothysc @jayunit100 @smarterclayton @deads2k @liggitt @sttts @derekwaynecarr @kubernetes/sig-scheduling-pr-reviews @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue
redact detailed errors from healthz and expose in default policy
Redacts detailed errors from `/healthz` to make it less sensitive, and exposes it in the default policy.
@kubernetes/sig-auth-pr-reviews @kubernetes/sig-api-machinery-misc @liggitt
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)
Switch admission to use shared informers
Originally part of #40097
cc @smarterclayton @derekwaynecarr @deads2k @liggitt @sttts @gmarek @wojtek-t @timothysc @lavalamp @kubernetes/sig-scalability-pr-reviews @kubernetes/sig-api-machinery-pr-reviews
Automatic merge from submit-queue
move kube-dns to a separate service account
Switches the kubedns addon to run as a separate service account so that we can subdivide RBAC permissions for it. The RBAC permissions will need a little more refinement, which I expect to land in https://github.com/kubernetes/kubernetes/pull/38626 .
@cjcullen @kubernetes/sig-auth since this is directly related to enabling RBAC with subdivided permissions
@thockin @kubernetes/sig-network since this directly affects how kubedns is added.
```release-note
`kube-dns` now runs using a separate `system:serviceaccount:kube-system:kube-dns` service account which is automatically bound to the correct RBAC permissions.
```
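A sketch of the shape of the change, using Go object construction for illustration (the actual addon is a manifest; the container details are illustrative):
```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The dedicated account that RBAC bindings can target narrowly.
var kubeDNSAccount = corev1.ServiceAccount{
	ObjectMeta: metav1.ObjectMeta{Name: "kube-dns", Namespace: "kube-system"},
}

// Referencing it from the pod spec is what produces the
// system:serviceaccount:kube-system:kube-dns identity.
var kubeDNSPodSpec = corev1.PodSpec{
	ServiceAccountName: "kube-dns",
	Containers:         []corev1.Container{{Name: "kubedns", Image: "kubedns:illustrative"}},
}
```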
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)
kube-apiserver: add a bootstrap token authenticator for TLS bootstrapping
Follows up on https://github.com/kubernetes/kubernetes/pull/36101
Still needs:
* More tests.
* To be hooked up to the API server.
  - Do I have to do that in a separate PR after k8s.io/apiserver is synced?
* Docs (kubernetes.io PR).
* Figure out caching strategy.
* Release notes.
cc @kubernetes/sig-auth-api-reviews @liggitt @luxas @jbeda
```release-note
Added a new secret type "bootstrap.kubernetes.io/token" for dynamically creating TLS bootstrapping bearer tokens.
```
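A hedged sketch of what a secret of the new type looks like (key names follow the bootstrap-token design; all values are illustrative):
```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The authenticator accepts bearer tokens of the form <token-id>.<token-secret>.
var bootstrapToken = corev1.Secret{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "bootstrap-token-abcdef", // name embeds the token ID
		Namespace: "kube-system",
	},
	Type: "bootstrap.kubernetes.io/token",
	StringData: map[string]string{
		"token-id":                       "abcdef",
		"token-secret":                   "0123456789abcdef",
		"usage-bootstrap-authentication": "true",
		"expiration":                     "2017-06-01T00:00:00Z", // optional expiry
	},
}
```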
Automatic merge from submit-queue
Add scheduler predicate to filter for max Azure disks attached
**What this PR does / why we need it**: This PR adds a scheduler predicate for the maximum Azure Disk count. It allows the environment variable KUBE_MAX_PD_VOLS to be used with the scheduler on Azure, the same as is already possible with GCE and AWS.
This is needed because we need a way to cap the number of attachable disks on Azure, to avoid permanently failing disk attachments when Kubernetes schedules too many pods with AzureDisk volumes onto the same node.
I've chosen 16 as the default value for DefaultMaxAzureDiskVolumes, even though it may be too high for many smaller VM types and too low for the larger ones. This means the default behavior may change for clusters with large VM types; for smaller VM types, the behavior will not change (attaching will simply keep failing, as it does today).
In the future, the value should be determined at runtime on a per-node basis, depending on the VM size. I know this is already implemented in the ongoing Azure Managed Disks work, but I no longer remember where to find it, or who was working on it. Maybe @colemickens can help here.
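A hedged sketch of the override logic described above (the helper name and exact wiring are illustrative, not the in-tree code):
```go
import (
	"os"
	"strconv"
)

const defaultMaxAzureDiskVolumes = 16 // the default chosen in this PR

// maxAzureDiskVolumes returns the per-node cap the predicate enforces,
// honoring KUBE_MAX_PD_VOLS the same way the GCE/AWS predicates do.
func maxAzureDiskVolumes() int {
	if v := os.Getenv("KUBE_MAX_PD_VOLS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return defaultMaxAzureDiskVolumes
}
```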
**Release note**:
```release-note
Support KUBE_MAX_PD_VOLS on Azure
```
CC @colemickens @brendandburns
Automatic merge from submit-queue (batch tested with PRs 41421, 41440, 36765, 41722)
ResourceQuota ability to support default limited resources
Add support for the ability to configure the quota system to identify specific resources that are limited by default. A limited resource means its consumption is denied absent a covering quota. This is in contrast to the current behavior where consumption is unlimited absent a covering quota. Intended use case is to allow operators to restrict consumption of high-cost resources by default.
Example configuration:
**admission-control-config-file.yaml**
```
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: "ResourceQuota"
  configuration:
    apiVersion: resourcequota.admission.k8s.io/v1alpha1
    kind: Configuration
    limitedResources:
    - resource: pods
      matchContains:
      - pods
      - requests.cpu
    - resource: persistentvolumeclaims
      matchContains:
      - .storageclass.storage.k8s.io/requests.storage
```
In the above configuration, if a namespace lacks a quota covering any of the following:
* CPU
* any PVC associated with a particular storage class
the attempt to consume the resource is denied, with a message stating that the user has insufficient quota for the matching resource.
```
$ kubectl create -f pvc-gold.yaml
Error from server: error when creating "pvc-gold.yaml": insufficient quota to consume: gold.storageclass.storage.k8s.io/requests.storage
$ kubectl create quota quota --hard=gold.storageclass.storage.k8s.io/requests.storage=10Gi
$ kubectl create -f pvc-gold.yaml
... created
```
Automatic merge from submit-queue
Make controller-manager resilient to stale serviceaccount tokens
Now that the controller manager is spinning up controller loops using service accounts, we need to be more proactive in making sure the clients will actually work.
Future additional work:
* make a controller that reaps invalid service account tokens (c.f. https://github.com/kubernetes/kubernetes/issues/20165)
* allow updating the client held by a controller with a new token while the controller is running (c.f. https://github.com/kubernetes/kubernetes/issues/4672)
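A hedged sketch of the "be more proactive" idea: verify a service-account client actually works before handing it to a controller (the helper and the probe call are illustrative):
```go
import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForUsableClient polls a cheap API call until the token is accepted,
// so controllers don't start with a stale service account token.
func waitForUsableClient(client kubernetes.Interface) error {
	return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
		if _, err := client.Discovery().ServerVersion(); err != nil {
			return false, nil // token may be stale or not yet propagated; retry
		}
		return true, nil
	})
}
```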
Automatic merge from submit-queue
Adjust nodiskconflict support based on iscsi multipath.
With multipath support in place, determining whether two iSCSI disks are the same should depend only on the IQN.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
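A hedged sketch of the comparison after this change (the helper name is illustrative):
```go
import corev1 "k8s.io/api/core/v1"

// With multipath, the same LUN is reachable through several target portals,
// so the portal address no longer identifies a disk; the IQN does.
func sameISCSIDisk(a, b *corev1.ISCSIVolumeSource) bool {
	return a.IQN == b.IQN
}
```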
Automatic merge from submit-queue
Improved code coverage for plugin/pkg/scheduler/algorithm/priorities/most_requested.go
**What this PR does / why we need it**:
Part of #39559; code coverage improved from 70+% to 80+%.
Automatic merge from submit-queue (batch tested with PRs 39373, 41585, 41617, 41707, 39958)
Feature-Gate affinity in annotations
**What this PR does / why we need it**:
Adds back basic flag-gated support for alpha affinity annotations
**Special notes for your reviewer**:
The reconcile function is placed in the lowest common denominator, which in this case is schedulercache, because flag-gated functions can't live in apimachinery.
**Release note**:
```
NONE
```
/cc @davidopp
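A hedged sketch of the flag-gated fallback (gate plumbing elided; the annotation key is the real alpha key, the helper and the precedence shown are assumptions of the sketch):
```go
import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
)

const affinityAnnotationKey = "scheduler.alpha.kubernetes.io/affinity"

// affinityForPod honors the JSON-encoded annotation only when the alpha gate
// is enabled; otherwise it uses the typed field on the pod spec.
func affinityForPod(pod *corev1.Pod, gateEnabled bool) (*corev1.Affinity, error) {
	if gateEnabled {
		if v, ok := pod.Annotations[affinityAnnotationKey]; ok {
			var a corev1.Affinity
			if err := json.Unmarshal([]byte(v), &a); err != nil {
				return nil, err
			}
			return &a, nil
		}
	}
	return pod.Spec.Affinity, nil
}
```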
Automatic merge from submit-queue (batch tested with PRs 41043, 39058, 41021, 41603, 41414)
add defaultTolerationSeconds admission controller
**What this PR does / why we need it**:
Split out from #34825, this adds a new admission controller that
1. adds toleration (with tolerationSeconds = 300) for taint `notReady:NoExecute` to every pod that does not already have a toleration for that taint, and
2. adds toleration (with tolerationSeconds = 300) for taint `unreachable:NoExecute` to every pod that does not already have a toleration for that taint.
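A hedged sketch of one injected toleration (the taint key shown is the current name; the alpha-era key differed):
```go
import corev1 "k8s.io/api/core/v1"

var defaultTolerationSeconds int64 = 300

// Injected only when the pod has no toleration for the taint already.
var notReadyToleration = corev1.Toleration{
	Key:               "node.kubernetes.io/not-ready", // alpha-era key was different
	Operator:          corev1.TolerationOpExists,
	Effect:            corev1.TaintEffectNoExecute,
	TolerationSeconds: &defaultTolerationSeconds,
}
```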
**Which issue this PR fixes**:
Related issue: #1574
Related PR: #34825
**Special notes for your reviewer**:
**Release note**:
```release-note
add defaultTolerationSeconds admission controller
```
Automatic merge from submit-queue (batch tested with PRs 41401, 41195, 41664, 41521, 41651)
Remove default failure domains from anti-affinity feature
Removing them is necessary to make the performance of this feature acceptable at some point.
With default failure domains (or in general, when multiple topology keys are possible), the "belongs to the same topology domain" relation between nodes is not transitive. And without transitivity, it's pretty much impossible to solve this efficiently.
@timothysc
Automatic merge from submit-queue (batch tested with PRs 37137, 41506, 41239, 41511, 37953)
Add field to control service account token automounting
Fixes https://github.com/kubernetes/kubernetes/issues/16779
* adds an `automountServiceAccountToken *bool` field to `ServiceAccount` and `PodSpec`
* if set in both the service account and pod, the pod wins
* if unset in both the service account and pod, we automount for backwards compatibility
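A minimal sketch of the pod-level opt-out (container details are illustrative):
```go
import corev1 "k8s.io/api/core/v1"

// podWithoutToken opts out of automounting; the pod-level field wins over
// whatever the service account specifies.
func podWithoutToken() *corev1.Pod {
	automount := false
	return &corev1.Pod{
		Spec: corev1.PodSpec{
			AutomountServiceAccountToken: &automount,
			Containers:                   []corev1.Container{{Name: "app", Image: "app:illustrative"}},
		},
	}
}
```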
```release-note
An `automountServiceAccountToken *bool` field was added to ServiceAccount and PodSpec objects. If set to `false` on a pod spec, no service account token is automounted in the pod. If set to `false` on a service account, no service account token is automounted for that service account unless explicitly overridden in the pod spec.
```
Automatic merge from submit-queue
Switch resourcequota controller to shared informers
Originally part of #40097
I have had some issues with this change in the past, when I updated `pkg/quota` to use the new informers while `pkg/controller/resourcequota` remained on the old informers. In this PR, both are switched to using the new informers. The issues in the past were lots of flaky test failures in the ResourceQuota e2es, where it would randomly fail to see deletions and handle replenishment. I am hoping that now that everything here consistently uses the new informers, there won't be any more of these flakes, but it's something to keep an eye out for.
I also think `pkg/controller/resourcequota` could be cleaned up. I don't think there's really any need for `replenishment_controller.go` any more since it's no longer running individual controllers per kind to replenish. It instead just uses the shared informer and adds event handlers to it. But maybe we do that in a follow up.
cc @derekwaynecarr @smarterclayton @wojtek-t @deads2k @sttts @liggitt @timothysc @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 40297, 41285, 41211, 41243, 39735)
Secure kube-scheduler
This PR:
* Adds a bootstrap `system:kube-scheduler` clusterrole
* Adds a bootstrap clusterrolebinding to the `system:kube-scheduler` user
* Sets up a kubeconfig for kube-scheduler on GCE (following the controller-manager pattern)
* Switches kube-scheduler to running with kubeconfig against secured port (salt changes, beware)
* Removes superuser permissions from kube-scheduler in local-up-cluster.sh
* Adds detailed RBAC deny logging
```release-note
On kube-up.sh clusters on GCE, kube-scheduler now contacts the API on the secured port.
```
Automatic merge from submit-queue (batch tested with PRs 41378, 41413, 40743, 41155, 41385)
Reconcile bootstrap clusterroles on server start
Currently, on server start, bootstrap roles and bindings are only created if there are no existing roles or rolebindings.
Instead, we should look at each bootstrap role and rolebinding, and ensure it exists and has required permissions and subjects at server start. This allows seamless upgrades to new versions that define roles for new controllers, or add permissions to existing roles.
```release-note
Default RBAC ClusterRole and ClusterRoleBinding objects are automatically updated at server start to add missing permissions and subjects (extra permissions and subjects are left in place). To prevent autoupdating a particular role or rolebinding, annotate it with `rbac.authorization.kubernetes.io/autoupdate=false`.
```
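A sketch of the opt-out from the release note (the role name is illustrative):
```go
import (
	rbacv1beta1 "k8s.io/api/rbac/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Annotated roles are skipped by the startup reconciler, freezing any
// local modifications in place.
var frozenRole = rbacv1beta1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{
		Name: "example:custom-role",
		Annotations: map[string]string{
			"rbac.authorization.kubernetes.io/autoupdate": "false",
		},
	},
}
```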
Automatic merge from submit-queue (batch tested with PRs 41378, 41413, 40743, 41155, 41385)
'core' package to prevent dependency creep and isolate core functiona…
**What this PR does / why we need it**:
Solves these two problems:
- The top-level scheduler root directory has several files that are really needed only by the factory and algorithm implementations, so they should be subpackages of scheduler.
- In addition, scheduler.go and generic_scheduler.go don't naturally differentiate themselves when they are in the same package. scheduler.go is essentially the daemon entry point, so it should be isolated from the core.
*No release note needed*
Automatic merge from submit-queue (batch tested with PRs 41382, 41407, 41409, 41296, 39636)
Update to use proxy subresource consistently
Proxy subresources have been in place since 1.2.0 and improve the ability to put policy in place around proxy access.
This PR updates the last few clients to use proxy subresources rather than the root proxy.
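For context, the legacy root-proxy path was `/api/v1/proxy/namespaces/<ns>/pods/<name>`, while the subresource form is `/api/v1/namespaces/<ns>/pods/<name>/proxy`, which authorization policy can target per object. A hedged sketch of issuing a subresource proxy request with today's client-go (the pod name is illustrative; older client-go's `Do()` took no context):
```go
import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// proxyHealthz GETs /healthz on a pod via the proxy subresource.
func proxyHealthz(client kubernetes.Interface) ([]byte, error) {
	return client.CoreV1().RESTClient().Get().
		Namespace("kube-system").
		Resource("pods").
		Name("example-pod").
		SubResource("proxy").
		Suffix("healthz").
		Do(context.TODO()).
		Raw()
}
```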
Automatic merge from submit-queue (batch tested with PRs 41357, 41178, 41280, 41184, 41278)
Switch RBAC subject apiVersion to apiGroup in v1beta1
When referencing a subject from an RBAC role binding, the API group and kind of the subject are needed to fully qualify the reference.
The version is not; it only adds complexity around rewriting the reference when returning the binding from different versions of the API, and when reconciling subjects.
This PR:
* v1beta1: change the subject `apiVersion` field to `apiGroup` (to match roleRef)
* v1alpha1: convert apiVersion to apiGroup for backwards compatibility
* all versions: add defaulting for the three allowed subject kinds
* all versions: add validation to the field so we can count on the data in etcd being good until we decide to relax the apiGroup restriction
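A sketch of v1beta1 subjects after the change, with the defaults from the release note (names are illustrative):
```go
import rbacv1beta1 "k8s.io/api/rbac/v1beta1"

var subjects = []rbacv1beta1.Subject{
	// User and Group subjects default to the RBAC API group.
	{Kind: "User", APIGroup: "rbac.authorization.k8s.io", Name: "alice"},
	// ServiceAccount subjects default to apiGroup "" (the core group).
	{Kind: "ServiceAccount", Name: "kube-dns", Namespace: "kube-system"},
}
```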
```release-note
RBAC `v1beta1` RoleBinding/ClusterRoleBinding subjects changed `apiVersion` to `apiGroup` to fully-qualify a subject. ServiceAccount subjects default to an apiGroup of `""`, User and Group subjects default to an apiGroup of `"rbac.authorization.k8s.io"`.
```
@deads2k @kubernetes/sig-auth-api-reviews @kubernetes/sig-auth-pr-reviews
Automatic merge from submit-queue
give nodes update/delete permissions
Delete permission is logically paired with create permission (and is used during self-registration scenarios, when a node has been restarted and an existing node object has a mismatched externalID).
We already need to scope the update nodes/status permission to only let a node update itself, and we would scope these permissions at the same time.
fixes https://github.com/kubernetes/kubernetes/issues/41224