Automatic merge from submit-queue
Rephrase 'pv not found in cache' warnings.
When kubelet starts a pod that refers to a non-existing PV, PVC or Node, it should clearly show that the requested object does not exist.
The previous message, `PersistentVolumeClaim 'default/ceph-claim-wm' is not in cache`, looks like a random kubelet hiccup, while `PersistentVolumeClaim 'default/ceph-claim-wm' not found` suggests that the object may not exist at all and that it might be a user error.
Fixes #27523
Automatic merge from submit-queue
AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it into a zone.
We hash the volume name so we don't bias toward the first few zones.
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset. In that case we hash
the base name.
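A rough Go sketch of this spreading heuristic; the function names (`chooseZoneForVolume`, `hashAndIndex`) and the use of an FNV hash are illustrative assumptions, not the exact upstream code:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
	"strconv"
	"strings"
)

// chooseZoneForVolume picks a zone for a volume name. PetSet-style names
// (ending in "-<number>") hash only the base name and use the trailing
// ordinal as an offset, so the claims of one PetSet spread across zones.
func chooseZoneForVolume(zones []string, volumeName string) string {
	sort.Strings(zones) // stable ordering so the hash maps consistently
	hash, index := hashAndIndex(volumeName)
	return zones[(hash+index)%uint32(len(zones))]
}

// hashAndIndex hashes the volume name; for PetSet-style names it hashes the
// base name and returns the trailing number as an offset.
func hashAndIndex(name string) (uint32, uint32) {
	base, index := name, uint32(0)
	if i := strings.LastIndex(name, "-"); i != -1 {
		if n, err := strconv.ParseUint(name[i+1:], 10, 32); err == nil {
			base, index = name[:i], uint32(n)
		}
	}
	h := fnv.New32a()
	h.Write([]byte(base))
	return h.Sum32(), index
}

func main() {
	zones := []string{"us-east-1a", "us-east-1b", "us-east-1c"}
	for _, pvc := range []string{"data-web-0", "data-web-1", "data-web-2"} {
		fmt.Println(pvc, "->", chooseZoneForVolume(zones, pvc))
	}
}
```

With three zones, `data-web-0`, `data-web-1` and `data-web-2` land in three different zones: the base name `data-web` hashes once and the ordinal walks the zone list.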
Automatic merge from submit-queue
add unit and integration tests for rbac authorizer
This PR adds lots of tests for the RBAC authorizer.
The plan over the next couple days is to add a lot more test cases.
Updates #23396
cc @erictune
We had a long-lasting bug which prevented creation of volumes in
non-master zones, because the cloudprovider in the volume label
admission controller is not initialized with the multizone setting
(issue #27656).
This implements a simple workaround: if the volume is created with the
failure-domain zone label, we look for the volume in that zone. This is
more efficient, avoids introducing a new semantic, and allows users (and
the dynamic provisioner) to create volumes in non-master zones.
Fixes #27657
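A minimal, self-contained sketch of that narrowing; the helper name `zonesToSearch` and its signature are illustrative assumptions, only the label key is the real well-known zone label:

```go
package main

import "fmt"

// Well-known zone label applied by the volume label admission controller
// and the dynamic provisioner.
const labelZoneFailureDomain = "failure-domain.beta.kubernetes.io/zone"

// zonesToSearch returns only the labeled zone when the volume carries the
// failure-domain label, and falls back to all managed zones otherwise.
func zonesToSearch(volumeLabels map[string]string, managedZones []string) []string {
	if zone, ok := volumeLabels[labelZoneFailureDomain]; ok && zone != "" {
		return []string{zone}
	}
	return managedZones
}

func main() {
	labels := map[string]string{labelZoneFailureDomain: "us-central1-b"}
	fmt.Println(zonesToSearch(labels, []string{"us-central1-a", "us-central1-b"}))
}
```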
Automatic merge from submit-queue
Considering all nodes for the scheduler cache to allow lookups
Fixes the actual issue that led me to create https://github.com/kubernetes/kubernetes/issues/22554
Currently the nodes in the cache provided to the predicates exclude the unschedulable nodes, using field-level filtering on the watch results. This causes the above issue because the `ServiceAffinity` predicate uses the cached node list to look up the node metadata for a peer pod (another pod belonging to the same service). Since this peer pod could be hosted on a node that is currently unschedulable, the lookup can fail, and the pod then fails to be scheduled.
As part of the fix, we are now including all nodes in the watch results and excluding the unschedulable nodes using `NodeCondition`.
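Roughly, the filtering now happens when building the candidate host list rather than when populating the cache; a pared-down sketch, with types and names as stand-ins for the real scheduler code:

```go
package main

import "fmt"

// Node is a pared-down stand-in for the real API object.
type Node struct {
	Name          string
	Unschedulable bool
}

// getSchedulableNodes keeps every node in the cache but filters the
// unschedulable ones out only when building the list of candidate hosts,
// so cache lookups (e.g. by ServiceAffinity) still see all nodes.
func getSchedulableNodes(allNodes []Node) []Node {
	schedulable := make([]Node, 0, len(allNodes))
	for _, n := range allNodes {
		if !n.Unschedulable {
			schedulable = append(schedulable, n)
		}
	}
	return schedulable
}

func main() {
	nodes := []Node{{Name: "node-a"}, {Name: "node-b", Unschedulable: true}}
	fmt.Println(getSchedulableNodes(nodes))
}
```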
@derekwaynecarr PTAL
Automatic merge from submit-queue
scheduler: remove unused random generator
The way the scheduler selects a host has been changed to round-robin.
Clean up the leftover random generator.
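For context, a minimal sketch of round-robin selection among the equally top-scoring hosts; the struct and field names are illustrative, and the real code operates on a scored priority list:

```go
package main

import "fmt"

// genericScheduler mirrors, in spirit, the scheduler struct that used to
// carry a random generator; with round-robin selection it is unneeded.
type genericScheduler struct {
	lastNodeIndex uint64
}

// selectHost picks among the equally top-scoring hosts in round-robin
// order rather than at random.
func (g *genericScheduler) selectHost(bestHosts []string) string {
	g.lastNodeIndex++
	return bestHosts[g.lastNodeIndex%uint64(len(bestHosts))]
}

func main() {
	g := &genericScheduler{}
	hosts := []string{"node-a", "node-b", "node-c"}
	for i := 0; i < 4; i++ {
		fmt.Println(g.selectHost(hosts))
	}
}
```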
Automatic merge from submit-queue
Add a NodeCondition "NetworkUnavailable" to prevent scheduling onto a node until the routes have been created
This is a new version of #26267 (based on top of that one).
The new workflow is:
- we have a "NetworkNotReady" condition
- when the Kubelet creates a node, it sets the condition to "true"
- the RouteController will set it to "false" when the route is created
- the scheduler schedules only onto nodes that don't have the "NetworkNotReady == true" condition (see the sketch after this list)
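A pared-down sketch of the scheduler-side check; the types are simplified stand-ins, and the condition appears as "NetworkNotReady" in the workflow above and as "NetworkUnavailable" in the title:

```go
package main

import "fmt"

// Pared-down stand-ins for the node API types.
type NodeCondition struct {
	Type   string
	Status string
}

type Node struct {
	Name       string
	Conditions []NodeCondition
}

// networkReady returns false while the node still carries the condition the
// kubelet set at creation time, i.e. before the RouteController has created
// the node's routes and flipped it to "False".
func networkReady(node Node) bool {
	for _, c := range node.Conditions {
		if c.Type == "NetworkUnavailable" && c.Status == "True" {
			return false
		}
	}
	return true
}

func main() {
	n := Node{Name: "node-a", Conditions: []NodeCondition{{Type: "NetworkUnavailable", Status: "True"}}}
	fmt.Println(networkReady(n)) // not schedulable yet
}
```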
@gmarek @bgrant0607 @zmerlynn @cjcullen @derekwaynecarr @danwinship @dcbw @lavalamp @vishh
Automatic merge from submit-queue
reduce conflict retries
Eliminates quota admission conflicts due to latent caches on the same API server.
@derekwaynecarr
Automatic merge from submit-queue
plumb Update resthandler to allow old/new comparisons in admission
Rework how updated objects are passed to rest storage Update methods (first pass at https://github.com/kubernetes/kubernetes/pull/23928#discussion_r61444342)
* allows centralizing precondition checks (uid and resourceVersion)
* allows admission to have the old and new objects on patch/update operations (sets us up for field level authorization, differential quota updates, etc)
* allows patch operations to avoid double-GETting the object to apply the patch
Overview of important changes:
* pkg/api/rest/rest.go
* changes the `rest.Update` interface to give rest storage an `UpdatedObjectInfo` interface instead of the object directly. To get the updated object, the storage must call `UpdatedObject()`, passing in the current object (a simplified sketch of this interface follows this list)
* pkg/api/rest/update.go
* provides a default `UpdatedObjectInfo` impl
* passes a copy of the updated object through any provided transforming functions and returns it when asked
* builds UID preconditions from the updated object if they can be extracted
* pkg/apiserver/resthandler.go
* Reworks update and patch operations to give old objects to admission
* pkg/registry/generic/registry/store.go
* Calls `UpdatedObject()` inside `GuaranteedUpdate` so it can provide the old object
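A simplified sketch of the `UpdatedObjectInfo` shape described above; the placeholder `Object` and `Preconditions` types and the use of `context.Context` stand in for the real runtime/api types:

```go
package rest

import "context"

// Object is a placeholder for the real runtime.Object.
type Object interface{}

// Preconditions is a placeholder for the real UID/resourceVersion preconditions.
type Preconditions struct {
	UID *string
}

// UpdatedObjectInfo is a sketch of what the reworked rest.Update receives
// instead of a fully materialized object.
type UpdatedObjectInfo interface {
	// Preconditions returns UID preconditions extracted from the updated
	// object (if any) so the storage layer can enforce them centrally.
	Preconditions() *Preconditions

	// UpdatedObject is given the current (old) object, typically inside a
	// GuaranteedUpdate call, and returns the new object after running any
	// transforming functions such as patch application and admission.
	UpdatedObject(ctx context.Context, oldObj Object) (newObj Object, err error)
}
```

Inside `GuaranteedUpdate` the store fetches the current object by name and hands it to `UpdatedObject()`, which is what lets admission see both the old and new objects and lets patch avoid a second GET.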
Todo:
- [x] Update rest.Update interface:
* Given the name of the object being updated
* To get the updated object data, the rest storage must pass the current object (fetched using the name) to an `UpdatedObject(ctx, oldObject) (newObject, error)` func. This is typically done inside a `GuaranteedUpdate` call.
- [x] Add old object to admission attributes interface
- [x] Update resthandler Update to move admission into the UpdatedObject() call
- [x] Update resthandler Patch to move the patch application and admission into the UpdatedObject() call
- [x] Add resttest tests to make sure oldObj is correctly passed to UpdatedObject(), and errors propagate back up
Follow-up:
* populate oldObject in admission for delete operations?
* update quota plugin to use `GetOldObject()` in admission attributes
* admission plugin to gate ownerReference modification on delete permission
* Decide how to handle preconditions (does that belong in the storage layer or in the resthandler layer?)
Automatic merge from submit-queue
Introduce node memory pressure condition to scheduler
Following the work done by @derekwaynecarr in https://github.com/kubernetes/kubernetes/pull/21274, this introduces a memory pressure predicate for the scheduler.
Missing:
* write unit tests
* test the implementation
At the moment this is a heads-up for further discussion of how the node's new memory pressure condition should be handled in the generic scheduler.
**Additional info**
* Based on [1], only best effort pods are subject to filtering (see the sketch after the references below).
* Based on [2], best effort pods are those pods "iff requests & limits are not specified for any resource across all containers".
[1] 542668cc79/docs/proposals/kubelet-eviction.md (scheduler)
[2] https://github.com/kubernetes/kubernetes/pull/14943
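A toy sketch of that filtering, assuming the simplified types shown; the real predicate would inspect the node's MemoryPressure condition and derive the pod's QoS class from its resource requests and limits:

```go
package main

import "fmt"

// Pod is a pared-down stand-in; BestEffort is true iff requests & limits
// are not specified for any resource across all containers.
type Pod struct {
	Name       string
	BestEffort bool
}

// Node is a pared-down stand-in; MemoryPressure mirrors the node's
// MemoryPressure condition being True.
type Node struct {
	Name           string
	MemoryPressure bool
}

// checkNodeMemoryPressure filters only best-effort pods away from nodes
// reporting memory pressure; guaranteed and burstable pods are unaffected.
func checkNodeMemoryPressure(pod Pod, node Node) bool {
	if pod.BestEffort && node.MemoryPressure {
		return false
	}
	return true
}

func main() {
	fmt.Println(checkNodeMemoryPressure(Pod{Name: "be", BestEffort: true}, Node{Name: "n1", MemoryPressure: true}))
	fmt.Println(checkNodeMemoryPressure(Pod{Name: "burstable"}, Node{Name: "n1", MemoryPressure: true}))
}
```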
Automatic merge from submit-queue
Use protobufs by default to communicate with apiserver (still store JSONs in etcd)
@lavalamp @kubernetes/sig-api-machinery
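For illustration, this is roughly how a client opts into the protobuf wire format; the `k8s.io/client-go/rest` package path and `Config` field names follow the later client-go API, so treat them as assumptions rather than the exact code from this change:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	// Hypothetical apiserver endpoint; the ContentType opt-in is the point.
	cfg := &rest.Config{Host: "https://127.0.0.1:6443"}
	// Ask the apiserver to speak protobuf on the wire; per this change,
	// objects are still persisted in etcd as JSON.
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println(cfg.ContentType)
}
```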
Automatic merge from submit-queue
Cache Webhook Authentication responses
Add a simple LRU cache w/ 2 minute TTL to the webhook authenticator.
Kubectl is a little spammy, w/ >= 4 API requests per command. This also prevents a single unauthenticated user from being able to DoS the remote authenticator.
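A minimal sketch of such a cache in front of an authenticator; the interface shape, names, and eviction details are illustrative assumptions, not the actual webhook plugin code:

```go
package webhookcache

import (
	"container/list"
	"sync"
	"time"
)

// Authenticator is a stand-in for the token-authenticator interface the
// webhook plugin implements.
type Authenticator interface {
	AuthenticateToken(token string) (user string, ok bool, err error)
}

type cacheEntry struct {
	token   string
	user    string
	ok      bool
	expires time.Time
}

// cachedAuthenticator wraps an Authenticator with a size-bounded LRU whose
// entries expire after a TTL, so repeated kubectl calls don't each hit the
// remote webhook.
type cachedAuthenticator struct {
	delegate Authenticator
	ttl      time.Duration
	maxSize  int

	mu    sync.Mutex
	order *list.List               // front = most recently used
	items map[string]*list.Element // token -> element holding *cacheEntry
}

// NewCachedAuthenticator builds the wrapper, e.g. with ttl = 2*time.Minute.
func NewCachedAuthenticator(delegate Authenticator, ttl time.Duration, maxSize int) *cachedAuthenticator {
	return &cachedAuthenticator{
		delegate: delegate,
		ttl:      ttl,
		maxSize:  maxSize,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *cachedAuthenticator) AuthenticateToken(token string) (string, bool, error) {
	c.mu.Lock()
	if el, found := c.items[token]; found {
		entry := el.Value.(*cacheEntry)
		if time.Now().Before(entry.expires) {
			c.order.MoveToFront(el)
			c.mu.Unlock()
			return entry.user, entry.ok, nil
		}
		// Expired: evict and fall through to the delegate.
		c.order.Remove(el)
		delete(c.items, token)
	}
	c.mu.Unlock()

	user, ok, err := c.delegate.AuthenticateToken(token)
	if err != nil {
		return "", false, err // don't cache errors
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	if el, found := c.items[token]; found {
		c.order.Remove(el) // another goroutine raced us; replace its entry
	}
	el := c.order.PushFront(&cacheEntry{token: token, user: user, ok: ok, expires: time.Now().Add(c.ttl)})
	c.items[token] = el
	if c.order.Len() > c.maxSize {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*cacheEntry).token)
	}
	return user, ok, nil
}
```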