Automatic merge from submit-queue
getting emailAddress from TLS cert
When Kubernetes uses a TLS cert to perform authentication, it uses the CommonName field of the cert as the authenticating user. In https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authenticator/request/x509/x509.go#L106, alternative methods are defined to use the emailAddress or the DNSName as the authenticating user. The method that uses the emailAddress is not comprehensive, as this information can be encoded in different places in the certificate. This PR fixes that; see the sketch below.
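A minimal sketch of the idea in Go, assuming a parsed `*x509.Certificate`; the helper name and the fallback order are illustrative, not the exact code in x509.go:

```go
package x509auth

import (
	"crypto/x509"
	"encoding/asn1"
)

// oidEmailAddress is the PKCS#9 emailAddress attribute (1.2.840.113549.1.9.1).
var oidEmailAddress = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 9, 1}

// emailAddressFromCert looks for an email address in both places it can be
// encoded: the Subject Alternative Name extension and the subject's
// attribute list.
func emailAddressFromCert(cert *x509.Certificate) (string, bool) {
	// Preferred: rfc822Name SAN entries, already parsed by the standard library.
	if len(cert.EmailAddresses) > 0 {
		return cert.EmailAddresses[0], true
	}
	// Fallback: an emailAddress attribute embedded in the subject DN.
	for _, attr := range cert.Subject.Names {
		if attr.Type.Equal(oidEmailAddress) {
			if email, ok := attr.Value.(string); ok {
				return email, true
			}
		}
	}
	return "", false
}
```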
Automatic merge from submit-queue
oidc auth plugin: don't hard fail if provider is unavailable
When using OpenID Connect authentication, don't cause the API
server to fail if the provider is unavailable. This allows
installations to run OpenID Connect providers after starting the
API server, a common case when the provider is running on the
cluster itself.
Errors are now deferred to the authenticate method.
cc @sym3tri @erictune @aaronlevy @kubernetes/sig-auth
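A minimal sketch of the deferred-initialization pattern described above; the types and helpers here are stand-ins, not the real plugin's API:

```go
package oidc

import (
	"errors"
	"fmt"
	"sync"
)

// provider stands in for the OIDC discovery client; stubbed for this sketch.
type provider struct{ issuerURL string }

func newProvider(issuerURL string) (*provider, error) {
	// Real code would fetch <issuerURL>/.well-known/openid-configuration here.
	return &provider{issuerURL: issuerURL}, nil
}

func (p *provider) verifyToken(token string) (string, bool, error) {
	return "", false, errors.New("not implemented in this sketch")
}

type Authenticator struct {
	issuerURL string
	mu        sync.Mutex
	provider  *provider // nil until the issuer has been reached successfully
}

// New no longer contacts the issuer, so API server startup cannot fail here.
func New(issuerURL string) *Authenticator {
	return &Authenticator{issuerURL: issuerURL}
}

// AuthenticateToken initializes the provider lazily; an unavailable provider
// now yields per-request errors rather than a failed server start.
func (a *Authenticator) AuthenticateToken(token string) (string, bool, error) {
	a.mu.Lock()
	if a.provider == nil {
		p, err := newProvider(a.issuerURL)
		if err != nil {
			a.mu.Unlock()
			return "", false, fmt.Errorf("oidc: provider not ready: %v", err)
		}
		a.provider = p
	}
	p := a.provider
	a.mu.Unlock()
	return p.verifyToken(token)
}
```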
Automatic merge from submit-queue
Add error log for Run function in server.go
When clientcmd.BuildConfigFromFlags or os.Hostname returns an error, the Run function logs nothing, and neither does its caller (the scheduler's main). I suggest adding error logs there; see the sketch below.
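A hedged sketch of the suggestion, using today's client-go import path (the PR predates it) and glog for the logging:

```go
package app

import (
	"os"

	"github.com/golang/glog"
	"k8s.io/client-go/tools/clientcmd"
)

// Run sketch: surface the previously silent errors.
func Run(master, kubeconfigPath string) error {
	kubeconfig, err := clientcmd.BuildConfigFromFlags(master, kubeconfigPath)
	if err != nil {
		glog.Errorf("unable to build config from flags: %v", err)
		return err
	}
	_ = kubeconfig // used to build clients in the real server

	hostname, err := os.Hostname()
	if err != nil {
		glog.Errorf("unable to get hostname: %v", err)
		return err
	}
	_ = hostname // used for the leader-election identity in the real server

	// ... construct and start the scheduler ...
	return nil
}
```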
Automatic merge from submit-queue
scheduler: change phantom pod test from integration into unit test
This is an effort for #24440.
Why this PR?
- Integration tests are hard to debug. We can model the test as a unit test similar to [TestSchedulerForgetAssumedPodAfterDelete()](132ebb091a/plugin/pkg/scheduler/scheduler_test.go (L173)). The current test exercises the expiry case; we can change that to a delete.
- Add a test similar to TestSchedulerForgetAssumedPodAfterDelete() to test for phantom pods.
- Refactor the scheduler tests to share code between TestSchedulerNoPhantomPodAfterExpire() and TestSchedulerNoPhantomPodAfterDelete(); a sketch of the shared-setup pattern follows this list.
- Decouple the scheduler tests from scheduler events: stop relying on events.
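A sketch of that shared-setup pattern; the cache and helpers are stand-ins for the real plugin/pkg/scheduler types:

```go
package scheduler

import "testing"

type fakeCache struct{ assumed map[string]bool }

func (c *fakeCache) assume(pod string)   { c.assumed[pod] = true }
func (c *fakeCache) forget(pod string)   { delete(c.assumed, pod) }
func (c *fakeCache) has(pod string) bool { return c.assumed[pod] }

// setupAssumedPod is the code shared by both tests: schedule a pod so that
// it is assumed in the cache.
func setupAssumedPod(t *testing.T) *fakeCache {
	c := &fakeCache{assumed: map[string]bool{}}
	c.assume("pod-1")
	return c
}

func TestSchedulerNoPhantomPodAfterExpire(t *testing.T) {
	c := setupAssumedPod(t)
	c.forget("pod-1") // stands in for the TTL expiring the assumed pod
	if c.has("pod-1") {
		t.Fatal("expired pod must not linger as a phantom in the cache")
	}
}

func TestSchedulerNoPhantomPodAfterDelete(t *testing.T) {
	c := setupAssumedPod(t)
	c.forget("pod-1") // stands in for a delete event removing the pod
	if c.has("pod-1") {
		t.Fatal("deleted pod must not linger as a phantom in the cache")
	}
}
```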
Automatic merge from submit-queue
Track object modifications in fake clientset
Fake clientset is used by unit tests extensively but it has some
shortcomings:
- no filtering on namespace and name: tests that want to test objects in
multiple namespaces end up getting all objects from this clientset,
as it doesn't perform any filtering based on name and namespace;
- updates and deletes don't modify the clientset state, so some tests
can get unexpected results if they modify/delete objects using the
clientset;
- it's possible to insert multiple objects with the same
kind/name/namespace; this leads to confusing behavior, as retrieval is
based on insertion order but keeps returning the last added object
until more objects are added.
This change updates the core.ObjectRetriever implementation to track
object adds, updates and deletes.
Some unit tests were depending on the previous (and somewhat incorrect)
behavior. These are fixed in the following few commits.
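A usage sketch of the tracked behavior, written against the modern client-go fake (this PR predates that exact API, but the semantics are the same): creates and deletes mutate the fake's state, and reads are filtered by namespace:

```go
package example

import (
	"context"
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func TestFakeClientsetTracksObjects(t *testing.T) {
	ctx := context.TODO()
	client := fake.NewSimpleClientset()

	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "team-a"}}
	if _, err := client.CoreV1().Pods("team-a").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		t.Fatal(err)
	}

	// Namespace filtering: listing another namespace returns nothing.
	pods, err := client.CoreV1().Pods("team-b").List(ctx, metav1.ListOptions{})
	if err != nil || len(pods.Items) != 0 {
		t.Fatalf("expected no pods in team-b, got %d (err=%v)", len(pods.Items), err)
	}

	// Deletes modify the fake's state, so a subsequent Get fails.
	if err := client.CoreV1().Pods("team-a").Delete(ctx, "web", metav1.DeleteOptions{}); err != nil {
		t.Fatal(err)
	}
	if _, err := client.CoreV1().Pods("team-a").Get(ctx, "web", metav1.GetOptions{}); err == nil {
		t.Fatal("expected not-found after delete")
	}
}
```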
Automatic merge from submit-queue
[Refactor] Make QoS naming consistent across the codebase
@derekwaynecarr @vishh PTAL. Can one of you please attach a LGTM.
Fake clientset no longer needs to be prepopulated with records: keeping
them in leads to name conflicts on creates. Also, since the fake
clientset now respects namespaces, we need to populate them correctly.
Automatic merge from submit-queue
Omit invalid affinity error in admission
Fixes #27645. cc @smarterclayton
Not sure if this is too aggressive, but users should expect failure if they disable validation, after all.
Automatic merge from submit-queue
[Refactor] QOS to have QOS Class type for QoS classes
This PR adds a QOSClass type and defines QOSClass constants for the three QoS classes.
It would be good to use this in all future QoS-related features.
This would also be useful for the [pod level cgroups isolation proposal](https://github.com/kubernetes/kubernetes/pull/26751) that I am working on.
@vishh PTAL
Signed-off-by: Buddha Prakash <buddhap@google.com>
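The shape of the change as described above, sketched; the exact package location may differ:

```go
package qos

// QOSClass defines the supported QoS classes of pods.
type QOSClass string

const (
	// Guaranteed: every container has CPU and memory limits equal to requests.
	Guaranteed QOSClass = "Guaranteed"
	// Burstable: at least one container has a CPU or memory request set.
	Burstable QOSClass = "Burstable"
	// BestEffort: no container has any CPU or memory request or limit.
	BestEffort QOSClass = "BestEffort"
)
```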
Automatic merge from submit-queue
Fix #25606: Add a length check for "predicateFuncs" in generic_scheduler.go
Fix #25606
The PR adds a length check on the "predicateFuncs" parameter of the "findNodesThatFit" function in generic_scheduler.go.
In "findNodesThatFit", if "predicateFuncs" is empty, we can set filtered to nodes.Items directly and skip traversing nodes.Items; see the sketch below.
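A minimal sketch of the short-circuit with simplified types: with no predicates registered, every node fits, so the per-node loop can be skipped entirely:

```go
package scheduler

type node struct{ name string }

type fitPredicate func(pod string, n node) bool

func findNodesThatFit(pod string, nodes []node, predicateFuncs map[string]fitPredicate) []node {
	// New fast path: no predicates means nothing can filter a node out.
	if len(predicateFuncs) == 0 {
		return nodes
	}
	filtered := nodes[:0:0]
	for _, n := range nodes {
		fits := true
		for _, predicate := range predicateFuncs {
			if !predicate(pod, n) {
				fits = false
				break
			}
		}
		if fits {
			filtered = append(filtered, n)
		}
	}
	return filtered
}
```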
Automatic merge from submit-queue
plugin/pkg/auth/authorizer/webhook: log request errors
Currently the API server only checks the errors returned by an
authorizer plugin; it doesn't return or log them[0]. This makes
an incorrectly configured webhook authorizer plugin extremely
difficult to debug.
Add a logging statement if the request to the remote service fails,
as this indicates misconfiguration; see the sketch below.
[0] https://goo.gl/9zZFv4
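A minimal sketch of the added logging, with the REST call stubbed out; a transport-level failure gets logged because it points at misconfiguration rather than a mere deny:

```go
package webhook

import (
	"errors"

	"github.com/golang/glog"
)

// post stands in for the REST call to the remote webhook service.
func post(review interface{}) error { return errors.New("connection refused") }

// authorize sketch: a failure to reach the remote service is logged, since
// it almost always means the webhook authorizer is misconfigured.
func authorize(review interface{}) (bool, error) {
	if err := post(review); err != nil {
		glog.Errorf("Failed to make webhook authorizer request: %v", err)
		return false, err
	}
	// ... inspect the returned access review status ...
	return true, nil
}
```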
Automatic merge from submit-queue
golint fixes for AWS cloudprovider
Among other things, golint doesn't like receivers that are inconsistently named or called "self". Or structs named aws.AWSservices, aws.AWSCloud, etc.
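An illustrative before/after of the receiver-name complaint (names here are examples, not the actual AWS code):

```go
package aws

type Cloud struct{ region string }

// Before: golint flags the receiver name "self" (and inconsistent receiver
// names across methods on the same type):
//
//	func (self *Cloud) Region() string { return self.region }
//
// After: a short receiver name, used consistently on every method.
func (c *Cloud) Region() string { return c.region }
```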
Automatic merge from submit-queue
Rephrase 'pv not found in cache' warnings.
When kubelet starts a pod that refers to non-existing PV, PVC or Node, it should clearly show that the requested element does not exist.
Previous `PersistentVolumeClaim 'default/ceph-claim-wm' is not in cache` looks like a random kubelet hiccup, while `PersistentVolumeClaim 'default/ceph-claim-wm' not found` suggests that the object may not exist at all and that it might be a user error.
Fixes #27523
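The flavor of the change, sketched; the real call sites vary:

```go
package volume

import "fmt"

// errPVCNotFound phrases the failure as a missing object rather than a
// cache hiccup.
func errPVCNotFound(namespace, name string) error {
	// Previously: "PersistentVolumeClaim '%s/%s' is not in cache".
	return fmt.Errorf("PersistentVolumeClaim '%s/%s' not found", namespace, name)
}
```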
Automatic merge from submit-queue
AWS/GCE: Spread PetSet volume creation across zones, create GCE volumes in non-master zones
Long term we plan on integrating this into the scheduler, but in the
short term we use the volume name to place it onto a zone.
We hash the volume name so we don't bias to the first few zones.
If the volume name "looks like" a PetSet volume name (ending with
-<number>) then we use the number as an offset. In that case we hash
the base name.
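A sketch of the heuristic with hypothetical helper and inputs; the real implementation lives in the AWS/GCE cloud providers:

```go
package volume

import (
	"hash/fnv"
	"regexp"
	"sort"
	"strconv"
)

// petSetSuffix matches names ending in -<number>, e.g. "data-web-3".
var petSetSuffix = regexp.MustCompile(`^(.+)-(\d+)$`)

// chooseZoneForVolume hashes the volume name so the first zones in the list
// are not favored. For PetSet-style names it hashes the base name and adds
// the ordinal as an offset, spreading one set's volumes across zones.
// The zones slice is sorted in place so the same name always maps to the
// same zone regardless of input order.
func chooseZoneForVolume(zones []string, volumeName string) string {
	sort.Strings(zones)
	name, offset := volumeName, uint32(0)
	if m := petSetSuffix.FindStringSubmatch(volumeName); m != nil {
		name = m[1]
		n, _ := strconv.Atoi(m[2])
		offset = uint32(n)
	}
	h := fnv.New32a()
	h.Write([]byte(name))
	return zones[(h.Sum32()+offset)%uint32(len(zones))]
}
```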
Automatic merge from submit-queue
add unit and integration tests for rbac authorizer
This PR adds lots of tests for the RBAC authorizer.
The plan over the next couple days is to add a lot more test cases.
Updates #23396
cc @erictune
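A sketch of the table-driven style such tests typically take; the rule and authorizer types here are stand-ins for the real RBAC API objects:

```go
package rbac

import "testing"

type rule struct{ verb, resource string }

type subjectRules map[string][]rule // user -> allowed rules

func authorized(rules subjectRules, user, verb, resource string) bool {
	for _, r := range rules[user] {
		if r.verb == verb && r.resource == resource {
			return true
		}
	}
	return false
}

func TestAuthorizer(t *testing.T) {
	rules := subjectRules{"alice": {{verb: "get", resource: "pods"}}}
	tests := []struct {
		user, verb, resource string
		want                 bool
	}{
		{"alice", "get", "pods", true},
		{"alice", "delete", "pods", false},
		{"bob", "get", "pods", false},
	}
	for _, tc := range tests {
		if got := authorized(rules, tc.user, tc.verb, tc.resource); got != tc.want {
			t.Errorf("authorized(%s %s %s) = %v, want %v", tc.user, tc.verb, tc.resource, got, tc.want)
		}
	}
}
```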
We had a long-lasting bug which prevented creation of volumes in
non-master zones, because the cloudprovider in the volume label
admission controller is not initialized with the multizone setting
(issue #27656).
This implements a simple workaround: if the volume is created with the
failure-domain zone label, we look for the volume in that zone. This is
more efficient, avoids introducing a new semantic, and allows users (and
the dynamic provisioner) to create volumes in non-master zones.
Fixes #27657
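A sketch of the workaround with hypothetical finder methods; only the zone label key is the real one:

```go
package admission

const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

// volumeFinder stands in for the cloud provider's volume lookup.
type volumeFinder interface {
	FindVolumeInZone(name, zone string) (map[string]string, error)
	FindVolume(name string) (map[string]string, error)
}

// volumeLabels prefers the zone recorded on the volume's labels, which lets
// the admission controller find volumes created in non-master zones even
// though its cloudprovider lacks the multizone setting.
func volumeLabels(f volumeFinder, name string, labels map[string]string) (map[string]string, error) {
	if zone, ok := labels[zoneLabel]; ok {
		return f.FindVolumeInZone(name, zone)
	}
	return f.FindVolume(name)
}
```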
Automatic merge from submit-queue
Considering all nodes for the scheduler cache to allow lookups
Fixes the actual issue that led me to create https://github.com/kubernetes/kubernetes/issues/22554
Currently the nodes in the cache provided to the predicates exclude the unschedulable nodes, filtered out at the field level on the watch results. This causes the issue above: the `ServiceAffinity` predicate uses the cached node list to look up node metadata for a peer pod (another pod belonging to the same service). Since that peer pod could be hosted on a currently unschedulable node, the lookup can fail, and the pod then fails to schedule.
As part of the fix, we now include all nodes in the watch results and exclude the unschedulable nodes using `NodeCondition`; see the sketch below.
@derekwaynecarr PTAL
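A sketch of the new filtering point with simplified types: the cache keeps every node, and unschedulable nodes are excluded when building the predicate's node list rather than by a field selector on the watch:

```go
package scheduler

type Node struct {
	Name          string
	Unschedulable bool // stands in for the spec field / NodeCondition check
}

// schedulableNodes keeps all nodes cached (so peer-pod lookups like
// ServiceAffinity's still succeed) but offers only schedulable ones to the
// predicates.
func schedulableNodes(cached []Node) []Node {
	out := make([]Node, 0, len(cached))
	for _, n := range cached {
		if n.Unschedulable {
			continue
		}
		out = append(out, n)
	}
	return out
}
```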