Automatic merge from submit-queue
nodeports usage should be part of LoadBalancer service type
Since creating a Service of type LoadBalancer allocates NodePorts as well, it makes more sense to account for the NodePort usage in the LoadBalancer switch case.
Check here: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/rest.go#L553 for the logic on whether a NodePort should be assigned for the service.
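A minimal sketch of the accounting, using simplified stand-in types rather than the actual pkg/quota service evaluator: the LoadBalancer case charges NodePort usage in addition to the load balancer itself.
```
package main

import "fmt"

// ServiceType is a simplified stand-in for the Kubernetes service type.
type ServiceType string

const (
	ServiceTypeNodePort     ServiceType = "NodePort"
	ServiceTypeLoadBalancer ServiceType = "LoadBalancer"
)

// serviceUsage charges quota for a service. The key point: the
// LoadBalancer case also counts NodePort usage, because creating a
// LoadBalancer service allocates NodePorts too.
func serviceUsage(svcType ServiceType, numPorts int64) map[string]int64 {
	usage := map[string]int64{"services": 1}
	switch svcType {
	case ServiceTypeNodePort:
		usage["services.nodeports"] = numPorts
	case ServiceTypeLoadBalancer:
		usage["services.loadbalancers"] = 1
		usage["services.nodeports"] = numPorts // NodePorts are allocated here as well
	}
	return usage
}

func main() {
	fmt.Println(serviceUsage(ServiceTypeLoadBalancer, 2))
}
```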
Automatic merge from submit-queue
Curating Owners: pkg/quota
cc @vishh @derekwaynecarr
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is editable; please edit the `OWNERS` file to
remove the names of people who shouldn't be reviewing code in the
future from the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example).
Automatic merge from submit-queue
optimize conditions of ServiceReplenishmentUpdateFunc to replenish service
Originally, the replenishQuota method ignored its third parameter (the object) even when callers passed one; the function was neither efficient nor precise. This change uses the third parameter to compute MatchResources, which is more exact. For example, if the old pod was quota-tracked and the new one was not, replenishQuota now focuses only on the resources used by the old pod. If the third parameter is nil, the behavior is the same as before.
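Roughly, the intent looks like the sketch below (simplified stand-in types; the real ServiceReplenishmentUpdateFunc and replenishQuota signatures differ): an update triggers replenishment only when the old and new objects actually differ in quota-tracked usage, and the old object is passed along so its usage can be examined precisely.
```
package main

import "fmt"

// Service is a simplified stand-in for the Kubernetes Service type.
type Service struct {
	Name     string
	Type     string
	NumPorts int
}

// quotaTracked is a hypothetical helper: NodePort and LoadBalancer
// services consume quota beyond the plain service count.
func quotaTracked(s *Service) bool {
	return s.Type == "NodePort" || s.Type == "LoadBalancer"
}

// replenishmentUpdateFunc sketches the optimized condition: replenish
// only when the update actually changes quota-tracked usage, and hand
// the old object to the replenishment callback.
func replenishmentUpdateFunc(replenish func(old *Service)) func(oldSvc, newSvc *Service) {
	return func(oldSvc, newSvc *Service) {
		if quotaTracked(oldSvc) != quotaTracked(newSvc) || oldSvc.NumPorts != newSvc.NumPorts {
			replenish(oldSvc)
		}
	}
}

func main() {
	onUpdate := replenishmentUpdateFunc(func(old *Service) {
		fmt.Println("replenish quota using usage of", old.Name)
	})
	before := &Service{Name: "svc", Type: "LoadBalancer", NumPorts: 2}
	after := &Service{Name: "svc", Type: "ClusterIP", NumPorts: 2}
	onUpdate(before, after) // tracked status changed: replenishment fires
}
```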
Automatic merge from submit-queue
add union registry for quota
Adds the ability to combine multiple quota registries. Kube needs this for other types.
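A minimal sketch of the pattern, assuming simplified Evaluator and Registry interfaces (the real pkg/quota interfaces differ):
```
package main

import "fmt"

// Evaluator and Registry are simplified stand-ins for the pkg/quota
// interfaces.
type Evaluator interface {
	Usage(obj interface{}) map[string]int64
}

type Registry interface {
	Evaluators() map[string]Evaluator // keyed by resource kind
}

// UnionRegistry combines several registries into one; on a key
// collision, a later registry wins.
type UnionRegistry []Registry

func (u UnionRegistry) Evaluators() map[string]Evaluator {
	out := map[string]Evaluator{}
	for _, r := range u {
		for kind, e := range r.Evaluators() {
			out[kind] = e
		}
	}
	return out
}

// mapRegistry is a trivial Registry for the usage example below.
type mapRegistry map[string]Evaluator

func (m mapRegistry) Evaluators() map[string]Evaluator { return m }

func main() {
	union := UnionRegistry{mapRegistry{"pods": nil}, mapRegistry{"services": nil}}
	fmt.Println(len(union.Evaluators())) // 2
}
```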
@derekwaynecarr
Automatic merge from submit-queue
Add support to quota pvc storage requests
Adds support for applying quota to cumulative `PersistentVolumeClaim` storage requests in a namespace.
Per our chat today @markturansky @abhgupta - this is not done (lacks unit testing), but is functional.
This lets quota enforcement for `PersistentVolumeClaim` occur at creation time. Supporting bind-time enforcement would require substantially more work. It's possible this is sufficient for many users, so I am opening it up for feedback.
In the future, I suspect we may want to treat local disk in a special manner, but that would have to be a different resource altogether (e.g. `requests.disk`) or something similar.
Example quota:
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    persistentvolumeclaims: "10"
    requests.storage: "40Gi"
```
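For illustration, a hedged sketch of the accounting with simplified types (the real evaluator lives in pkg/quota): each claim charges 1 toward `persistentvolumeclaims` and its requested storage toward `requests.storage`.
```
package main

import "fmt"

// PVC is a simplified stand-in for PersistentVolumeClaim.
type PVC struct {
	Name           string
	RequestStorage int64 // bytes from spec.resources.requests.storage
}

// pvcUsage charges quota at creation time: one claim plus its
// requested storage.
func pvcUsage(claim PVC) map[string]int64 {
	return map[string]int64{
		"persistentvolumeclaims": 1,
		"requests.storage":       claim.RequestStorage,
	}
}

func main() {
	fmt.Println(pvcUsage(PVC{Name: "data", RequestStorage: 5 << 30})) // 5Gi
}
```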
/cc @kubernetes/rh-cluster-infra @deads2k
Automatic merge from submit-queue
Init container quota is inaccurate
Usage charged should be the greater of the max across init containers and the
sum across all regular containers. Also, init container inputs need to be validated.
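A small sketch of that rule with plain integers (simplified; not the actual pod evaluator code):
```
package main

import "fmt"

// effectiveRequest applies the rule above: the charged request for a
// resource is max(max over init containers, sum over regular containers).
func effectiveRequest(initRequests, containerRequests []int64) int64 {
	var sum, maxInit int64
	for _, r := range containerRequests {
		sum += r
	}
	for _, r := range initRequests {
		if r > maxInit {
			maxInit = r
		}
	}
	if maxInit > sum {
		return maxInit
	}
	return sum
}

func main() {
	// init container asks for 200m; regular containers sum to 150m => charge 200m
	fmt.Println(effectiveRequest([]int64{200}, []int64{100, 50}))
}
```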
@derekwaynecarr
Callers are required to implement their interfaces, which removes the
potential for mistakes. We have a reflective test
pkg/api/meta_test.go#TestAccessorImplementations that verifies that all
objects registered to the scheme properly implement their interfaces.
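In the same spirit, a minimal sketch of a reflective interface check (the actual TestAccessorImplementations differs; `ObjectMetaAccessor` here is a hypothetical stand-in):
```
package main

import (
	"fmt"
	"reflect"
)

// ObjectMetaAccessor stands in for an interface every registered
// object must implement.
type ObjectMetaAccessor interface {
	GetName() string
}

type Pod struct{ Name string }

func (p *Pod) GetName() string { return p.Name }

// checkImplements reports which registered types fail to implement the
// accessor interface, mirroring the idea of a reflective implementation test.
func checkImplements(registered []interface{}) []string {
	iface := reflect.TypeOf((*ObjectMetaAccessor)(nil)).Elem()
	var failures []string
	for _, obj := range registered {
		t := reflect.TypeOf(obj)
		if !t.Implements(iface) {
			failures = append(failures, t.String())
		}
	}
	return failures
}

func main() {
	fmt.Println(checkImplements([]interface{}{&Pod{}})) // empty means all pass
}
```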
Automatic merge from submit-queue
ResourceQuota BestEffort scope aligned with Pod level QoS
This aligns quota with the changes in kubelet and CLI.
So if quota allows 10 `BestEffort` pods, it will now track properly with what the user sees given the changes in 1.3.
```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort
spec:
  hard:
    pods: "10"
  scopes:
  - BestEffort
```
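For reference, a simplified sketch of the BestEffort rule (stand-in types; the real QoS logic lives in the kubelet): a pod is BestEffort only if no container sets any requests or limits.
```
package main

import "fmt"

// Container is a simplified stand-in: Requests and Limits map resource
// names to quantities.
type Container struct {
	Requests map[string]string
	Limits   map[string]string
}

// isBestEffort mirrors the pod-level rule in simplified form.
func isBestEffort(containers []Container) bool {
	for _, c := range containers {
		if len(c.Requests) > 0 || len(c.Limits) > 0 {
			return false
		}
	}
	return true
}

func main() {
	pod := []Container{{}, {Requests: map[string]string{"cpu": "100m"}}}
	fmt.Println(isBestEffort(pod)) // false: one container sets a request
}
```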
/cc @vishh @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Quota ignores pod compute resources on updates
Scenario:
1. define a quota Q that tracks memory and cpu
2. create pod P that uses memory=100Mi, cpu=100m
3. update pod P to use memory=50Mi,cpu=10m
Expected Results:
Step 3 should fail with validation error.
Quota Q should not have changed.
Actual Results:
Step 3 fails validation, but quota Q is decremented so that memory usage is down 50Mi and cpu usage is down 90m. This is because the quota was updated even though the pod was going to fail validation.
Fix:
Quota should only support modifying pod compute resources when pods themselves support modifying their compute resources.
This also fixes https://github.com/kubernetes/kubernetes/issues/24352
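A hedged sketch of the reasoning behind the fix, with simplified stand-in types (not the actual evaluator): since pod compute resources are immutable, an update that changes them will fail validation, so the quota delta must be zero.
```
package main

import "fmt"

// PodResources is a simplified stand-in for a pod's compute requests.
type PodResources struct{ MemoryMi, CPUMilli int64 }

// usageDelta returns the quota delta for an update. While pod compute
// resources are immutable, a change in requests can only mean the
// update will fail validation, so no delta is charged.
func usageDelta(old, updated PodResources, resourcesMutable bool) PodResources {
	if !resourcesMutable {
		return PodResources{} // don't decrement quota for a doomed update
	}
	return PodResources{
		MemoryMi: updated.MemoryMi - old.MemoryMi,
		CPUMilli: updated.CPUMilli - old.CPUMilli,
	}
}

func main() {
	old := PodResources{MemoryMi: 100, CPUMilli: 100}
	updated := PodResources{MemoryMi: 50, CPUMilli: 10}
	fmt.Println(usageDelta(old, updated, false)) // {0 0}: quota Q unchanged
}
```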
/cc @smarterclayton - this is what we discussed.
fyi: @kubernetes/rh-cluster-infra