Automatic merge from submit-queue
Cache Webhook Authentication responses
Add a simple LRU cache w/ 2 minute TTL to the webhook authenticator.
Kubectl is a little spammy, with >= 4 API requests per command. This also prevents a single unauthenticated user from being able to DoS the remote authenticator.
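As a rough illustration of the caching idea, here is a minimal, self-contained sketch; the type and function names are hypothetical, not the actual webhook package API, and the real implementation bounds memory with an LRU (eviction elided here):
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cacheEntry pairs a cached authentication decision with its expiry time.
type cacheEntry struct {
	authenticated bool
	expires       time.Time
}

// tokenCache caches webhook responses per bearer token for a short TTL.
// (The real implementation uses an LRU to bound memory; eviction is
// omitted here for brevity.)
type tokenCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]cacheEntry
}

func newTokenCache(ttl time.Duration) *tokenCache {
	return &tokenCache{ttl: ttl, entries: make(map[string]cacheEntry)}
}

// authenticate consults the cache before falling back to the (expensive)
// remote webhook call.
func (c *tokenCache) authenticate(token string, remote func(string) bool) bool {
	c.mu.Lock()
	entry, ok := c.entries[token]
	c.mu.Unlock()
	if ok && time.Now().Before(entry.expires) {
		return entry.authenticated // cache hit: no remote round-trip
	}
	result := remote(token)
	c.mu.Lock()
	c.entries[token] = cacheEntry{authenticated: result, expires: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return result
}

func main() {
	calls := 0
	remote := func(string) bool { calls++; return true }
	cache := newTokenCache(2 * time.Minute)
	for i := 0; i < 4; i++ { // kubectl-style burst of requests
		cache.authenticate("my-token", remote)
	}
	fmt.Printf("4 requests, %d remote call(s)\n", calls) // prints: 4 requests, 1 remote call(s)
}
```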
Automatic merge from submit-queue
Add NetworkPolicy API Resource
API implementation of https://github.com/kubernetes/kubernetes/pull/24154
Still to do:
- [x] Get it working (See comments)
- [x] Make sure user-facing comments are correct.
- [x] Update naming in response to #24154
- [x] kubectl / client support
- [x] Release note.
```release-note
Implement NetworkPolicy v1beta1 API object / client support.
```
Next Steps:
- UTs in separate PR.
- e2e test in separate PR.
- make `Ports` + `From` pointers to slices (TODOs in code - to be done when auto-gen is fixed)
CC @thockin
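For reference, a trimmed-down, self-contained Go sketch of the shape of the new object; the field names follow the PR discussion, but the real definitions live in the extensions API group and use proper label selectors and TypeMeta:
```go
package main

import "fmt"

// Simplified stand-ins for the v1beta1 types added by this PR.
type NetworkPolicyPort struct {
	Protocol string // "TCP" or "UDP"
	Port     int
}

type NetworkPolicyPeer struct {
	PodSelector map[string]string // matchLabels only, for illustration
}

type NetworkPolicyIngressRule struct {
	// Per the TODO above, Ports and From will eventually become
	// pointers to slices to distinguish "unset" from "empty".
	Ports []NetworkPolicyPort
	From  []NetworkPolicyPeer
}

type NetworkPolicySpec struct {
	PodSelector map[string]string
	Ingress     []NetworkPolicyIngressRule
}

func main() {
	// Allow TCP/6379 from role=frontend pods to role=backend pods.
	policy := NetworkPolicySpec{
		PodSelector: map[string]string{"role": "backend"},
		Ingress: []NetworkPolicyIngressRule{{
			From:  []NetworkPolicyPeer{{PodSelector: map[string]string{"role": "frontend"}}},
			Ports: []NetworkPolicyPort{{Protocol: "TCP", Port: 6379}},
		}},
	}
	fmt.Printf("%+v\n", policy)
}
```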
Automatic merge from submit-queue
Adding support objects for integrating the dynamic client with the kubectl builder
Kubectl will try to decode into `runtime.VersionedObjects`, so the `UnstructuredJSONScheme` needs to handle that intelligently.
Kubectl's builder also needs a `meta.RESTMapper` and a `runtime.Typer`. The `meta.RESTMapper` requires a `runtime.ObjectConvertor` that works with `runtime.Unstructured`. The mapper and typer require discovery info, so I just put that in the kubectl util package, since it didn't really seem to fit anywhere else.
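A minimal sketch of the `VersionedObjects` special case described above, using stand-in types rather than the real `runtime` package:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the real runtime types, for illustration only.
type Unstructured struct {
	Object map[string]interface{}
}

// A VersionedObjects-style wrapper: decoders append each representation
// they produce, and callers take the last entry as the final object.
type VersionedObjects struct {
	Objects []interface{}
}

// decode sketches how a JSON scheme might special-case VersionedObjects:
// if the caller passed one, decode into an Unstructured and append it,
// rather than failing on the unknown wrapper type.
func decode(data []byte, into interface{}) (interface{}, error) {
	u := &Unstructured{}
	if err := json.Unmarshal(data, &u.Object); err != nil {
		return nil, err
	}
	if v, ok := into.(*VersionedObjects); ok {
		v.Objects = append(v.Objects, u)
		return v, nil
	}
	return u, nil
}

func main() {
	into := &VersionedObjects{}
	obj, err := decode([]byte(`{"kind":"Pod","apiVersion":"v1"}`), into)
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded %d object(s): %+v\n", len(into.Objects), obj)
}
```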
Subsequent PRs will be using these in kubectl.
cc @kubernetes/sig-api-machinery @smarterclayton @liggitt @lavalamp
Automatic merge from submit-queue
Add support for PersistentVolumeClaim in Attacher/Detacher interface
The attach/detach interface does not support volumes that are referenced through PVCs. This PR adds that support.
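A hedged sketch of the extra dereferencing step, with stand-in types instead of the real API and volume plugin interfaces:
```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in types; the real code works with api.PersistentVolumeClaim and
// the volume plugin's spec type.
type Claim struct{ Name, BoundVolumeName string }
type Volume struct{ Name, DiskPath string }

// resolvePVC shows the idea: when a pod references a volume through a PVC,
// dereference the claim to the bound PV before handing the spec to the
// Attacher, so attach proceeds the same way as for inline volumes.
func resolvePVC(claim *Claim, pvs map[string]*Volume) (*Volume, error) {
	if claim.BoundVolumeName == "" {
		return nil, errors.New("claim is not bound yet")
	}
	pv, ok := pvs[claim.BoundVolumeName]
	if !ok {
		return nil, fmt.Errorf("bound PV %q not found", claim.BoundVolumeName)
	}
	return pv, nil
}

func main() {
	pvs := map[string]*Volume{"pv-1": {Name: "pv-1", DiskPath: "/dev/disk-1"}}
	pv, err := resolvePVC(&Claim{Name: "my-claim", BoundVolumeName: "pv-1"}, pvs)
	if err != nil {
		panic(err)
	}
	fmt.Println("attaching", pv.DiskPath)
}
```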
Automatic merge from submit-queue
Only expose top N images in `NodeStatus`
Fixes #25209
Sort the images by size and report only the 50 largest in the node status.
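The gist, as a small self-contained sketch (the kubelet's real code works with the node API's container image type and caps at N = 50):
```go
package main

import (
	"fmt"
	"sort"
)

// Image is a stand-in for the container image info reported in NodeStatus.
type Image struct {
	Name string
	Size int64
}

// topImages sorts images by size (largest first) and keeps at most n,
// mirroring the capping this PR applies to NodeStatus.
func topImages(images []Image, n int) []Image {
	sort.Slice(images, func(i, j int) bool { return images[i].Size > images[j].Size })
	if len(images) > n {
		images = images[:n]
	}
	return images
}

func main() {
	images := []Image{{"redis", 100}, {"nginx", 300}, {"busybox", 5}}
	fmt.Println(topImages(images, 2)) // [{nginx 300} {redis 100}]
}
```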
cc @vishh
Automatic merge from submit-queue
Extend secrets volumes with path control
As per [1], this PR extends secrets mapped into volumes with:
* a key-to-path mapping, the same way as for ConfigMap, e.g.
```
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "mypod",
    "namespace": "default"
  },
  "spec": {
    "containers": [{
      "name": "mypod",
      "image": "redis",
      "volumeMounts": [{
        "name": "foo",
        "mountPath": "/etc/foo",
        "readOnly": true
      }]
    }],
    "volumes": [{
      "name": "foo",
      "secret": {
        "secretName": "mysecret",
        "items": [{
          "key": "username",
          "path": "my-username"
        }]
      }
    }]
  }
}
```
Here the added ``spec.volumes[0].secret.items`` field changes the original target ``/etc/foo/username`` to ``/etc/foo/my-username``.
* secondly, refactoring the ``pkg/volumes/secrets/secrets.go`` volume plugin to use ``AtomicWriter`` to project a secret into files (sketched below).
[1] https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md#changes-to-secret
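For context, here is a minimal sketch of the symlink-swap technique that makes such a projection atomic; the directory and link names are illustrative, not the exact ones the real AtomicWriter uses:
```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
	"time"
)

// writeAtomically writes the new payload into a fresh timestamped directory,
// then atomically repoints a "..data" symlink at it, so consumers never
// observe a half-written set of files.
func writeAtomically(targetDir string, payload map[string][]byte) error {
	tsDir := filepath.Join(targetDir, "..ts-"+time.Now().Format("20060102-150405.000"))
	if err := os.MkdirAll(tsDir, 0755); err != nil {
		return err
	}
	for path, data := range payload {
		if err := ioutil.WriteFile(filepath.Join(tsDir, path), data, 0644); err != nil {
			return err
		}
	}
	// Build the new link under a temporary name, then rename it into place:
	// rename(2) is atomic, so "..data" always points at a complete payload.
	tmpLink := filepath.Join(targetDir, "..data_tmp")
	dataLink := filepath.Join(targetDir, "..data")
	os.Remove(tmpLink)
	if err := os.Symlink(filepath.Base(tsDir), tmpLink); err != nil {
		return err
	}
	return os.Rename(tmpLink, dataLink)
}

func main() {
	dir, _ := ioutil.TempDir("", "secret-volume")
	if err := writeAtomically(dir, map[string][]byte{"my-username": []byte("admin")}); err != nil {
		panic(err)
	}
	data, _ := ioutil.ReadFile(filepath.Join(dir, "..data", "my-username"))
	fmt.Printf("%s\n", data)
}
```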
Automatic merge from submit-queue
volume recycler: Don't start a new recycler pod if one already exists.
Recycling is a long-running process, and when the recycler controller is restarted in the meantime, it should not start a new recycler pod if one is already running.
This means that the recycler pod must have a deterministic name based on the name of the recycled PV; we then get a name conflict when trying to create a second pod.
Two things need to be changed:
- recycler controller and recycler plugins must pass the PV.Name down to the place where the pod is created. This is most of the patch and should be pretty straightforward.
- create the recycler pod with a deterministic name and check for the "already exists" error (see the sketch below).
While at it, remove the useless 'resourceVersion' argument and make log messages start with lowercase.
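A sketch of the second change, with a stand-in for the API client and its "already exists" error; the exact pod-name prefix is illustrative:
```go
package main

import (
	"errors"
	"fmt"
)

// errAlreadyExists stands in for the apiserver's "already exists" error
// (the real code checks it with the API errors helpers).
var errAlreadyExists = errors.New("already exists")

// createRecyclerPod derives the pod name from the PV name and treats an
// "already exists" conflict as "a recycler pod is already running" rather
// than as a failure. create stands in for the API client call.
func createRecyclerPod(pvName string, create func(podName string) error) error {
	podName := "recycler-for-" + pvName // deterministic: one recycler per PV
	if err := create(podName); err != nil {
		if errors.Is(err, errAlreadyExists) {
			fmt.Printf("recycler pod %s already exists, not starting a new one\n", podName)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	running := map[string]bool{}
	create := func(name string) error {
		if running[name] {
			return errAlreadyExists
		}
		running[name] = true
		return nil
	}
	// The second call simulates a restarted controller finding the pod running.
	_ = createRecyclerPod("pv-1", create)
	_ = createRecyclerPod("pv-1", create)
}
```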
There is a unit test to check the behavior, plus an e2e test that checks that regular recycling is not broken (it does not try to run two recycler pods in parallel, as the recycler is single-threaded now).
Automatic merge from submit-queue
Updating QoS policy to be at the pod level
Quality of Service will be derived from the entire Pod spec, instead of from the resource specifications of individual containers.
A Pod is `Guaranteed` iff all its containers have limits == requests for all the first-class resources (cpu, memory as of now).
A Pod is `BestEffort` iff requests & limits are not specified for any resource across all containers.
A Pod is `Burstable` otherwise.
Note: Existing pods might be more susceptible to OOM Kills on the node due to this PR! To protect pods from being OOM killed on the node, set `limits` for all resources across all containers in a pod.
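The classification rules above, as a self-contained sketch (stand-in types; the request-defaulting behavior of the real API is elided):
```go
package main

import "fmt"

// Resources is a stand-in for a container's resource spec; zero means unset.
type Resources struct {
	RequestCPU, LimitCPU, RequestMem, LimitMem int64
}

// qosClass applies the pod-level rules for the two first-class resources
// (CPU and memory): Guaranteed iff every container has limits == requests
// for both, BestEffort iff nothing is specified anywhere, else Burstable.
func qosClass(containers []Resources) string {
	guaranteed, besteffort := true, true
	for _, r := range containers {
		if r.RequestCPU != 0 || r.LimitCPU != 0 || r.RequestMem != 0 || r.LimitMem != 0 {
			besteffort = false
		}
		if r.LimitCPU == 0 || r.LimitCPU != r.RequestCPU ||
			r.LimitMem == 0 || r.LimitMem != r.RequestMem {
			guaranteed = false
		}
	}
	switch {
	case guaranteed:
		return "Guaranteed"
	case besteffort:
		return "BestEffort"
	default:
		return "Burstable"
	}
}

func main() {
	fmt.Println(qosClass([]Resources{{RequestCPU: 1, LimitCPU: 1, RequestMem: 100, LimitMem: 100}})) // Guaranteed
	fmt.Println(qosClass([]Resources{{}}))                                                           // BestEffort
	fmt.Println(qosClass([]Resources{{RequestCPU: 1, LimitCPU: 2}}))                                 // Burstable
}
```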
Automatic merge from submit-queue
Check status of framework.CheckPodsRunningReady
Check the status returned by framework.CheckPodsRunningReady and fail the test if it's false, instead of silently ignoring the failure.
This doesn't fix whatever is causing the pod not to start in #17523, but it does fail the test as soon as it detects that the pod didn't start, instead of allowing the testing to proceed.
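The fix in a nutshell, with a stand-in for the framework helper (the real `framework.CheckPodsRunningReady` polls the API server and takes a client; the signature is abbreviated here):
```go
package main

import (
	"fmt"
	"os"
	"time"
)

// checkPodsRunningReady is a stand-in with the same shape as the e2e
// framework helper: it reports readiness as a bool instead of an error.
func checkPodsRunningReady(ns string, podNames []string, timeout time.Duration) bool {
	// ... poll the pods here; stubbed out for the sketch ...
	return false
}

func main() {
	// The pattern this PR applies: act on the returned status and fail
	// loudly, instead of discarding it and letting the test continue.
	if !checkPodsRunningReady("e2e-tests", []string{"pod-0"}, 5*time.Minute) {
		fmt.Println("FAIL: pods did not become running and ready")
		os.Exit(1) // in the real tests this is framework.Failf(...)
	}
}
```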
cc @kubernetes/sig-testing @spxtr @ixdy @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Updating CentOS image, adding heat back to the required cli tools.
Updated the CentOS cloud image to the latest available, and also added heat to the required list of CLI tools. This is an interim step toward replacing all the commands with openstackclient.