This change adds a container manager inside the dockershim to move the docker daemon
and associated processes to a specified cgroup. The original kubelet container
manager will continue checking the name of the cgroup, so that the kubelet knows how
to report runtime stats.
Automatic merge from submit-queue
Eviction manager evicts based on inode consumption
Fixes: #32526. Integrate cAdvisor per-container inode stats into the summary API. Make the eviction manager act based on inode consumption to evict pods using the most inodes.
This PR is pending a cAdvisor godeps update, which will be included in PR #35136
Neutron's API ignores unknown parameters. When listing pools etc., Kubernetes
attempts to filter on "LoadBalancerID", which is not a valid filter.
As such, it is ignored by Neutron, and a list of all pools is
returned. Kubernetes then proceeds to update each of the pools.
Instead, we now double-check that the resources really belong to the LB
we're trying to update.
Automatic merge from submit-queue
Add tooling to generate listers
Add lister-gen tool to auto-generate listers. So far this PR only demonstrates replacing the manually-written `StoreToLimitRangeLister` with the generated `LimitRangeLister`, as it's a small and easy swap.
cc @deads2k @liggitt @sttts @nikhiljindal @lavalamp @smarterclayton @derekwaynecarr @kubernetes/sig-api-machinery @kubernetes/rh-cluster-infra
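For context, a generated lister roughly has this shape (a sketch following the generated-code convention; the import paths shown are today's client-go layout, while the code in this PR lives under the in-tree client packages):
```go
package listers

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// LimitRangeLister lists LimitRanges from a shared informer's cache.
type LimitRangeLister interface {
	// List lists all LimitRanges matching the selector across namespaces.
	List(selector labels.Selector) ([]*v1.LimitRange, error)
	// LimitRanges returns a lister scoped to the given namespace.
	LimitRanges(namespace string) LimitRangeNamespaceLister
}

// LimitRangeNamespaceLister lists and gets LimitRanges in one namespace.
type LimitRangeNamespaceLister interface {
	List(selector labels.Selector) ([]*v1.LimitRange, error)
	Get(name string) (*v1.LimitRange, error)
}
```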
Automatic merge from submit-queue
Only set sysctls for infra containers
Previously, we set the sysctls for each container in a pod. This opened up a way to set un-whitelisted sysctls during an upgrade from v1.3:
- set an annotation in v1.3 with an un-whitelisted sysctl and set restartPolicy=Always
- upgrade the cluster to v1.4
- kill the container process
- the un-whitelisted sysctl is set on restart of the killed container.
Automatic merge from submit-queue
remove non-reusable bits of MasterServer
Scrub `master.go` again. I think I'm pretty happy with this shape. I may promote `InstallAPIs` since we're likely to want it downstream.
Automatic merge from submit-queue
SELinux Overhaul
Overhauls handling of SELinux in Kubernetes. TLDR: Kubelet dir no longer has to be labeled `svirt_sandbox_file_t`.
Fixes #33351 and #33510. Implements #33951.
Automatic merge from submit-queue
Correct the article in generated documents
**What this PR does / why we need it**:
Fix the article in generated docs for "create/delete [article] [kind]"
**Which issue this PR fixes**
fixes #32305
**Special notes for your reviewer**:
None
**Release note**:
``` release-note
Correct the article in generated documents
```
For example:
"a Ingress" > "an Ingress"
Automatic merge from submit-queue
Alternative unsafe copy
Have run this for 2 hours in the stresser without an error (no guarantee).
@wojtek-t can we do a 500 kubemark run with this prior to merge?
Automatic merge from submit-queue
Allow apiserver to choose preferred kubelet address type
Follow up to #33718 to stay compatible with clusters using DNS names for master->node communications. Adds the `--kubelet-preferred-address-types` apiserver flag for clusters that prefer a different node address type.
```release-note
The apiserver can now select which type of kubelet-reported address to use for master->node communications, using the --kubelet-preferred-address-types flag.
```
Automatic merge from submit-queue
quota controller uses informers if available for pod calculation
This PR does the following:
1. plumb the informer factory into the quota registry and evaluators
2. pod quota evaluator uses informers for determining aggregate usage instead of making direct calls
3. admission code path does not use informers because
   1. we do not want to add new watches in the apiserver
   2. the admission code path does not require aggregate usage calculation
As a result, quota controller is much faster in re-calculating quota usage when it observes a pod deletion.
Follow-on PRs will make similar changes for other informer backed resources (pvcs next).
/cc @deads2k @mfojtik @smarterclayton @kubernetes/rh-cluster-infra
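For reference, a minimal sketch of the informer-backed pattern (not the controller's actual plumbing): it uses today's client-go packages, which postdate this PR, and a placeholder kubeconfig path and namespace:
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// A shared informer factory lets the evaluator read pods from an
	// in-memory cache instead of issuing a LIST against the apiserver on
	// every quota recalculation.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	podLister := factory.Core().V1().Pods().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Aggregate usage for a namespace straight from the cache.
	pods, err := podLister.Pods("default").List(labels.Everything())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods counted toward quota in %q: %d\n", "default", len(pods))
}
```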
Automatic merge from submit-queue
Initialize CIDR allocator before registering handle functions
Currently we start shared informers after everything is already created, but this change makes it future-proof.
cc @davidopp @kevin-wangzefeng @foxish
Automatic merge from submit-queue
update list of available resources
Hi,
kubectl get --help produce a list of resource types and aliases :
```
Valid resource types include:
* clusters (valid only for federation apiservers)
* componentstatuses (aka 'cs')
...
```
``` release-note
Update the list of resources in kubectl get --help
```
The list is currently outdated (for example, it is missing networkpolicy).
http://kubernetes.io/docs/user-guide/kubectl-overview/#resource-types has the same data and is also outdated.
The patch updates both lists.
Automatic merge from submit-queue
Misc master and federation cleanups
- misc small cleanups
- make ServerRunOption embeddings explicit in order to make the technical debt in our plumbing code visible.
Automatic merge from submit-queue
Remove stale volumes if endpoint/svc creation fails.
Remove stale volumes if endpoint/svc creation fails.
Signed-off-by: Humble Chirammal hchiramm@redhat.com
Automatic merge from submit-queue
add kubectl cp
Implements `kubectl cp` (https://github.com/kubernetes/kubernetes/issues/13776)
Syntax examples:
```sh
# Copy from pod to local machine
$ kubectl cp [namespace/]pod:/some/file/or/dir ./some/local/file/or/dir
# Copy from local machine to pod
$ kubectl cp /some/local/file/or/dir [namespace/]pod:/some/remote/file/or/dir
```
@deads2k @smarterclayton @kubernetes/sig-cli
Automatic merge from submit-queue
Better kubectl run validations
Adds more validations to flags that must be mutually exclusive in `kubectl run`. For example, `--dry-run` must not be used with `--attach`, `--stdin` or `--tty`. Adds unit tests for these new validations and some previously existing ones.
**Release note**:
<!-- Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access)
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`.
-->
```release-note
NONE
```
Automatic merge from submit-queue
fixed some issues with kubectl set resources
When using `kubectl set resources`, it reset all resource fields that were not being set.
For example
# kubectl set resources deployments nginx --limits=cpu=100m
followed by
# kubectl set resources deployments nginx --limits=memory=256Mi
would result in the nginx deployment only limiting memory at 256Mi, with the previous
limit placed on the CPU being wiped out. This behavior is corrected so that each invocation
only modifies the fields set in that command, and the tests were changed so that the desired
behavior is checked.
Also fixed a typo:
you must specify an update to requests or limits or (in the form of --requests/--limits)
corrected to
you must specify an update to requests or limits (in the form of --requests/--limits)
Implemented both the dry-run and local flags.
Added test cases to show that both flags are operating as intended.
Removed the print statement "running in local mode" as in PR #35112
Automatic merge from submit-queue
Remove Job also from .status.active for Replace strategy
When iterating over the list of Jobs, we remove each of them when the strategy is Replace. Unfortunately, the job reference was not removed from `.status.active`, which caused the controller to try to remove it once again during the next run and fail removing what was already removed during the previous run. This was caused by not removing the reference previously. This PR fixes that and cleans up the logs a bit in that controller.
@erictune fyi
@janetkuo ptal
Automatic merge from submit-queue
Update drain test
Update how an int is converted to a string in the kubectl drain test.
It is safer to use `strconv.Itoa()` than `string()`.
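A minimal illustration of the difference, not taken from the PR itself:
```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	i := 65
	// Converting an int via string(rune(i)) yields the character for that
	// Unicode code point, not the decimal digits.
	fmt.Println(string(rune(i))) // "A"
	// strconv.Itoa gives the decimal representation, which is what the
	// drain test actually wants.
	fmt.Println(strconv.Itoa(i)) // "65"
}
```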
Automatic merge from submit-queue
Require PV provisioner secrets to match type
In 1.5, PV provisioners allow targeting namespaced secrets via StorageClass params. This adds a requirement that those secrets' type match the volume provisioner plugin name, to prevent targeting and extraction of arbitrary secrets
Helps limit secret targeting issues mentioned in https://github.com/kubernetes/kubernetes/issues/34822
Automatic merge from submit-queue
Add "PrintErrorWithCauses" cmdutil helper
**Release note**:
```release-note
NONE
```
This patch adds a new helper function to `cmd/util/helpers.go` that
handles errors containing collections of causes and prints each cause on
a separate line.
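A rough, hypothetical sketch of the idea (not the PR's actual implementation), assuming an apimachinery status error that carries causes; the import paths are today's apimachinery layout:
```go
package main

import (
	"fmt"
	"io"
	"os"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// printErrorWithCauses is a hypothetical stand-in for the new helper: if the
// error carries a Status with individual causes, print each cause on its own
// line; otherwise fall back to the plain error message.
func printErrorWithCauses(w io.Writer, err error) {
	if status, ok := err.(apierrors.APIStatus); ok {
		s := status.Status()
		fmt.Fprintf(w, "error: %s\n", s.Message)
		if s.Details != nil {
			for _, cause := range s.Details.Causes {
				fmt.Fprintf(w, "  * %s: %s\n", cause.Field, cause.Message)
			}
		}
		return
	}
	fmt.Fprintf(w, "error: %v\n", err)
}

func main() {
	printErrorWithCauses(os.Stderr, fmt.Errorf("example error"))
}
```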
Automatic merge from submit-queue
Verify and update client-go staging area for every PR
We need to keep the staging area up-to-date to prevent PRs from breaking client-go.
It's marked as "WIP" because we need to decide the [versioning strategy](https://github.com/kubernetes/client-go/issues/9) for client-go first. This PR contains breaking changes for client-go.
This is blocking #29934 and potentially #34441
cc @kubernetes/sig-api-machinery
Automatic merge from submit-queue
Let release_1_5 clientset include multiple versions of a group
Fixes #35237
This PR makes the versioned clientset include multiple versions of a group. Currently only `batch` has `v1` and `v2alpha1`. The clientset interface now looks like:
```go
BatchV2alpha1() v2alpha1batch.BatchV2alpha1Interface
BatchV1() v1batch.BatchV1Interface
// Deprecated: please explicitly pick a version if possible.
Batch() v1batch.BatchV1Interface
```
Commit "update client-gen to say internalversion rather than unversioned" fixes https://github.com/kubernetes/kubernetes/issues/24481.
cc @kubernetes/sig-api-machinery @soltysh @deads2k @nikhiljindal
```release-note
release_1_5 clientset supports multiple versions of a group.
```
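For example, a caller could pick a version explicitly (a sketch against the release_1_5 clientset generated here; the package path and namespace are assumptions):
```go
package example

import (
	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/release_1_5"
)

// listBothVersions shows the explicit-version accessors; the deprecated
// Batch() accessor still returns the v1 interface.
func listBothVersions(cs clientset.Interface) {
	v1Jobs := cs.BatchV1().Jobs("default")
	alphaJobs := cs.BatchV2alpha1().Jobs("default")
	_, _ = v1Jobs, alphaJobs
}
```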
Automatic merge from submit-queue
Make overlapping deployments deletable
@kubernetes/deployment ptal
Fixes https://github.com/kubernetes/kubernetes/issues/34466 by 1) not adding the overlapping annotation in the working deployment, 2) updating observedGeneration for overlapping deployments, and 3) updating the kubectl deployment reaper to do non-cascading deletion for deployments with the overlapping annotation.
Automatic merge from submit-queue
Add boilerplate to `kubectl completion bash`
**What this PR does / why we need it**:
Small refactor to make kubectl bash and zsh completion share
boilerplate. Previously the boilerplate was not included in the bash
script.
Automatic merge from submit-queue
support editing before creating resource
Support `kubectl create -f config.yaml --edit`
Support editing before creating resources from files, URLs and stdin.
The behavior is similar to `kubectl edit`.
It won't create anything when the edit makes no change.
partial: #18064
Based on: #33686 and #33973
```release-note
Support editing before creating resources from files, URLs and stdin, e.g. `kubectl create -f config.yaml --edit`
It won't create anything when the edit makes no change.
```
Automatic merge from submit-queue
convert SA controller to shared informers
convert the SA controller to shared informer + workqueue.
I think one of @derekwaynecarr @ncdc or @liggitt
Automatic merge from submit-queue
Implement streaming CRI methods in dockershim
*NOTE: Temporarily includes commit from https://github.com/kubernetes/kubernetes/pull/35330 - only review the second commit.*
Builds on https://github.com/kubernetes/kubernetes/pull/35330, using the library to implement the streaming methods in various CRI shims.
This does not actually wire up the new streaming methods in the kubelet (that will be my next PR). Once the new methods are wired up, I will delete the `Legacy{Exec,Attach,PortForward}` methods.
/cc @kubernetes/sig-node @feiskyer
Automatic merge from submit-queue
allow authentication through a front-proxy
This allows a front proxy to set a request header and have that be a valid `user.Info` in the authentication chain. To secure this power, a client certificate may be used to confirm the identity of the front proxy.
@kubernetes/sig-auth fyi
@erictune per-request
@liggitt you wrote the openshift one, ptal.
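For illustration, a minimal sketch of what the front proxy does on each request (standard library only; the file paths and URL are placeholders, and X-Remote-User is the conventional default header name, configurable on the apiserver):
```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// The proxy authenticates the real user itself, then forwards the
	// request with the user identity carried in a header.
	cert, err := tls.LoadX509KeyPair("front-proxy.crt", "front-proxy.key") // placeholder paths
	if err != nil {
		panic(err)
	}
	client := &http.Client{
		Transport: &http.Transport{
			// The proxy's client certificate is what lets the apiserver
			// trust the identity headers; ordinary callers cannot spoof them.
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		},
	}
	req, err := http.NewRequest("GET", "https://apiserver:6443/api/v1/namespaces", nil) // placeholder URL
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Remote-User", "jane")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```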
Automatic merge from submit-queue
Implement package `triple` with utilities to generate certificate-key pairs for CA, server and clients.
Please review only the last commit here. This is based on PR #35592, which will be reviewed independently.
Design Doc: PR #34484
cc @kubernetes/sig-cluster-federation @quinton-hoole @mwielgus
Automatic merge from submit-queue
Simplify negotiation in server in preparation for multi version support
This is a pre-factor for #33900 to simplify runtime.NegotiatedSerializer, tighten up a few abstractions that may break when clients can request different client versions, and pave the way for better negotiation.
View this as pure simplification.
Automatic merge from submit-queue
Fix cadvisor_unsupported and the crossbuild
Resolves a bug in the `cadvisor_unsupported.go` code.
Fixes https://github.com/kubernetes/kubernetes/issues/35735
Introduced by: https://github.com/kubernetes/kubernetes/pull/35136
We should consider cherry-picking this, as #35136 was also cherry-picked
cc @kubernetes/sig-testing @vishh @dashpole @jessfraz
```release-note
Fix cadvisor_unsupported and the crossbuild
```
Automatic merge from submit-queue
[PHASE 1] Opaque integer resource accounting.
## [PHASE 1] Opaque integer resource accounting.
This change provides a simple way to advertise some amount of arbitrary countable resource for a node in a Kubernetes cluster. Users can consume these resources by including them in pod specs, and the scheduler takes them into account when placing pods on nodes. See the example at the bottom of the PR description for more info.
Summary of changes:
- Defines opaque integer resources as any resource with the prefix `pod.alpha.kubernetes.io/opaque-int-resource-`.
- Prevents the kubelet from overwriting capacity.
- Handles opaque resources in the scheduler.
- Validates the integer-ness of opaque int quantities in the API server.
- Tests for the above.
Feature issue: https://github.com/kubernetes/features/issues/76
Design: http://goo.gl/IoKYP1
Issues:
kubernetes/kubernetes#28312, kubernetes/kubernetes#19082
Related:
kubernetes/kubernetes#19080
CC @davidopp @timothysc @balajismaniam
**Release note**:
<!-- Steps to write your release note:
1. Use the release-note-* labels to set the release note state (if you have access)
2. Enter your extended release note in the below block; leaving it blank means using the PR title as the release note. If no release note is required, just write `NONE`.
-->
```release-note
Added support for accounting opaque integer resources.
Allows cluster operators to advertise new node-level resources that would be
otherwise unknown to Kubernetes. Users can consume these resources in pod
specs just like CPU and memory. The scheduler takes care of the resource
accounting so that no more than the available amount is simultaneously
allocated to pods.
```
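As a quick illustration before the full walkthrough below, a pod built in Go could request the new resource like this (a sketch, not from this PR; the package paths are today's k8s.io/api and apimachinery modules, which postdate this change, and the quantities mirror the chimp example):
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The opaque resource is addressed purely by its prefixed name; the
	// scheduler treats it like any other countable resource.
	bananas := v1.ResourceName("pod.alpha.kubernetes.io/opaque-int-resource-bananas")

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "chimp"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "nginx",
				Image: "nginx:1.10",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{bananas: resource.MustParse("1")},
					Limits:   v1.ResourceList{bananas: resource.MustParse("3")},
				},
			}},
		},
	}
	q := pod.Spec.Containers[0].Resources.Requests[bananas]
	fmt.Printf("%s requests %s bananas\n", pod.Name, q.String())
}
```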
## Usage example
```sh
$ echo '[{"op": "add", "path": "/status/capacity/pod.alpha.kubernetes.io~1opaque-int-resource-bananas", "value": "555"}]' | \
> http PATCH http://localhost:8080/api/v1/nodes/localhost.localdomain/status \
> Content-Type:application/json-patch+json
```
```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 11 Aug 2016 16:44:55 GMT
Transfer-Encoding: chunked
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"annotations": {
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
},
"creationTimestamp": "2016-07-12T04:07:43Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/hostname": "localhost.localdomain"
},
"name": "localhost.localdomain",
"resourceVersion": "12837",
"selfLink": "/api/v1/nodes/localhost.localdomain/status",
"uid": "2ee9ea1c-47e6-11e6-9fb4-525400659b2e"
},
"spec": {
"externalID": "localhost.localdomain"
},
"status": {
"addresses": [
{
"address": "10.0.2.15",
"type": "LegacyHostIP"
},
{
"address": "10.0.2.15",
"type": "InternalIP"
}
],
"allocatable": {
"alpha.kubernetes.io/nvidia-gpu": "0",
"cpu": "2",
"memory": "8175808Ki",
"pods": "110"
},
"capacity": {
"alpha.kubernetes.io/nvidia-gpu": "0",
"pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
"cpu": "2",
"memory": "8175808Ki",
"pods": "110"
},
"conditions": [
{
"lastHeartbeatTime": "2016-08-11T16:44:47Z",
"lastTransitionTime": "2016-07-12T04:07:43Z",
"message": "kubelet has sufficient disk space available",
"reason": "KubeletHasSufficientDisk",
"status": "False",
"type": "OutOfDisk"
},
{
"lastHeartbeatTime": "2016-08-11T16:44:47Z",
"lastTransitionTime": "2016-07-12T04:07:43Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2016-08-11T16:44:47Z",
"lastTransitionTime": "2016-08-10T06:27:11Z",
"message": "kubelet is posting ready status",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
},
{
"lastHeartbeatTime": "2016-08-11T16:44:47Z",
"lastTransitionTime": "2016-08-10T06:27:01Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
}
],
"daemonEndpoints": {
"kubeletEndpoint": {
"Port": 10250
}
},
"images": [],
"nodeInfo": {
"architecture": "amd64",
"bootID": "1f7e95ca-a4c2-490e-8ca2-6621ae1eb5f0",
"containerRuntimeVersion": "docker://1.10.3",
"kernelVersion": "4.5.7-202.fc23.x86_64",
"kubeProxyVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
"kubeletVersion": "v1.3.0-alpha.4.4285+7e4b86c96110d3-dirty",
"machineID": "cac4063395254bc89d06af5d05322453",
"operatingSystem": "linux",
"osImage": "Fedora 23 (Cloud Edition)",
"systemUUID": "D6EE0782-5DEB-4465-B35D-E54190C5EE96"
}
}
}
```
After patching, the kubelet's next sync fills in allocatable:
```
$ kubectl get node localhost.localdomain -o json | jq .status.allocatable
```
```json
{
"alpha.kubernetes.io/nvidia-gpu": "0",
"pod.alpha.kubernetes.io/opaque-int-resource-bananas": "555",
"cpu": "2",
"memory": "8175808Ki",
"pods": "110"
}
```
Create two pods, one that needs a single banana and another that needs a truck load:
```
$ kubectl create -f chimp.yaml
$ kubectl create -f superchimp.yaml
```
Inspect the scheduler result and pod status:
```
$ kubectl describe pods chimp
Name: chimp
Namespace: default
Node: localhost.localdomain/10.0.2.15
Start Time: Thu, 11 Aug 2016 19:58:46 +0000
Labels: <none>
Status: Running
IP: 172.17.0.2
Controllers: <none>
Containers:
nginx:
Container ID: docker://46ff268f2f9217c59cc49f97cc4f0f085d5ac0e251f508cc08938601117c0cec
Image: nginx:1.10
Image ID: docker://sha256:82e97a2b0390a20107ab1310dea17f539ff6034438099384998fd91fc540b128
Port: 80/TCP
Limits:
cpu: 500m
memory: 64Mi
pod.alpha.kubernetes.io/opaque-int-resource-bananas: 3
Requests:
cpu: 250m
memory: 32Mi
pod.alpha.kubernetes.io/opaque-int-resource-bananas: 1
State: Running
Started: Thu, 11 Aug 2016 19:58:51 +0000
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: Burstable
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
9m 9m 1 {default-scheduler } Normal Scheduled Successfully assigned chimp to localhost.localdomain
9m 9m 2 {kubelet localhost.localdomain} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
9m 9m 1 {kubelet localhost.localdomain} spec.containers{nginx} Normal Pulled Container image "nginx:1.10" already present on machine
9m 9m 1 {kubelet localhost.localdomain} spec.containers{nginx} Normal Created Created container with docker id 46ff268f2f92
9m 9m 1 {kubelet localhost.localdomain} spec.containers{nginx} Normal Started Started container with docker id 46ff268f2f92
```
```
$ kubectl describe pods superchimp
Name: superchimp
Namespace: default
Node: /
Labels: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
nginx:
Image: nginx:1.10
Port: 80/TCP
Requests:
cpu: 250m
memory: 32Mi
pod.alpha.kubernetes.io/opaque-int-resource-bananas: 10Ki
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
PodScheduled False
No volumes.
QoS Class: Burstable
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3m 1s 15 {default-scheduler } Warning FailedScheduling pod (superchimp) failed to fit in any node
fit failure on node (localhost.localdomain): Insufficient pod.alpha.kubernetes.io/opaque-int-resource-bananas
```
Automatic merge from submit-queue
Add sync state loop in master's volume reconciler
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (the volume is
detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.
This PR adds logic to periodically sync up the truth for attached volumes
kept in the actual state cache. If a volume is no longer attached to the node,
the actual state will be updated to reflect the truth. In turn, the
reconciler will take action if needed.
To avoid issuing many concurrent operations on the cloud provider, this PR
adds a batch operation to check whether a list of volumes is attached to the
node, instead of one request per volume.
- Prevents kubelet from overwriting capacity during sync.
- Handles opaque integer resources in the scheduler.
- Adds scheduler predicate tests for opaque resources.
- Validates opaque int resources:
- Ensures supplied opaque int quantities in node capacity,
node allocatable, pod request and pod limit are integers.
- Adds tests for new validation logic (node update and pod spec).
- Adds e2e tests for opaque integer resources.
Automatic merge from submit-queue
Moving some force deletion logic from the NC into the PodGC
**What this PR does / why we need it**: Moves some pod force-deletion behavior into the PodGC, which is a better place for it.
This should be a NOP because we're just moving functionality
around and, thanks to #35476, the PodGC controller should always
run.
Related: https://github.com/kubernetes/kubernetes/pull/34160, https://github.com/kubernetes/kubernetes/issues/35145
cc @gmarek @kubernetes/sig-apps
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (the volume is
detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.
This PR adds logic to periodically sync up the truth for attached volumes kept in the actual state cache. If a volume is no longer attached to the node, the actual state will be updated to reflect the truth. In turn, the reconciler will take action if needed.
To avoid issuing many concurrent operations on the cloud provider, this PR
adds a batch operation to check whether a list of volumes is attached to the
node, instead of one request per volume.
More details are explained in PR #33760
Alter how runtime.SerializeInfo is represented to simplify negotiation
and reduce the need to allocate during negotiation. Simplify the dynamic
client's logic around negotiating type. Add more tests for media type
handling where necessary.
This patch adds a new helper function to cmd/util/helpers.go that
handles errors containing collections of causes and prints each cause on
a separate line.
Automatic merge from submit-queue
Fixes for golint tip
golint as of
3390df4df2/lint.go (L1440-L1442)
requires that any function that has a context.Context argument have said
argument in the first position. This commit fixes the one function we had where
it wasn't.
cc @timothysc @wojtek-t @smarterclayton @jessfraz @kubernetes/rh-cluster-infra @liggitt @sttts @deads2k @eparis
When using `kubectl set resources`, it reset all resource fields that were not being set.
For example
# kubectl set resources deployments nginx --limits=cpu=100m
followed by
# kubectl set resources deployments nginx --limits=memory=256Mi
would result in the nginx deployment only limiting memory at 256Mi, with the previous
limit placed on the CPU being wiped out. This behavior is corrected so that each invocation
only modifies the fields set in that command, and the tests were changed so that the desired
behavior is checked.
Also fixed a typo:
you must specify an update to requests or limits or (in the form of --requests/--limits)
corrected to
you must specify an update to requests or limits (in the form of --requests/--limits)
changelog:
- fixed a typo in hack/make-rules/test-cmd.sh "effecting" to "affecting"
golint as of
3390df4df2/lint.go (L1440-L1442)
requires that any function that has a context.Context argument have said
argument in the first position. This commit fixes the one function we had where
it wasn't.
Automatic merge from submit-queue
remove the non-generated client
Removes the non-generated client from kube. The package has a few methods left, but nothing that needs updating when adding new groups.
@ingvagabund
Automatic merge from submit-queue
Adding a root filesystem override for kubelet mounter
This is necessary to get hostPath volumes to work with the containerized kubelet mounter.
Automatic merge from submit-queue
Fix potential panic in namespace controller when rapidly create/delet…
Fixes https://github.com/kubernetes/kubernetes/issues/33676
The theory is this could occur in either of the following scenarios:
1. An HA environment where a GET goes to a different API server than the one the WATCH was read from.
1. A many-controller scenario (i.e. where multiple finalizers participate): a namespace that is created and deleted with the same name could trip up the other namespace controller into seeing a namespace with the same name that was not actually in a delete state. Added checks to verify the UID matches across retry operations.
/cc @liggitt @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
First pass at CRI stream server library implementation
This is a first pass at implementing a library for serving attach/exec/portforward calls from a CRI shim process as discussed in [CRI Streaming Requests](https://docs.google.com/document/d/1OE_QoInPlVCK9rMAx9aybRmgFiVjHpJCHI9LrfdNM_s/edit#).
Remaining library work:
- implement authn/z
- implement `stayUp=false`, a.k.a. auto-stop the server once all connections are closed
/cc @kubernetes/sig-node
Automatic merge from submit-queue
Make the fake RESTClient usable by all the API groups, not just core.
cc @kubernetes/sig-cluster-federation @quinton-hoole @nikhiljindal
Automatic merge from submit-queue
Adding cascading deletion support to federated namespaces
Ref https://github.com/kubernetes/kubernetes/issues/33612
With this change, whenever a federated namespace is deleted with `DeleteOptions.OrphanDependents = false`, the federation namespace controller first deletes the corresponding namespaces from all underlying clusters before deleting the federated namespace.
cc @kubernetes/sig-cluster-federation @caesarxuchao
```release-note
Adding support for DeleteOptions.OrphanDependents for federated namespaces. Setting it to false while deleting a federated namespace also deletes the corresponding namespace from all registered clusters.
```
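For illustration, a minimal sketch of the client-side call (signatures follow today's client-go rather than the 2016 federation clientset, so treat the exact call shape as an assumption):
```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteFederatedNamespace sketches the call: with OrphanDependents set to
// false, the federation namespace controller removes the namespace from every
// registered cluster before the federated object itself is deleted.
func deleteFederatedNamespace(client kubernetes.Interface, name string) error {
	orphan := false
	return client.CoreV1().Namespaces().Delete(
		context.TODO(),
		name,
		metav1.DeleteOptions{OrphanDependents: &orphan},
	)
}
```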
Automatic merge from submit-queue
Add sysctls for dockershim
This PR adds sysctls support for dockershim. All sysctls e2e tests pass in my local setup.
Note that sysctls runtimeAdmit is not included in this PR; it is addressed in #32803.
cc/ @yujuhong @Random-Liu
Automatic merge from submit-queue
CRI: Enable remote dockershim by default
Enable remote dockershim by default.
Once the grpc integration is stabilized, I'll remove the temporary knob and configure the container runtime endpoint in all test suites.
@yujuhong @feiskyer
/cc @kubernetes/sig-node