Automatic merge from submit-queue (batch tested with PRs 47851, 47824, 47858, 46099)
Revert 44714 manually
#44714 broke backward compatibility for the old swagger spec that kubectl still uses. The decision on #47448 was to revert this change, but the change was not automatically revertible. Here I semi-manually removed all references to UnixUserID and UnixGroupID and updated the generated files accordingly.
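For context, a hedged illustration of what the revert means at the API-type level (excerpt only; field names taken from the pod security context, while the actual change touches many more generated files):

```go
// Excerpt for illustration only; the real change spans many files.
type PodSecurityContext struct {
	// Before the revert: RunAsUser *types.UnixUserID
	RunAsUser *int64
	// Before the revert: FSGroup *types.UnixGroupID
	FSGroup *int64
	// Before the revert: SupplementalGroups []types.UnixGroupID
	SupplementalGroups []int64
}
```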
Please wait for tests to pass before reviewing, as there may still be failing tests.
Fixes #47448
Adding a release note only because the original PR has one. If possible, we should remove both release notes, as they cancel each other out.
**Release note**: (removed by caesarxuchao)
UnixUserID and UnixGroupID are reverted back to int64 to keep backward compatibility.
Automatic merge from submit-queue (batch tested with PRs 34515, 47236, 46694, 47819, 47792)
Adding alpha feature gate to node statuses from local storage capacity isolation.
**What this PR does / why we need it**: The Capacity.storage node attribute should not be exposed, since it's part of an alpha feature. Added a feature gate.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47809
There should be a test for new statuses in the alpha feature. Will include in a different PR.
- Wrapped all node statuses from local storage capacity isolation in an alpha feature gate check; currently no storage statuses should be exposed (see the sketch after this list).
- Replaced all "storage" statuses with "storage.kubernetes.io/scratch"; "storage" should never be exposed as a status.
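A minimal sketch of the gating pattern, assuming the existing `LocalStorageCapacityIsolation` feature gate and the resource name quoted above; the real kubelet status-setting code is more involved:

```go
import (
	"k8s.io/apimachinery/pkg/api/resource"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/api/v1"
	"k8s.io/kubernetes/pkg/features"
)

// setScratchCapacity reports the scratch-storage capacity only when the
// alpha LocalStorageCapacityIsolation feature gate is enabled.
func setScratchCapacity(node *v1.Node, capacity resource.Quantity) {
	if !utilfeature.DefaultFeatureGate.Enabled(features.LocalStorageCapacityIsolation) {
		return
	}
	if node.Status.Capacity == nil {
		node.Status.Capacity = v1.ResourceList{}
	}
	node.Status.Capacity[v1.ResourceName("storage.kubernetes.io/scratch")] = capacity
}
```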
Automatic merge from submit-queue
Don't rerun certificate manager tests 1000 times.
**What this PR does / why we need it**:
Running every testcase 1000 times needlessly bloats the logs.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 47669, 40284, 47356, 47458, 47701)
add unit test cases for kubelet.util.sliceutils
What this PR does / why we need it:
I did not find any unit test cases for this file, so I added them.
Fixes #47001
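A minimal sketch of the kind of table-driven test this adds, assuming a helper like `StringInSlice(s string, list []string) bool` in sliceutils:

```go
package sliceutils

import "testing"

func TestStringInSlice(t *testing.T) {
	cases := []struct {
		s      string
		list   []string
		expect bool
	}{
		{"foo", []string{"foo", "bar"}, true},
		{"baz", []string{"foo", "bar"}, false},
		{"foo", nil, false},
	}
	for i, c := range cases {
		if got := StringInSlice(c.s, c.list); got != c.expect {
			t.Errorf("case %d: StringInSlice(%q, %v) = %v, want %v", i, c.s, c.list, got, c.expect)
		}
	}
}
```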
Automatic merge from submit-queue (batch tested with PRs 47626, 47674, 47683, 47290, 47688)
validate host paths on the kubelet for backsteps
**What this PR does / why we need it**:
This PR adds validation on the kubelet to ensure the host path does not contain backsteps that could allow the volume to escape the PSP's allowed host paths. Currently this validation is done in the API server; however, that does not account for an OS mismatch between the kubelet and the API server.
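A minimal sketch of the kind of backstep check described (hypothetical helper name; the real kubelet validation differs in detail):

```go
import (
	"fmt"
	"path/filepath"
	"strings"
)

// validatePathNoBacksteps rejects any path containing a ".." element,
// which could let a hostPath volume or volumeMount subPath escape the
// directories allowed by the PSP.
func validatePathNoBacksteps(path string) error {
	for _, item := range strings.Split(filepath.ToSlash(path), "/") {
		if item == ".." {
			return fmt.Errorf("path %q must not contain backsteps ('..')", path)
		}
	}
	return nil
}
```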
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47107
**Special notes for your reviewer**:
cc @liggitt
**Release note**:
```release-note
Paths containing backsteps (for example, "../bar") are no longer allowed in hostPath volume paths, or in volumeMount subpaths
```
Automatic merge from submit-queue (batch tested with PRs 47523, 47438, 47550, 47450, 47612)
append KUBE-HOSTPORTS to system chains instead of prepend
Bug fix for conflicting iptables rules between hostport and kube-proxy
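A hedged sketch of the change using the `pkg/util/iptables` helper (variable and chain names are illustrative; the real hostport syncer differs in detail):

```go
import utiliptables "k8s.io/kubernetes/pkg/util/iptables"

// ensureHostportsJump appends the jump to the KUBE-HOSTPORTS chain
// (iptables -A) instead of prepending it (iptables -I), so it no longer
// shadows the rules kube-proxy installs in the same system chains.
func ensureHostportsJump(ipt utiliptables.Interface, hostportsChain utiliptables.Chain) error {
	args := []string{
		"-m", "comment", "--comment", "kube hostport portals",
		"-m", "addrtype", "--dst-type", "LOCAL",
		"-j", string(hostportsChain),
	}
	for _, chain := range []utiliptables.Chain{utiliptables.ChainPrerouting, utiliptables.ChainOutput} {
		if _, err := ipt.EnsureRule(utiliptables.Append, utiliptables.TableNAT, chain, args...); err != nil {
			return err
		}
	}
	return nil
}
```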
Automatic merge from submit-queue
Strip container id from events
**What this PR does / why we need it**:
Reduces spam events from the kubelet in bad pod scenarios.
**Which issue this PR fixes**:
relates to https://github.com/kubernetes/kubernetes/issues/47366
**Special notes for your reviewer**:
Pods in permanent failure states created unique events for each new container ID, producing event spam.
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 46441, 43987, 46921, 46823, 47276)
kubelet/network: report but tolerate errors returned from GetNetNS() v2
Runtimes should never return "" and nil errors, since network plugin
drivers need to treat netns differently in different cases. So return
errors when we can't get the netns, and fix up the plugins to do the
right thing.
Namely, we don't need a NetNS on pod network teardown. We do need
a netns for pod Status checks and for network setup.
V2: don't return errors from getIP(), since they would block pod status; just log them instead. Even so, this still fixes the original problem by ensuring we don't log errors when the network isn't ready.
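A minimal sketch of the contract described above (names are illustrative, not the actual runtime code): report an error rather than returning `""` with a nil error when the netns cannot be determined:

```go
import "fmt"

// netnsPath returns the network namespace path for a container, or an
// error when there is no namespace (e.g. the container is not running),
// instead of the ambiguous ("", nil) result that plugins cannot interpret.
func netnsPath(pid int) (string, error) {
	if pid == 0 {
		return "", fmt.Errorf("cannot find network namespace for a container that is not running")
	}
	return fmt.Sprintf("/proc/%d/ns/net", pid), nil
}
```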
@freehan @yujuhong
Fixes: https://github.com/kubernetes/kubernetes/issues/42735
Fixes: https://github.com/kubernetes/kubernetes/issues/44307
Automatic merge from submit-queue (batch tested with PRs 47000, 47188, 47094, 47323, 47124)
fix sync loop health check
This PR logs an error when the kubelet's sync loop falls behind instead of failing the sync loop healthz check.
The reason is that the kubelet cannot run the sync loop, and therefore cannot update its sync loop time, when there is a runtime error such as docker hanging.
Under the current implementation, any runtime error therefore also causes the kubelet to return a non-200 status code from the healthz endpoint. This contradicts #37865, which prevents the kubelet from being killed when docker hangs.
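A minimal sketch of the idea (names are hypothetical, not the actual kubelet code): log when the sync loop falls behind instead of failing the health check:

```go
import (
	"time"

	"github.com/golang/glog"
)

// syncLoopHealthCheck logs when the kubelet's sync loop has fallen
// behind, but always reports healthy, so a hung runtime (e.g. docker)
// does not cause the kubelet to be killed via /healthz.
func syncLoopHealthCheck(lastLoopTime time.Time, resyncInterval time.Duration) error {
	if !lastLoopTime.IsZero() && time.Since(lastLoopTime) > 2*resyncInterval {
		glog.Errorf("sync loop has fallen behind; last completed %v ago", time.Since(lastLoopTime))
	}
	return nil
}
```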
**Release note**:
```release-note
fix sync loop health check by separating runtime errors
```
/cc @yujuhong @Random-Liu @dchen1107
GenericPLEG's 1s relist() loop races against pod network setup. It
may be called after the infra container has started but before
network setup is done, since PLEG and the runtime's SyncPod() run
in different goroutines.
Track network setup status and don't bother trying to read the pod's
IP address if networking is not yet ready.
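A minimal sketch of that tracking (names are illustrative, not the actual kuberuntime code):

```go
import "sync"

// networkReady remembers, per sandbox, whether network setup has
// completed so the status path can skip the pod IP lookup until then.
type networkReady struct {
	mu    sync.Mutex
	ready map[string]bool // sandbox ID -> network setup complete
}

func (n *networkReady) set(sandboxID string, ok bool) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.ready[sandboxID] = ok
}

// podIP returns "" without calling the network plugin (and logging a
// spurious error) while the sandbox's network is not yet ready.
func (n *networkReady) podIP(sandboxID string, lookup func(string) (string, error)) string {
	n.mu.Lock()
	ready := n.ready[sandboxID]
	n.mu.Unlock()
	if !ready {
		return ""
	}
	ip, err := lookup(sandboxID)
	if err != nil {
		return ""
	}
	return ip
}
```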
See also: https://bugzilla.redhat.com/show_bug.cgi?id=1434950
Mar 22 12:18:17 ip-172-31-43-89 atomic-openshift-node: E0322
12:18:17.651013 25624 docker_manager.go:378] NetworkPlugin
cni failed on the status hook for pod 'pausepods22' - Unexpected
command output Device "eth0" does not exist.
This reverts commit fee4c9a7d9.
This is not the correct fix for the problem, and it causes other problems, like a continuous stream of:
docker_sandbox.go:234] NetworkPlugin cni failed on the status hook for pod
"someotherdc-1-deploy_default": Unexpected command output nsenter: cannot
open : No such file or directory with error: exit status 1
This happens because GetNetNS() is returning an empty network namespace. That is not helpful, nor should it really be allowed; that's what the error return from GetNetNS() is for.
Automatic merge from submit-queue (batch tested with PRs 45877, 46846, 46630, 46087, 47003)
gpusInUse info error when kubelet restarts
**What this PR does / why we need it**:
In my testing, I found two errors in nvidia_gpu_manager.go.
1. The number of activePods in gpusInUse() equals 0 when the kubelet restarts. It seems the Start() method is called before pod recovery, which causes this error. So I decided not to call gpusInUse() in the Start() function and instead let it happen when a new pod needs to be created.
2. container.ContainerID (line 242) returns the ID in the format "docker://<container_id>", which makes the client fail to inspect the container by ID. We have to strip the "docker://" prefix (see the sketch after this list).
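A minimal sketch of the fix for the second point (illustrative helper; the real code may use a different parsing utility):

```go
import "strings"

// rawDockerID strips the "docker://" scheme from a Kubernetes container
// ID so the docker client can inspect the container by its raw ID.
func rawDockerID(containerID string) string {
	return strings.TrimPrefix(containerID, "docker://")
}
```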
**Special notes for your reviewer**:
**Release note**:
```
Avoid assigning the same GPU to multiple containers.
```
Automatic merge from submit-queue (batch tested with PRs 45877, 46846, 46630, 46087, 47003)
func parseEndpointWithFallbackProtocol should check if protocol of endpoint is empty
**What this PR does / why we need it**:
func parseEndpointWithFallbackProtocol should check if protocol of endpoint is empty
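A hedged sketch of the behavior being checked (the helper names and exact parsing are simplified stand-ins, not the real dockershim util):

```go
import "net/url"

// parseEndpoint is a simplified stand-in that extracts the scheme and
// the remainder of an endpoint URL.
func parseEndpoint(endpoint string) (protocol, addr string, err error) {
	u, err := url.Parse(endpoint)
	if err != nil {
		return "", "", err
	}
	return u.Scheme, u.Host + u.Path, nil
}

// parseEndpointWithFallbackProtocol retries with the fallback protocol
// when the endpoint has no scheme at all (e.g. "/var/run/foo.sock"),
// which is the empty-protocol case this PR adds a check for.
func parseEndpointWithFallbackProtocol(endpoint, fallbackProtocol string) (protocol, addr string, err error) {
	protocol, addr, err = parseEndpoint(endpoint)
	if err == nil && protocol == "" {
		protocol, addr, err = parseEndpoint(fallbackProtocol + "://" + endpoint)
	}
	return
}
```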
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #45927
NONE
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 43005, 46660, 46385, 46991, 47103)
Consolidate sysctl commands for kubelet
**What this PR does / why we need it**:
These settings are important enough to be applied by the kubelet itself.
By default, Ubuntu 14.04 and Debian Jessie set them to 200 and 20000. Without raising them, nodes are limited in the number of containers that they can start.
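A hedged sketch of the idea, assuming the settings in question are the kernel keyring quotas (`kernel.keys.root_maxkeys` / `kernel.keys.root_maxbytes`, whose Debian/Ubuntu defaults of 200 and 20000 match the numbers above); the target values are illustrative:

```go
import (
	"io/ioutil"
	"path/filepath"
	"strconv"
	"strings"
)

// setSysctl writes a value under /proc/sys for a dotted sysctl name.
func setSysctl(name string, value int) error {
	path := filepath.Join("/proc/sys", strings.Replace(name, ".", "/", -1))
	return ioutil.WriteFile(path, []byte(strconv.Itoa(value)), 0640)
}

// raiseKeyQuotas bumps the root keyring quotas at kubelet startup so the
// distro defaults do not cap how many containers the node can start.
func raiseKeyQuotas() error {
	if err := setSysctl("kernel.keys.root_maxkeys", 1000000); err != nil {
		return err
	}
	return setSysctl("kernel.keys.root_maxbytes", 25000000)
}
```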
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #26005
**Special notes for your reviewer**:
I had a difficult time writing tests for this. It is trivial to create a fake sysctl for testing, but the Kubelet does not have any tests for the prior settings.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46775, 47009)
kuberuntime: check the value of RunAsNonRoot when verifying
The verification function is fixed to check the value of RunAsNonRoot,
not just the existence of it. Also adds unit tests to verify the correct
behavior.
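A hedged sketch of the corrected check (plain parameters instead of the real API types; the actual kuberuntime helper takes the full pod and image objects):

```go
import "fmt"

// verifyRunAsNonRoot enforces non-root only when RunAsNonRoot is both
// set and true, not merely non-nil, which was the bug being fixed.
func verifyRunAsNonRoot(runAsNonRoot *bool, runAsUser *int64, imageUser string) error {
	if runAsNonRoot == nil || !*runAsNonRoot {
		// Unset or explicitly false: running as root is allowed.
		return nil
	}
	if runAsUser != nil && *runAsUser == 0 {
		return fmt.Errorf("container's runAsUser is root (0), which conflicts with runAsNonRoot=true")
	}
	if imageUser == "0" || imageUser == "root" {
		return fmt.Errorf("container image runs as root, which conflicts with runAsNonRoot=true")
	}
	return nil
}
```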
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes#46996
**Release note**:
```release-note
Fix the bug where a container cannot run as root when SecurityContext.RunAsNonRoot is false.
```
This PR adds two features:
1. Support for isolating emptyDir volume use: if the user sets a size limit for an emptyDir volume, the kubelet's eviction manager monitors its usage and evicts the pod if the usage exceeds the limit (see the sketch below).
2. Support for isolating local storage used by the container overlay: if a container's overlay usage exceeds the limit defined in the container spec, the eviction manager evicts the pod.
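A minimal sketch of the emptyDir part of the check (hypothetical helper; the real eviction-manager logic is more involved):

```go
import "k8s.io/apimachinery/pkg/api/resource"

// exceedsEmptyDirLimit reports whether measured emptyDir usage exceeds
// the size limit declared in the volume spec; a zero limit means no
// limit was set.
func exceedsEmptyDirLimit(used, limit resource.Quantity) bool {
	if limit.IsZero() {
		return false
	}
	return used.Cmp(limit) > 0
}
```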
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
Improve code coverage for pkg/kubelet/images/image_gc_manager
**What this PR does / why we need it**:
#39559 #40780
Code coverage increases from 74.5% to 77.4%.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Delete all dead containers and sandboxes when under disk pressure.
This PR modifies the eviction manager to add dead container and sandbox garbage collection as a resource reclaim function for disk. It also modifies the container GC logic to allow removal of containers from pods that are terminated but not yet deleted.
It still does not delete containers younger than minGcAge. This should prevent nodes from entering a permanently bad state if the entire disk is occupied by pods that are terminated (in the Failed or Succeeded state) but not deleted.
There are two improvements we should consider making in the future:
- Track the disk space and inodes reclaimed by deleting containers. We currently do not track this, and it prevents us from determining if deleting containers resolves disk pressure. So we may still evict a pod even if we are able to free disk space by deleting dead containers.
- Once we can track disk space and inodes reclaimed, we should consider only deleting the containers we need to in order to relieve disk pressure. This should help avoid a scenario where we try and delete a massive number of containers all at once, and overwhelm the runtime.
/assign @vishh
cc @derekwaynecarr
```release-note
Disk Pressure triggers the deletion of terminated containers on the node.
```
Automatic merge from submit-queue
reset resultRun on pod restart
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455056
There is currently an issue where, if the pod is restarted due to liveness probe failures exceeding failureThreshold, the failure count is not reset on the probe worker. When the pod restarts, if the liveness probe fails even once, the pod is restarted again, not honoring failureThreshold on the restart.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: busybox
spec:
containers:
- name: busybox
image: busybox
command:
- sleep
- "3600"
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
timeoutSeconds: 1
periodSeconds: 3
successThreshold: 1
failureThreshold: 5
terminationGracePeriodSeconds: 0
```
Before this PR:
```
$ kubectl create -f busybox-probe-fail.yaml
pod "busybox" created
$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 4s
busybox 1/1 Running 1 24s
busybox 1/1 Running 2 33s
busybox 0/1 CrashLoopBackOff 2 39s
```
After this PR:
```
$ kubectl create -f busybox-probe-fail.yaml
$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
busybox 0/1 ContainerCreating 0 2s
busybox 1/1 Running 0 4s
busybox 1/1 Running 1 27s
busybox 1/1 Running 2 45s
```
```release-note
Fix kubelet to reset the liveness probe failure count across pod restart boundaries
```
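A minimal sketch of the fix (field names are hypothetical, modeled on the prober worker described above): when the worker sees a new container ID, the consecutive-result counter is reset so failureThreshold applies to the restarted container:

```go
// probeWorker holds the per-container probe state the kubelet keeps.
type probeWorker struct {
	containerID string // ID of the container currently being probed
	resultRun   int    // count of consecutive identical probe results
}

// onContainerChange forgets results from the previous container instance
// when the pod restarts it, so failureThreshold starts over.
func (w *probeWorker) onContainerChange(newID string) {
	if w.containerID != newID {
		w.containerID = newID
		w.resultRun = 0
	}
}
```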
Restarts now happen at even intervals.
@derekwaynecarr
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
Do not log the content of pod manifest if parsing fails.
**What this PR does / why we need it**:
- ~~only accepts text/plain config file~~
- ~~not log config file content when it's invalid~~
Do not log the content of pod manifest if parsing fails.
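A minimal sketch of the behavior (hypothetical names): report the parse failure and its source without echoing the manifest bytes, which may contain sensitive data:

```go
import "github.com/golang/glog"

// decodeManifest parses a pod manifest and, on failure, logs only the
// error and the source, never the raw manifest content.
func decodeManifest(source string, data []byte, decode func([]byte) (interface{}, error)) (interface{}, error) {
	obj, err := decode(data)
	if err != nil {
		glog.Errorf("failed to parse pod manifest from %q: %v", source, err)
		return nil, err
	}
	return obj, nil
}
```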
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46493
**Special notes for your reviewer**:
/cc @yujuhong
@sig-node-reviewers
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
Fix inconsistency in finding cni binaries
Fixes #46476
Signed-off-by: Abhinav Dahiya <abhinav.dahiya@coreos.com>
**What this PR does / why we need it**:
This fixes the inconsistency in finding the appropriate cni binaries.
Currently the `lo` cniNetwork follows vendorCniDir > binDir, whereas the default for all others is binDir > vendorCniDir. This PR makes vendorCniDir > binDir the default behavior.
**Why we need it**:
Hyperkube currently ships CNI binaries in /opt/cni/bin.
To use the latest version of Calico, you need to override the kubelet's /opt/cni/bin from the host, which means all other CNI plugins (flannel, loopback, etc.) have to be mounted from the host too. Keeping the vendor dir at higher priority allows easy installation of newer plugin versions.
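A minimal sketch of the unified lookup order (hypothetical helper; the real CNI plugin probing code differs in detail):

```go
import (
	"os"
	"path/filepath"
)

// findCNIPlugin looks for a plugin binary in the vendor directory first,
// then falls back to the common bin directory, matching the
// vendorCniDir > binDir order this PR makes the default.
func findCNIPlugin(name, vendorCniDir, binDir string) string {
	for _, dir := range []string{vendorCniDir, binDir} {
		candidate := filepath.Join(dir, name)
		if _, err := os.Stat(candidate); err == nil {
			return candidate
		}
	}
	return ""
}
```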
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)
Improving test coverage for kubelet/kuberuntime.
**What this PR does / why we need it**:
Increases test coverage for kubelet/kuberuntime
https://github.com/kubernetes/kubernetes/issues/46123
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/46123
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```