Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Make CPU manager release CPUs when Pod enters completed phase.
**What this PR does / why we need it**: When the CPU manager is enabled, this PR releases allocated CPUs when a container is no longer running and cannot be restarted.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52351
**Special notes for your reviewer**:
This bug is only reproduced for pods with `restartPolicy` = `Never` or `OnFailure`. The following output is from a 4-CPU node. The bug can be reproduced as long as >= half the cores are requested.
pod1.yaml:
```
apiVersion: v1
kind: Pod
metadata:
  name: test-pod1
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
pod2.yaml:
```
apiVersion: v1
kind: Pod
metadata:
  name: test-pod2
spec:
  containers:
  - image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "sleep 5"]
    name: test-container1
    resources:
      requests:
        cpu: 2
        memory: 100Mi
      limits:
        cpu: 2
        memory: 100Mi
  restartPolicy: "Never"
```
Run a local Kubernetes cluster with CPU manager enabled.
```sh
KUBELET_FLAGS='--feature-gates=CPUManager=true --cpu-manager-policy=static --cpu-manager-reconcile-period=1s --kube-reserved=cpu=500m' ./hack/local-up-cluster.sh
```
_Before:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete, then wait another 90 seconds (enough time for GC to kick in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME        READY     STATUS                                         RESTARTS   AGE
test-pod1   0/1       Completed                                      0          1m
test-pod2   0/1       not enough cpus available to satisfy request   0          9s
```
_After:_
Create `test-pod1` using pod1.yaml.
```
./cluster/kubectl.sh create -f pod1.yaml
```
Wait for the pod to complete, then wait another 90 seconds (enough time for GC to kick in).
Create `test-pod2` using pod2.yaml.
```
./cluster/kubectl.sh create -f pod2.yaml
```
Get all pods in the cluster.
```
./cluster/kubectl.sh get pods -a
NAME        READY     STATUS      RESTARTS   AGE
test-pod1   0/1       Completed   0          1m
test-pod2   0/1       Completed   0          9s
```
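For reference, a minimal sketch of the decision this PR adds; the types are illustrative stand-ins, not the actual kubelet ones:
```go
package main

import "fmt"

// containerInfo is a hypothetical stand-in for the kubelet's view of a
// container; restartPolicy mirrors the Pod API values.
type containerInfo struct {
	terminated    bool
	exitCode      int
	restartPolicy string // "Always", "OnFailure", or "Never"
}

// shouldReleaseCPUs reports whether the CPU manager may reclaim the
// container's exclusively assigned CPUs.
func shouldReleaseCPUs(c containerInfo) bool {
	if !c.terminated {
		return false // still running; its CPUs stay pinned
	}
	switch c.restartPolicy {
	case "Never":
		return true // the container can never run again
	case "OnFailure":
		return c.exitCode == 0 // only a successful exit is final
	default: // "Always": the kubelet will restart it, keep the CPUs
		return false
	}
}

func main() {
	fmt.Println(shouldReleaseCPUs(containerInfo{terminated: true, restartPolicy: "Never"})) // true
}
```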
Automatic merge from submit-queue (batch tested with PRs 52442, 52247, 46542, 52363, 51781)
Ignore pods for quota marked for deletion whose node is unreachable
**What this PR does / why we need it**:
Traditionally, we charge to quota all pods that are in a non-terminal phase. We have a user report noting a behavior change in kube 1.5: the node controller no longer force-deletes pods whose nodes have been lost. Instead, the pod is marked for deletion, and its reason is updated to state that the node is unreachable. The user expected the quota to be released. If the user was at their quota limit, the current behavior could prevent their application from creating a new replica. As a result, this PR ignores pods marked for deletion that have exceeded their grace period.
**Which issue this PR fixes**
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455743
fixes https://github.com/kubernetes/kubernetes/issues/52436
**Release note**:
```release-note
Ignore pods marked for deletion that exceed their grace period in ResourceQuota
```
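A hedged sketch of the rule described above (pod types from `k8s.io/api`; the helper name is illustrative):
```go
package quota

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// chargesQuota reports whether a pod should still be charged to
// ResourceQuota at time now.
func chargesQuota(pod *v1.Pod, now time.Time) bool {
	// Terminal pods never count against quota.
	if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
		return false
	}
	// A pod marked for deletion whose grace period has elapsed (for
	// example, one stranded on an unreachable node) is ignored too.
	if pod.DeletionTimestamp != nil && pod.DeletionGracePeriodSeconds != nil {
		deadline := pod.DeletionTimestamp.Add(
			time.Duration(*pod.DeletionGracePeriodSeconds) * time.Second)
		if now.After(deadline) {
			return false
		}
	}
	return true
}
```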
Automatic merge from submit-queue (batch tested with PRs 52376, 52439, 52382, 52358, 52372)
Work around go-junit-report bug for TestApps
**What this PR does / why we need it**: Fix output from pkg/kubectl/apps/TestApps unit test
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51253
**Special notes for your reviewer**: Literally copy-pasta of the approach taken in #45320. Maybe a sign that this should be extracted into something shared. I'm just trying to see if we can make https://k8s-testgrid.appspot.com/kubernetes-presubmits and https://k8s-testgrid.appspot.com/release-master-blocking a little more green for now.
```release-note
NONE
```
Automatic merge from submit-queue
Fix swallowed errors in various volume packages
**What this PR does / why we need it**: Fixes swallowed errors in various volume packages.
**Release note**:
```release-note
NONE
```
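The typical shape of these fixes, sketched at a hypothetical call site (`mount.Interface.IsLikelyNotMountPoint` is a real method in the kubelet's mount utilities; the surrounding function is illustrative):
```go
package volume

import (
	"fmt"

	"k8s.io/kubernetes/pkg/util/mount"
)

// ensureNotMounted shows the before/after shape of a swallowed-error fix.
func ensureNotMounted(mounter mount.Interface, dir string) error {
	// Before the fix the error was discarded:
	//   notMnt, _ := mounter.IsLikelyNotMountPoint(dir)
	// After the fix it is checked and propagated with context:
	notMnt, err := mounter.IsLikelyNotMountPoint(dir)
	if err != nil {
		return fmt.Errorf("checking mount point %s: %v", dir, err)
	}
	if !notMnt {
		return fmt.Errorf("%s is still mounted", dir)
	}
	return nil
}
```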
Automatic merge from submit-queue (batch tested with PRs 51601, 52153, 52364, 52362, 52342)
Minor fixes to validation test
Some test cases confuse the new object with the old object. This PR fixes that, and also adds a test verifying that deletionTimestamp cannot be added (via the REST endpoints).
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
'*' is valid for allowed seccomp profiles
**What this PR does / why we need it**:
This should be valid on a PodSecurityPolicy, but is currently rejected:
```
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
```
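For context, a sketch of the validation rule after this fix; the non-wildcard names here are an assumption about the accepted profile set:
```go
package seccomp

import "strings"

// isAllowedProfileName reports whether a name may appear in the
// allowedProfileNames annotation; "*" permits any profile.
func isAllowedProfileName(name string) bool {
	switch {
	case name == "*", name == "unconfined", name == "docker/default":
		return true
	default:
		return strings.HasPrefix(name, "localhost/")
	}
}
```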
**Which issue this PR fixes**: fixes #52300
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
dockershim: check if f.Sync() returns an error and surface it
```release-note
dockershim: check the error when syncing the checkpoint.
```
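The pattern enforced, as a minimal sketch; the names are illustrative rather than the dockershim's own:
```go
package dockershim

import (
	"fmt"
	"os"
)

// writeCheckpoint surfaces the fsync error instead of dropping it, so a
// failed flush is not mistaken for a durable write.
func writeCheckpoint(f *os.File, data []byte) error {
	if _, err := f.Write(data); err != nil {
		return err
	}
	if err := f.Sync(); err != nil { // previously this error was ignored
		return fmt.Errorf("syncing checkpoint: %v", err)
	}
	return nil
}
```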
Automatic merge from submit-queue (batch tested with PRs 52339, 52343, 52125, 52360, 52301)
Prevent enabling alpha APIs by default
related to #47691
This is a follow-up to #51839, adding a check that we do not enable alpha APIs by default.
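A hedged sketch of what such a check can look like; `defaultEnabledVersions` is a hypothetical helper, not the real config source:
```go
package master

import (
	"strings"
	"testing"
)

// defaultEnabledVersions is a stand-in for reading the default resource
// config; the real test iterates the master's default API config.
func defaultEnabledVersions() []string {
	return []string{"apps/v1beta1", "batch/v1"}
}

// TestNoAlphaAPIsEnabledByDefault shows the shape of the check.
func TestNoAlphaAPIsEnabledByDefault(t *testing.T) {
	for _, gv := range defaultEnabledVersions() {
		if strings.Contains(gv, "alpha") {
			t.Errorf("alpha API %s must not be enabled by default", gv)
		}
	}
}
```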
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
Log at higher verbosity levels some common SyncPod errors
This log message accounted for 90% of all glog.Errorf-level statements reported on a production cluster, hiding other, more impactful errors. We already log it in start container, but for extra caution we continue to log it at v(3) here (the downside of not logging a start-container error is worse than some log spam at higher verbosity levels).
HandleError() is intended only for unknown and unexpected errors.
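A sketch of the demotion described above (hypothetical call site; `glog` was the logger in use):
```go
package kubelet

import "github.com/golang/glog"

// logSyncError demotes the common, already-handled SyncPod failure to
// verbosity 3 so it no longer drowns out genuinely unexpected errors.
func logSyncError(podName string, err error) {
	glog.V(3).Infof("pod %q failed to sync: %v", podName, err)
}
```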
```release-note
NONE
```
@derekwaynecarr @sjenning
Automatic merge from submit-queue (batch tested with PRs 48226, 52046, 52231, 52344, 52352)
[BugFix] Soft Eviction timer works correctly
fixes #51516
thresholdsMet should not exclude previously met thresholds when we do not have new stats for a threshold.
/assign @vishh @derekwaynecarr
cc @kubernetes/sig-node-bugs
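A hedged sketch of the corrected merge, with simplified stand-in types:
```go
package eviction

// threshold is a simplified stand-in for the eviction manager's type.
type threshold struct{ signal string }

// mergeThresholds carries forward previously met thresholds whose
// signals produced no fresh observation this round, so the soft-eviction
// grace-period timer is not reset by a missing stats sample.
func mergeThresholds(met, lastMet []threshold, freshSignals map[string]bool) []threshold {
	result := append([]threshold{}, met...)
	for _, t := range lastMet {
		if !freshSignals[t.signal] {
			result = append(result, t) // no new stats: assume still met
		}
	}
	return result
}
```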
Automatic merge from submit-queue
fix kubectl set env --list description
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue
Azuredisk mount on windows node
**What this PR does / why we need it**:
This PR enables azure disk on Windows nodes: customers can create a pod mounted with an azure disk on a Windows node.
A few pending items remain:
1) The fstype is currently forced to NTFS; this will change if such a requirement arises.
2) The GetDeviceNameFromMount function is not implemented (empty). On Linux we can read all mount points from /proc/mounts, but Windows has no equivalent, and I am still investigating. The empty function causes some warning logging, but it does not affect the main logic for now.
**Special notes for your reviewer**:
1. This PR depends on https://github.com/kubernetes/kubernetes/pull/51240, which allows Windows mount paths in config validation.
2. There is a bug in docker on Windows (https://github.com/moby/moby/issues/34729): the ContainerPath can currently only be a drive letter (e.g. D:); a directory path will fail.
An example pod with such a mount path:
```
kind: Pod
apiVersion: v1
metadata:
  name: pod-uses-shared-hdd-5g
  labels:
    name: storage
spec:
  containers:
  - image: microsoft/iis
    name: az-c-01
    volumeMounts:
    - name: blobdisk01
      mountPath: 'F:'
  nodeSelector:
    beta.kubernetes.io/os: windows
  volumes:
  - name: blobdisk01
    persistentVolumeClaim:
      claimName: pv-dd-shared-hdd-5
```
**Release note**:
```release-note
```
Automatic merge from submit-queue
Update `set image` description to remove Job from the resources that can update container images
**What this PR does / why we need it**:
This addresses the comment raised in https://github.com/kubernetes/kubernetes/issues/48388#issuecomment-322500960 by @harrissAvalon
**Special notes for your reviewer**:
**Release note**:
```release-note
none
```
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Fix pagesize mount option name
**What this PR does / why we need it**:
Fixes #52337.
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Glusterfs expands in units of GB not GiB
When expanding glusterfs volumes, we should use GB units, not GiB. More information: https://github.com/heketi/heketi/wiki/API
Fixes https://github.com/kubernetes/kubernetes/issues/52298
```release-note
Fixes Glusterfs storage allocation units
```
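The arithmetic behind the fix, sketched: Kubernetes capacities are byte counts (typically requested in binary GiB), while heketi sizes volumes in decimal GB, so byte counts must be rounded up to whole GB to avoid provisioning less than requested.
```go
package glusterfs

// bytesToGB rounds a capacity in bytes up to whole decimal gigabytes,
// the unit the heketi API expects.
func bytesToGB(sizeBytes int64) int64 {
	const gb = 1000 * 1000 * 1000
	return (sizeBytes + gb - 1) / gb
}
```
A 1GiB request (1073741824 bytes) thus becomes 2 GB rather than 1, so the provisioned volume is never smaller than asked.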
Automatic merge from submit-queue (batch tested with PRs 51041, 52297, 52296, 52335, 52338)
Use cAdvisor constant for crio imagefs
**What this PR does / why we need it**:
Code hygiene: use a constant from cAdvisor.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52007, 52196, 52169, 52263, 52291)
Remove links to GCE/AWS cloud providers from PersistentVolumeController
**What this PR does / why we need it**:
We should be able to build a cloud-controller-manager without having to
pull in code specific to the GCE and AWS clouds. Note that this is a tactical
fix for now; we should allow a PVLabeler to be passed into the
PersistentVolumeController, and maybe come up with better interfaces. Since
it is too late to do all that for 1.8, we just move cloud-specific code
to where it belongs, and we check for the PVLabeler method and use it where
needed.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #51629
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
fsync config checkpoint files after writing
@yujuhong brought up that it's possible for a hard reboot to result in empty checkpoint files, if they haven't been synced to disk yet. This PR ensures that Kubelet configuration checkpoints are synced after writing to avoid this issue.
fixes #52222
**Release note**:
```release-note
NONE
```
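A minimal sketch of a crash-safe checkpoint write, assuming a plain file API; the temp-file-plus-rename step is a common companion to the `Sync` call this PR adds and is shown for illustration, not taken from the PR:
```go
package checkpoint

import "os"

// write persists data so that a hard reboot cannot leave an empty
// checkpoint behind: write a temp file, fsync it, then rename it over
// the old file.
func write(path string, data []byte) error {
	tmp := path + ".tmp"
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return err
	}
	if err := f.Sync(); err != nil { // flush to disk before rename
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}
```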
Automatic merge from submit-queue (batch tested with PRs 52264, 51870)
Use credentials from providers for docker sandbox image
**What this PR does / why we need it**:
Sandbox image lookup uses creds from docker config only; other credential providers are ignored. This is a regression introduced in dockershim.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51293
**Special notes for your reviewer**:
Should also cherry-pick this to release-1.6 and release-1.7.
**Release note**:
```release-note
Fix credentials providers for docker sandbox image.
```
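A hedged sketch of the fix: resolve the sandbox image's credentials through the keyring that aggregates every configured provider rather than reading only the docker config file. `credentialprovider.NewDockerKeyring` and `Lookup` exist in the kubelet tree of this era; the wrapper below is illustrative.
```go
package dockershim

import "k8s.io/kubernetes/pkg/credentialprovider"

// sandboxPullAuth returns candidate pull credentials for the sandbox
// image, drawn from every registered credential provider, not just
// ~/.docker/config.json.
func sandboxPullAuth(image string) ([]credentialprovider.LazyAuthConfiguration, bool) {
	keyring := credentialprovider.NewDockerKeyring()
	return keyring.Lookup(image)
}
```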
add initial work for mount azure file on windows
fix review comments
full implementation for attach azure file on windows node
working azure file mount
remove useless functions
add a workable implementation about mounting azure file on windows node
fix review comments and make pod creation succeed even if the azure file mount failed
fix according to review comments
add mount_windows_test
add implementation for IsLikelyNotMountPoint func
remove mount_windows_test.go temporarily
add back unit test for mount_windows.go
add normalizeWindowsPath func
fix normalizeWindowsPath func issue
implement azure disk on windows
update bazel BUILD
revert validation.go change as it's another PR
fix merge issue and compiling issue
fix windows compiling issue
fix according to review comments
fix according to review comments
fix cross-build failure
fix according to review comments
fix test build failure temporarily
fix darwin build failure
fix azure windows test failure
add empty implementation of MakeRShared on windows
fix gofmt errors
As per the glusterfs documentation, it can't create
volumes in GiB; all sizes must be specified in GB.
This code was slightly buggy because we were creating
volumes smaller than the user asked for.
Automatic merge from submit-queue
newline to separate unimplemented TaintEffectNoScheduleNoAdmit
**What this PR does / why we need it**:
The unimplemented `TaintEffectNoScheduleNoAdmit` should not be treated as part of the comment for `TaintEffectNoExecute`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
xref #49530
**Special notes for your reviewer**:
/assign @k82cn
**Release note**:
```release-note
None
```
Automatic merge from submit-queue
Fix splitProviderID for Azure
**What this PR does / why we need it**:
#46940 added `splitProviderID` for Azure to get the node name from the provider ID, but it captures the resource ID instead of the node name.
Functions such as NodeAddresses accept node names:
84d9778f22/pkg/cloudprovider/providers/azure/azure_instances.go (L32)
With the current implementation it takes in a resource ID, which results in the following error:
```
E0830 04:15:09.877143 10427 azure_instances.go:63] error: az.NodeAddresses, az.getIPForMachine(/subscriptions/{id}/resourceGroups/{id}/providers/Microsoft.Compute/virtualMachines/k8s-master-0), err=instance not found
```
This fix makes it return node names instead.
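A sketch of the corrected parsing; the exact regular expression in the Azure provider may differ:
```go
package azure

import (
	"fmt"
	"regexp"
)

// azureNodeNameRE captures only the VM name at the end of an Azure
// provider ID such as
// /subscriptions/{id}/resourceGroups/{id}/providers/Microsoft.Compute/virtualMachines/k8s-master-0
var azureNodeNameRE = regexp.MustCompile(`/virtualMachines/(.+)$`)

// splitProviderID returns the node name embedded in providerID instead
// of the full resource ID.
func splitProviderID(providerID string) (string, error) {
	matches := azureNodeNameRE.FindStringSubmatch(providerID)
	if len(matches) != 2 {
		return "", fmt.Errorf("error splitting providerID %q", providerID)
	}
	return matches[1], nil
}
```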
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
`NONE`
@brendandburns @realfake @wlan0
Automatic merge from submit-queue (batch tested with PRs 52047, 52063, 51528)
implementation of GetZoneByProviderID and GetZoneByNodeName for azure
This is part of the #50926 effort
cc @luxas
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 52047, 52063, 51528)
Improve dynamic kubelet config e2e node test and fix bugs
Rather than just changing the config once to see if dynamic kubelet
config at-least-sort-of-works, this extends the test to check that the
Kubelet reports the expected Node condition and the expected configuration
values after several possible state transitions.
Additionally, this adds a stress test that changes the configuration 100
times. It is possible for resource leaks across Kubelet restarts to
eventually prevent the Kubelet from restarting. For example, this test
revealed that cAdvisor's leaking journalctl processes (see:
https://github.com/google/cadvisor/issues/1725) could break dynamic
kubelet config. This test will help reveal these problems earlier.
This commit also makes better use of const strings and fixes a few bugs
that the new testing turned up.
Related issue: #50217
I had been sitting on this until the cAdvisor fix merged in #51751, as these tests fail without that fix.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added large topology tests for static policy in CPU Manager.
**What this PR does / why we need it**: This PR adds a very large topology test case for the CPU Manager feature.
Related to #51180.
CC @ConnorDoyle
Automatic merge from submit-queue (batch tested with PRs 50949, 52155, 52175, 52112, 52188)
Allow watch cache to be disabled per type
Currently setting watch cache size for a given resource does not disable
the watch cache. This commit adds a new `default-watch-cache-size` flag
to map to the existing field, and refactors how watch cache sizes are
calculated to bring all of the code into one place. It also adds debug
logging to startup to allow us to verify watch cache enablement in
production.
Part of #51825
Will allow watch cache to be disabled selectively.
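The resulting semantics, sketched with illustrative names: an explicit per-resource size of 0 disables the watch cache for that resource, while unlisted resources fall back to the default.
```go
package cachesize

// watchCacheSize resolves the effective cache size for a resource: an
// explicit 0 disables the watch cache, absent entries use the default.
func watchCacheSize(overrides map[string]int, resource string, defaultSize int) (enabled bool, size int) {
	if s, ok := overrides[resource]; ok {
		return s > 0, s
	}
	return defaultSize > 0, defaultSize
}
```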
Automatic merge from submit-queue
Add German translation for kubectl
**What this PR does / why we need it**:
This PR provides a first attempt at translating kubectl into German (related to #40645, #45573, #45562, #40591, #46559, #50155).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
No issues
**Special notes for your reviewer**:
This PR needs German speakers to assist in the review. I'm a native German speaker with a BSc in Business Information Technology.
**Release note**:
```release-note
Adding German translation for kubectl
```
Automatic merge from submit-queue
ScaleIO - Specify SDC GUID value via node label
**What this PR does / why we need it**:
This is a ScaleIO plugin volume PR to do the following:
- Reads the node label `scaleio.sdcGuid` value for the SDC GUID
- Uses the value to look up the ScaleIO SDC `instance ID`
- If the label is not found, falls back to the current method of instance ID lookup
This enhancement allows the ScaleIO plugin to work properly even if the drv_cfg binary is not installed on the kubelet node.
**Special Notes**
Associated issue: #51537. Closes #51537.
```release-note
The ScaleIO volume plugin can now read the SDC GUID value from the node label `scaleio.sdcGuid`. If the `drv_cfg` binary is not installed, the plugin will still work properly; if the node label is not found, it falls back to `drv_cfg` when installed.
```
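The lookup order described above, sketched; the label key comes from the PR text and the fallback hook is hypothetical:
```go
package scaleio

// sdcGUID prefers the node label and falls back to querying the local
// drv_cfg binary only when the label is absent.
func sdcGUID(nodeLabels map[string]string, queryDrvCfg func() (string, error)) (string, error) {
	if guid, ok := nodeLabels["scaleio.sdcGuid"]; ok {
		return guid, nil
	}
	return queryDrvCfg()
}
```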
Automatic merge from submit-queue
Fix deployment timeout reporting
If the previous condition has been a successful rollout then we
shouldn't try to estimate any progress. Scenario:
* progressDeadlineSeconds is smaller than the difference between
now and the time the last rollout finished in the past.
* the creation of a new ReplicaSet triggers a resync of the
Deployment prior to the cached copy of the Deployment getting
updated with the status.condition that indicates the creation
of the new ReplicaSet.
The Deployment will be resynced and eventually its Progressing
condition will catch up with the state of the world.
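A hedged sketch of the guard, with simplified types; `NewReplicaSetAvailable` is the reason the deployment controller records after a completed rollout:
```go
package deployment

// condition is a simplified stand-in for a DeploymentCondition.
type condition struct {
	Type   string
	Reason string
}

// skipProgressCheck reports whether progressDeadlineSeconds should not
// be evaluated: a Progressing condition whose reason records a finished
// rollout must not be mistaken for a stalled one.
func skipProgressCheck(progressing *condition) bool {
	return progressing != nil &&
		progressing.Type == "Progressing" &&
		progressing.Reason == "NewReplicaSetAvailable"
}
```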
Fixes https://github.com/kubernetes/kubernetes/issues/49637
I will also cherry-pick this back to 1.7.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51900, 51782, 52030)
apiservers: stratify versioned informer construction
The versioned shared informer factory has been part of the GenericApiServer config,
but its construction depended on other fields of that config (e.g. the loopback
client config). Hence, the order of changes to the config mattered.
This PR stratifies this by moving the SharedInformerFactory from the generic Config
to the CompletedConfig struct. Hence, it is only filled during completion, when it is
guaranteed that the loopback client config is set.
While doing this, the CompletedConfig construction is made more type-safe again,
i.e. the use of SkipCompletion() is considerably reduced. This is achieved by
splitting the derived apiserver configs into the GenericConfig and the ExtraConfig
part. The completion is then structural again, because CompletedConfig has the same
shape everywhere: a generic CompletedConfig plus a locally completed ExtraConfig.
Fixes #50661.
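The resulting shape, sketched with simplified config types (the `client-go` constructors are real; the fields are illustrative):
```go
package server

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Config no longer holds the informer factory directly.
type Config struct {
	LoopbackClientConfig *rest.Config
}

// CompletedConfig gains it during completion, when the loopback client
// config is guaranteed to be set.
type CompletedConfig struct {
	*Config
	SharedInformerFactory informers.SharedInformerFactory
}

func (c *Config) Complete() CompletedConfig {
	client := kubernetes.NewForConfigOrDie(c.LoopbackClientConfig)
	return CompletedConfig{
		Config:                c,
		SharedInformerFactory: informers.NewSharedInformerFactory(client, 0),
	}
}
```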