Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
Added node to persistent-volume-binder clusterrole
**What this PR does / why we need it**: Added missing permission to volume-binder clusterrole
**Which issue this PR fixes**: fixes #46770
**Special notes for your reviewer**: None
**Release note**: None
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
Add iptables lock-file mount to kube-proxy manifest
**What this PR does / why we need it**: kube-proxy is broken in `make bazel-release`. The new iptables binary uses a lock file in `/run`, but the directory doesn't exist. This causes iptables-restore to fail. We need to share the same lock file among all containers, so mount the host's `/run` dir.
This is similar to #46132 but expediency matters, since builds are broken.
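For illustration only, a minimal Go sketch (using `k8s.io/api/core/v1` types; the actual change is to the static kube-proxy manifest, and the helper name is hypothetical) of the hostPath volume and mount that expose the host's `/run` directory, so all containers contend on the same iptables lock file:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// buildRunMount returns a hostPath volume and matching mount that expose the
// host's /run directory (where the iptables lock file lives) inside the
// kube-proxy container, so all containers share the same lock file.
func buildRunMount() (v1.Volume, v1.VolumeMount) {
	vol := v1.Volume{
		Name: "run",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: "/run"},
		},
	}
	mount := v1.VolumeMount{
		Name:      "run",
		MountPath: "/run",
	}
	return vol, mount
}

func main() {
	vol, mount := buildRunMount()
	fmt.Printf("volume=%s hostPath=%s mountPath=%s\n", vol.Name, vol.HostPath.Path, mount.MountPath)
}
```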
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46103
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
Improve code coverage for pkg/kubelet/images/image_gc_manager
**What this PR does / why we need it**:
#39559 #40780
Code coverage increased from 74.5% to 77.4%.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)
OpenAPI aggregation for kube-aggregator
This PR implements the OpenAPI aggregation layer for kube-aggregator. On each API registration, it tries to download the swagger spec of the user API server. On failure it will try again the next time (either on another add or on a get to /swagger.* on the aggregator server), up to five times. To merge specs, it first removes all unrelated paths from the downloaded spec (anything other than the group/version of the API service) and then removes all unused definitions. Adding paths is straightforward since they won't conflict, but definitions most probably will. To resolve that, we reuse any definition that has not changed (documentation changes are fine) and rename the definition otherwise.
To use this PR, the kube-aggregator should have nonResourceURLs permission (for the get verb) on the user API server.
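To make the merge strategy above concrete, here is a heavily simplified Go sketch: plain maps stand in for the real OpenAPI types, unused-definition pruning is omitted, and the equality check does not ignore documentation-only changes. All names are illustrative.

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// spec is a deliberately simplified stand-in for an OpenAPI document:
// paths and named definitions only.
type spec struct {
	Paths       map[string]interface{}
	Definitions map[string]interface{}
}

// mergeAPIServiceSpec folds a downloaded API-service spec into the aggregate
// spec: keep only paths under the service's group/version prefix, reuse
// unchanged definitions, and rename conflicting ones with a numeric suffix.
func mergeAPIServiceSpec(agg, svc *spec, gvPrefix string) {
	for path, item := range svc.Paths {
		if !strings.HasPrefix(path, gvPrefix) {
			continue // unrelated path; drop it
		}
		agg.Paths[path] = item // paths never conflict across group/versions
	}
	for name, def := range svc.Definitions {
		existing, ok := agg.Definitions[name]
		if !ok || reflect.DeepEqual(existing, def) {
			agg.Definitions[name] = def // new or unchanged: reuse as-is
			continue
		}
		// Conflict: same name, different schema. Rename the incoming one.
		for i := 2; ; i++ {
			renamed := fmt.Sprintf("%s_v%d", name, i)
			if _, taken := agg.Definitions[renamed]; !taken {
				agg.Definitions[renamed] = def
				break
			}
		}
	}
}

func main() {
	agg := &spec{Paths: map[string]interface{}{}, Definitions: map[string]interface{}{}}
	svc := &spec{
		Paths:       map[string]interface{}{"/apis/example.io/v1/widgets": "...", "/healthz": "..."},
		Definitions: map[string]interface{}{"Widget": map[string]string{"type": "object"}},
	}
	mergeAPIServiceSpec(agg, svc, "/apis/example.io/v1/")
	fmt.Println(len(agg.Paths), len(agg.Definitions)) // 1 1
}
```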
```release-note
Support OpenAPI spec aggregation for kube-aggregator
```
fixes: #43717
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)
PD e2e test: Ready node check now uses the most up-to-date node count.
Follow-up to PR #46746
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)
Implement kubectl rollout undo and history for DaemonSet
~Depends on #45924, only the 2nd commit needs review~ (merged)
Ref https://github.com/kubernetes/community/pull/527/
TODOs:
- [x] kubectl rollout history
- [x] sort controller history, print overview (with revision number and change cause)
- [x] print detail view (content of a history)
- [x] print template
- [x] ~(do we need to?) print labels and annotations~
- [x] kubectl rollout undo:
- [x] list controller history, figure out which revision to rollback to
- if toRevision == 0, roll back to the latest revision; otherwise choose the history with the matching revision (see the sketch after this list)
- [x] update the ds using the history to rollback to
- [x] replace the ds template with history's
- [x] ~(do we need to?) replace the ds labels and annotations with history's~
- [x] test-cmd.sh
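As referenced above, a minimal Go sketch of the revision-selection rule for `kubectl rollout undo`, using apps/v1 ControllerRevision; the helper name and the exact sorting are illustrative:

```go
package main

import (
	"fmt"
	"sort"

	appsv1 "k8s.io/api/apps/v1"
)

// revisionToRollbackTo picks the ControllerRevision to roll back to:
// toRevision == 0 means the latest revision in the history; otherwise the
// history whose .Revision matches toRevision.
func revisionToRollbackTo(history []*appsv1.ControllerRevision, toRevision int64) (*appsv1.ControllerRevision, error) {
	if len(history) == 0 {
		return nil, fmt.Errorf("no history found")
	}
	sort.Slice(history, func(i, j int) bool { return history[i].Revision < history[j].Revision })
	if toRevision == 0 {
		return history[len(history)-1], nil
	}
	for _, h := range history {
		if h.Revision == toRevision {
			return h, nil
		}
	}
	return nil, fmt.Errorf("revision %d not found", toRevision)
}

func main() {
	history := []*appsv1.ControllerRevision{{Revision: 1}, {Revision: 3}, {Revision: 2}}
	if rev, err := revisionToRollbackTo(history, 0); err == nil {
		fmt.Println("rolling back to revision", rev.Revision) // 3
	}
}
```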
@kubernetes/sig-apps-pr-reviews @erictune @kow3ns @lukaszo @kargakis @kubernetes/sig-cli-maintainers
---
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)
Enable some pod-related admission plugins for kubemark
Ref https://github.com/kubernetes/kubernetes/issues/44701
This should help reduce the discrepancy in "list pods" latency with respect to a real cluster. Let's see.
/cc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)
Fix some comments in dnsprovider
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
delete the useless "gv" in Errorf
Signed-off-by: yupengzte <yu.peng36@zte.com.cn>
**What this PR does / why we need it**:
Fix "no formatting directive in Errorf call"
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 43852, 44255)
Bump github.com/mitchellh/mapstructure
**What this PR does / why we need it**:
This PR bumps the revision of github.com/mitchellh/mapstructure.
The library is required by Gophercloud, which has passed its tests with the newer revision.
So, since Gophercloud is updated, please also update this library.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
Add SuccessfulMountVolume message to the events of pod
**What this PR does / why we need it:**
When creating a pod with a volume, the volume mount may fail at first but eventually succeed after several retries. `kubectl describe pod` only shows the failure messages, so I think it would be better to add the SuccessfulMountVolume message to the pod events too.
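A minimal sketch of what emitting that event could look like with the client-go event recorder (the helper name is hypothetical; the real change lives in the kubelet's volume machinery):

```go
package events

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// recordMountSuccess emits a Normal event on the pod once a volume mount
// finally succeeds, mirroring the existing failure events.
func recordMountSuccess(recorder record.EventRecorder, pod *v1.Pod, volumeName string) {
	recorder.Eventf(pod, v1.EventTypeNormal, "SuccessfulMountVolume",
		"MountVolume.SetUp succeeded for volume %q", volumeName)
}
```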
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #42867
This is part of the namespace deletion big hammer. `kubefed join` not
only creates the federation-system namespace, but also cluster roles and
cluster role bindings in the joining clusters. Sometimes unjoin fails
to delete them, so we use a big hammer here to delete them.
This smells like a real problem in kubefed and needs investigation.
This is a short-term fix to unblock the submit queue.
This is a big hammer. `kubefed join` creates the federation-system namespace
in the joining clusters if it doesn't already exist. This namespace
usually exists in the host cluster and hence cannot be deleted while
unjoining. So in order to be safe, we don't delete the federation-system
namespace from any federated cluster while unjoining them. This causes
a problem in our test environment if certain resources are left in the
namespace. Therefore we delete the federation-system namespace in
all the clusters.
PV is a non-namespaced resource. Running `kubectl delete pv --all`, even
with `--namespace`, is going to delete all the PVs in the cluster. This
is a dangerous operation and PVs should not be deleted this way.
Instead we now retrieve the PVs bound to the PVCs in the namespace we
are deleting and delete only those PVs.
Fixes issue #46380.
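A sketch of that targeted cleanup with client-go (package and function names are hypothetical; modern client-go signatures are shown):

```go
package e2eutil

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteBoundPVs deletes only the PersistentVolumes bound to PVCs in the
// given namespace, instead of the indiscriminate `kubectl delete pv --all`.
func deleteBoundPVs(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	pvcs, err := cs.CoreV1().PersistentVolumeClaims(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pvc := range pvcs.Items {
		if pvc.Spec.VolumeName == "" {
			continue // not bound yet, nothing to delete
		}
		if err := cs.CoreV1().PersistentVolumes().Delete(ctx, pvc.Spec.VolumeName, metav1.DeleteOptions{}); err != nil {
			return fmt.Errorf("deleting PV %q: %v", pvc.Spec.VolumeName, err)
		}
	}
	return nil
}
```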
Automatic merge from submit-queue
Respect PDBs during node upgrades and add test coverage to the ServiceTest upgrade test.
This is still a WIP... needs to be squashed at least, and I don't think it's currently passing until I increase the scale of the RC, but please have a look at the general outline. Thanks!
Fixes #38336
@kow3ns @bdbauer @krousey @erictune @maisem @davidopp
```release-note
On GCE, node upgrades will now respect PodDisruptionBudgets, if present.
```
Automatic merge from submit-queue
Delete all dead containers and sandboxes when under disk pressure.
This PR modifies the eviction manager to add dead container and sandbox garbage collection as a resource reclaim function for disk. It also modifies the container GC logic to allow containers of pods that are terminated, but not deleted, to be removed.
It still does not delete containers that are younger than the minGcAge. This should prevent nodes from entering a permanently bad state if the entire disk is occupied by pods that are terminated (in the Failed or Succeeded state) but not deleted.
There are two improvements we should consider making in the future:
- Track the disk space and inodes reclaimed by deleting containers. We currently do not track this, and it prevents us from determining if deleting containers resolves disk pressure. So we may still evict a pod even if we are able to free disk space by deleting dead containers.
- Once we can track disk space and inodes reclaimed, we should consider only deleting the containers we need to in order to relieve disk pressure. This should help avoid a scenario where we try and delete a massive number of containers all at once, and overwhelm the runtime.
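For reference, a very simplified model of the reclaim hook described above (types and names are illustrative, not the kubelet's actual eviction types):

```go
package main

import "fmt"

// reclaimFunc models, in a very simplified way, a resource-reclaim hook the
// eviction manager can invoke before falling back to evicting pods.
type reclaimFunc func() error

// reclaimDiskBeforeEviction runs the registered reclaim functions for disk
// (delete dead containers, then dead sandboxes) in order.
func reclaimDiskBeforeEviction(funcs []reclaimFunc) error {
	for _, f := range funcs {
		if err := f(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	deleteDeadContainers := func() error { fmt.Println("GC: removing terminated containers"); return nil }
	deleteDeadSandboxes := func() error { fmt.Println("GC: removing dead sandboxes"); return nil }
	_ = reclaimDiskBeforeEviction([]reclaimFunc{deleteDeadContainers, deleteDeadSandboxes})
}
```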
/assign @vishh
cc @derekwaynecarr
```release-note
Disk Pressure triggers the deletion of terminated containers on the node.
```
Automatic merge from submit-queue
Delete meaningless check
**What this PR does / why we need it**:
Delete meaningless check
The deleted check is redundant.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46681, 46786, 46264, 46680, 46805)
Enable Dialer on the Aggregator
Centralize the creation of the dialer during startup.
The dialer is then passed in to both the APIServer and the Aggregator.
The Aggregator then uses the dialer as its Transport base.
**What this PR does / why we need it**: Enables the Aggregator to use the Dialer/SSHTunneler to connect to the user-apiserver.
**Which issue this PR fixes**: fixes #46679
**Special notes for your reviewer**:
**Release note**: None
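A minimal Go sketch of the wiring described above (type and function names are illustrative; only the standard library is used):

```go
package aggregator

import (
	"context"
	"net"
	"net/http"
)

// dialFunc is the shape of the dialer created once at startup (for example
// the SSH tunneler's Dial); the name is illustrative.
type dialFunc func(ctx context.Context, network, addr string) (net.Conn, error)

// newProxyTransport wires the shared dialer into the transport the
// aggregator uses to reach user API servers.
func newProxyTransport(dial dialFunc) *http.Transport {
	return &http.Transport{
		DialContext: dial, // route aggregator->apiserver traffic through the shared dialer
	}
}
```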
Automatic merge from submit-queue (batch tested with PRs 46681, 46786, 46264, 46680, 46805)
Add annotation for image policy webhook fail open.
**What this PR does / why we need it**: there's no good way to produce an audit log entry if binary verification fails open. Adding an annotation can solve that and provide a useful tool to audit [non-malicious] containers.
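A minimal sketch of how the admission plugin could tag such pods (the helper name is hypothetical; the annotation key is the one from the release note below):

```go
package imagepolicy

import (
	v1 "k8s.io/api/core/v1"
)

// failedOpenAnnotation marks pods admitted while the image policy webhook
// errored and the policy was configured to fail open.
const failedOpenAnnotation = "alpha.image-policy.k8s.io/failed-open"

// markFailedOpen tags a pod so audit tooling can later find workloads
// admitted without a successful image-policy check.
func markFailedOpen(pod *v1.Pod) {
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations[failedOpenAnnotation] = "true"
}
```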
**Release note**: add the annotation "alpha.image-policy.k8s.io/failed-open=true" to pods created when the image policy webhook fails open.
```release-note
Add the `alpha.image-policy.k8s.io/failed-open=true` annotation when the image policy webhook encounters an error and fails open.
```
Automatic merge from submit-queue (batch tested with PRs 46681, 46786, 46264, 46680, 46805)
Added sig leads alias to OWNERS_ALIASES
**What this PR does / why we need it**:
Add a sig leads section to OWNERS_ALIASES so sig leads without maintainers access can add status labels to issues (important for the 1.7 milestone).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Let me know if you know of additional sig leads that should be added.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46681, 46786, 46264, 46680, 46805)
Fix for-loop and err definition
**What this PR does / why we need it**:
We can use j directly; it's odd to use i and then get j through i.
We can move the err definition into the if block, since the variable is only used there.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 40760, 46706, 46783, 46742, 46751)
Pre-generate SNI test certs
Pre-generates test certs for SNI tests, since doing this dynamically can take a very long time in entropy-starved or CPU-bound test environments (like in a container).
Automatic merge from submit-queue (batch tested with PRs 40760, 46706, 46783, 46742, 46751)
complete the controller context for init funcs
This completes the conversion to initFuncs for controller initialization, to make it easier and more manageable to add them.
Automatic merge from submit-queue (batch tested with PRs 40760, 46706, 46783, 46742, 46751)
Fix unit test for kubectl create role
When the expected error is not nil but no error actually occurs, we should report an error in the unit test.
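A minimal sketch of the pattern being fixed (names are illustrative):

```go
package create

import "testing"

// checkErr fails the test both when an unexpected error occurs and when an
// expected error never happens.
func checkErr(t *testing.T, name string, expectErr bool, err error) {
	t.Helper()
	if err != nil && !expectErr {
		t.Errorf("%s: unexpected error: %v", name, err)
	}
	if err == nil && expectErr {
		t.Errorf("%s: expected error, got none", name)
	}
}
```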
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Implement Daemonset history
~Depends on #45867 (the 1st commit, ignore it when reviewing)~ (already merged)
Ref https://github.com/kubernetes/community/pull/527/ and https://github.com/kubernetes/community/pull/594
@kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews @erictune @kow3ns @lukaszo @kargakis
---
TODOs:
- [x] API changes
- [x] (maybe) Remove rollback subresource if we decide to do client-side rollback
- [x] deployment controller
- [x] controller revision (see the sketch below)
- [x] owner ref (claim & adoption)
- [x] history reconstruct (put revision number, hash collision avoidance)
- [x] de-dup history and relabel pods
- [x] compare ds template with history
- [x] hash labels (put it in controller revision, pods, and maybe deployment)
- [x] clean up old history
- [x] Rename status.uniquifier when we reach consensus in #44774
- [x] e2e tests
- [x] unit tests
- [x] daemoncontroller_test.go
- [x] update_test.go
- [x] ~(maybe) storage_test.go // if we do server side rollback~
kubectl part is in #46144
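As referenced in the TODO list above, a simplified Go sketch of snapshotting the DaemonSet pod template into a ControllerRevision; revision numbering, hashing, owner refs and collision handling are omitted, and names are illustrative:

```go
package history

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// newRevision stores a snapshot of the DaemonSet's pod template in a
// ControllerRevision so it can later be compared against and rolled back to.
func newRevision(ds *appsv1.DaemonSet, revision int64, hash string) (*appsv1.ControllerRevision, error) {
	patch, err := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{"template": ds.Spec.Template},
	})
	if err != nil {
		return nil, err
	}
	return &appsv1.ControllerRevision{
		ObjectMeta: metav1.ObjectMeta{
			Name:      fmt.Sprintf("%s-%s", ds.Name, hash),
			Namespace: ds.Namespace,
			Labels:    map[string]string{"controller-revision-hash": hash},
		},
		Data:     runtime.RawExtension{Raw: patch},
		Revision: revision,
	}, nil
}
```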
---
**Release note**:
```release-note
```
Automatic merge from submit-queue
reset resultRun on pod restart
xref https://bugzilla.redhat.com/show_bug.cgi?id=1455056
There is currently an issue where, if the pod is restarted due to liveness probe failures exceeding failureThreshold, the failure count is not reset on the probe worker. When the pod restarts, if the liveness probe fails even once, the pod is restarted again, not honoring failureThreshold on the restart.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      timeoutSeconds: 1
      periodSeconds: 3
      successThreshold: 1
      failureThreshold: 5
  terminationGracePeriodSeconds: 0
```
Before this PR:
```
$ kubectl create -f busybox-probe-fail.yaml
pod "busybox" created
$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 4s
busybox 1/1 Running 1 24s
busybox 1/1 Running 2 33s
busybox 0/1 CrashLoopBackOff 2 39s
```
After this PR:
```
$ kubectl create -f busybox-probe-fail.yaml
$ kubectl get pod -w
NAME READY STATUS RESTARTS AGE
busybox 0/1 ContainerCreating 0 2s
busybox 1/1 Running 0 4s
busybox 1/1 Running 1 27s
busybox 1/1 Running 2 45s
```
```release-note
Fix kubelet to reset the liveness probe failure count across pod restart boundaries
```
Restarts now happen at even intervals.
@derekwaynecarr
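A drastically simplified sketch of the reset described above (field and method names are illustrative, not the kubelet prober's actual ones):

```go
package prober

// worker is a heavily simplified model of the kubelet probe worker; only the
// fields needed to illustrate the fix are shown.
type worker struct {
	containerID string // ID of the container currently being probed
	resultRun   int    // consecutive identical probe results seen so far
}

// observeContainer resets the consecutive-result counter whenever the probed
// container is a new instance (i.e. the pod/container restarted), so a single
// post-restart failure no longer trips failureThreshold immediately.
func (w *worker) observeContainer(currentID string) {
	if currentID != w.containerID {
		w.containerID = currentID
		w.resultRun = 0 // fresh container: start counting from scratch
	}
}
```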
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
Do not log the content of pod manifest if parsing fails.
**What this PR does / why we need it**:
- ~~only accepts text/plain config file~~
- ~~not log config file content when it's invalid~~
Do not log the content of pod manifest if parsing fails.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46493
**Special notes for your reviewer**:
/cc @yujuhong
@sig-node-reviewers
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
Fix inconsistency in finding cni binaries
Fixes [#46476]
Signed-off-by: Abhinav Dahiya <abhinav.dahiya@coreos.com>
**What this PR does / why we need it**:
This fixes the inconsistency in finding the appropriate CNI binaries.
Currently the `lo` cniNetwork follows vendorCniDir > binDir, whereas the default for all others is binDir > vendorCniDir. This PR makes vendorCniDir > binDir the default behavior.
**Why we need it**:
Hyperkube right now ships CNI binaries in /opt/cni/bin.
To use the latest version of Calico you need to override the kubelet's /opt/cni/bin from the host, which means all other CNI plugins (flannel, loopback, etc.) have to be mounted from the host too. Keeping the vendor dir higher in the search order allows easy installation of newer plugin versions.
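A simplified sketch of the lookup-order change (directory layout and names are illustrative):

```go
package cni

import (
	"os"
	"path/filepath"
)

// lookupPlugin consults the vendor CNI directory before the generic bin
// directory for every plugin, not just "lo".
func lookupPlugin(pluginName, vendorCNIDirPrefix, binDir string) (string, bool) {
	vendorDir := filepath.Join(vendorCNIDirPrefix, "opt", pluginName, "bin")
	for _, dir := range []string{vendorDir, binDir} { // vendor dir first
		candidate := filepath.Join(dir, pluginName)
		if _, err := os.Stat(candidate); err == nil {
			return candidate, true
		}
	}
	return "", false
}
```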
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
update default translation of annotations
**What this PR does / why we need it**:
Using a local cluster, the help output of kubectl is not correct:
```
# ./cluster/kubectl.sh
.......
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resourcewatch is only supported on individual resources and resource
collections - %d resources were found
completion Output shell completion code for the specified shell (bash or zsh)
Other Commands:
api-versions Print the supported API versions on the server, in the form of "group/version"
config Modify kubeconfig files
help Help about any command
plugin Runs a command-line plugin
version Print the client and server version information
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
```
**Which issue this PR fixes**:
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46782, 46719, 46339, 46609, 46494)
Support custom domains in the cockroachdb example's init container
This switches from using v0.1 of the peer-finder image to a version that
includes https://github.com/kubernetes/contrib/pull/2013
While I'm here, switch the version of cockroachdb from 1.0 to 1.0.1
```release-note
NONE
```
@tschottdorf
Automatic merge from submit-queue
Add some initial resource limits to the ip-masq-agent.
These limits were based on observing the agent over roughly a day; RES was typically ~4M for me, but I'd like to make sure we have some headroom. If there were a huge config map this could increase slightly, but not significantly, since we only allow 64 entries.
```
VmPeak: 11164 kB
VmSize: 11164 kB
VmLck:      0 kB
VmPin:      0 kB
VmHWM:   7652 kB
VmRSS:   4260 kB
VmData:  7612 kB
VmStk:    136 kB
VmExe:   1856 kB
VmLib:      0 kB
VmPTE:     40 kB
VmPMD:     20 kB
VmSwap:     0 kB
```
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)
Fix AppArmor test for docker 1.13
... & better debugging.
The issue is that we run the pod containers in a shared PID namespace with docker 1.13, so PID 1 is no longer the container's root process. Since it's messy to get the container's root process, I switched to using `/proc/self` to read the apparmor profile. While this wouldn't catch a regression that caused only the init process to run with the wrong profile, I think it's a good approximation.
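A minimal sketch of the `/proc/self` approach (the function name is hypothetical):

```go
package apparmor

import (
	"os"
	"strings"
)

// selfProfile reads the AppArmor profile the current process is running
// under via /proc/self, instead of guessing the container's root PID (which
// is no longer 1 when a shared PID namespace is used).
func selfProfile() (string, error) {
	data, err := os.ReadFile("/proc/self/attr/current")
	if err != nil {
		return "", err
	}
	// The kernel reports e.g. "docker-default (enforce)"; keep only the name.
	return strings.TrimSpace(strings.SplitN(string(data), " ", 2)[0]), nil
}
```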
/cc @aulanov @Amey-D
Automatic merge from submit-queue (batch tested with PRs 46620, 46732, 46773, 46772, 46725)
Added missing documentation to NodeInstanceGroup.
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```