Automatic merge from submit-queue (batch tested with PRs 62756, 63862, 61419, 64015, 64063). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Use apps/v1 Deployment/ReplicaSet in controller and kubectl
This updates the Deployment controller and integration/e2e tests to use apps/v1, as part of #55714.
This also requires updating any other components that use the `deployment/util` package, most notably `kubectl`. That means client versions 1.11 and above will only work with server versions 1.9 and above. This is well within our client-server version skew policy of +/-1 minor version.
However, this PR *only* updates the parts of `kubectl` that used `deployment/util`. So although kubectl now requires apps/v1, it still also depends on extensions/v1beta1. Migrating other parts of kubectl to apps/v1 is beyond the scope of this PR, which was just to change the Deployment controller and fix all the fallout.
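For illustration, a minimal client-go sketch of reading Deployments through apps/v1, the group the controller and kubectl now require (an assumption-laden example, not code from this PR; it assumes a recent client-go, whereas the 1.11-era clients take no context argument):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig; the path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Deployments are read through the apps/v1 client, not extensions/v1beta1.
	deps, err := cs.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range deps.Items {
		fmt.Println(d.Name)
	}
}
```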
```release-note
kubectl: This client version requires the `apps/v1` APIs, so it will not work against a cluster version older than v1.9.0. Note that kubectl only guarantees compatibility with clusters that are +/-1 minor version away.
```
Automatic merge from submit-queue (batch tested with PRs 63049, 59731). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
re-enable nodeipam in kube-controller-manager
**What this PR does / why we need it**:
Re-enables the nodeipam controller for external clouds. Also does a small refactor so that we don't need to pass `allocateNodeCidr` into the controller.
In v1.10 we made a change (9187b343e1 (diff-f11913dc67d80d36b3d06a93f61c49cf) in https://github.com/kubernetes/kubernetes/pull/57492) where nodeipam would be disabled for any cluster that sets `--cloud-provider=external`. The original intention was that the nodeipam controller is cloud-specific for some clouds (only GCE at the moment), so it should be moved to the CCM (cloud controller manager). After some discussions with wg-cloud-provider, it makes sense to re-enable the nodeipam controller in the KCM and have the GCE CCM enable its own cloud-specific IPAM controller as part of [Initialize()](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L33-L35). This allows GCE to run nodeipam in both the KCM (by setting --cloud-provider=gce and --allocate-node-cidr) and in the CCM (once implemented in `Initialize()`), without disabling nodeipam in the KCM for all external clouds, and avoids having to implement nodeipam in the CCM.
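For orientation, a rough sketch of the shape this takes on the cloud-provider side. The receiver type, the `runCloudIPAM` helper, and the simplified `Initialize` signature are assumptions for illustration only; the real hook signature is defined by `cloudprovider.Interface` and differs slightly by release:

```go
package gce

import "k8s.io/client-go/kubernetes"

// Cloud stands in for the GCE cloud provider; field and helper names here
// are illustrative assumptions, not the upstream implementation.
type Cloud struct {
	client kubernetes.Interface
}

// Initialize mirrors the cloudprovider.Interface startup hook invoked by the
// CCM. A cloud that needs cloud-specific IPAM can start it here, so nodeipam
// stays enabled in the KCM for everyone else.
func (c *Cloud) Initialize(client kubernetes.Interface, stop <-chan struct{}) {
	c.client = client
	go c.runCloudIPAM(stop) // hypothetical cloud-specific CIDR allocation loop
}

func (c *Cloud) runCloudIPAM(stop <-chan struct{}) {
	// Watch Nodes and allocate pod CIDRs from cloud-managed ranges.
	<-stop
}
```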
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Re-enable nodeipam controller for external clouds.
```
Automatic merge from submit-queue (batch tested with PRs 63526, 60371, 63444). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
update garbage collection to use the new dynamic client
Update GC to use the new and easy to use dynamic client. This is one of two remaining stragglers.
@kubernetes/sig-api-machinery-pr-reviews
@caesarxuchao @ironcladlou
```release-note
NONE
```
https://github.com/kubernetes/kubernetes/pull/62913 switched from using a client pool, where each groupVersionResource got its own rest client, to a single client.
This increases the QPS to account for the increased request volume now going through a single rest client's rate limiter.
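For reference, a minimal sketch of the new dynamic client, which serves every resource through one rate-limited rest client instead of a per-GroupVersionResource client pool (this is an illustrative example, not GC code; it assumes a recent client-go, older releases omit the context argument):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // path is illustrative
	if err != nil {
		panic(err)
	}

	// One dynamic client handles every resource the GC needs to touch.
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	list, err := dyn.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```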
After a cluster is upgraded from K8s 1.10 to K8s 1.11, the finalizer [kubernetes.io/pvc-protection] is added to PVCs,
because the StorageObjectInUseProtection feature becomes GA in K8s 1.11.
However, if the cluster is then downgraded from K8s 1.11 to K8s 1.10, where the StorageObjectInUseProtection feature is disabled,
the finalizers remain on the PVCs. Because the pvc-protection-controller is not started in K8s 1.10, the finalizers
are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system
and remain in the Terminating phase.
The same applies to the pv-protection-controller and the [kubernetes.io/pv-protection] finalizer on PVs.
That's why the pvc-protection-controller is now always started: it removes finalizers
from PVCs automatically once a PVC is no longer in active use by any pod.
Likewise, the pv-protection-controller is always started so that it removes finalizers from PVs automatically once a PV is no longer
bound to a PVC.
Related issue: https://github.com/kubernetes/kubernetes/issues/60764
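For context, a minimal client-go sketch of what removing the protection finalizer amounts to once a claim is no longer in use (illustrative only, not the controller's actual code; it assumes a recent client-go where calls take a context):

```go
package protectionutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeProtectionFinalizer strips kubernetes.io/pvc-protection from a PVC,
// roughly what the pvc-protection-controller does once no pod uses the claim.
func removeProtectionFinalizer(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Keep every finalizer except the protection one.
	kept := pvc.Finalizers[:0]
	for _, f := range pvc.Finalizers {
		if f != "kubernetes.io/pvc-protection" {
			kept = append(kept, f)
		}
	}
	pvc.Finalizers = kept
	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(ctx, pvc, metav1.UpdateOptions{})
	return err
}
```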
Automatic merge from submit-queue (batch tested with PRs 61306, 60270, 62496, 62181, 62234). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
split up the huge set of flags into smaller option structs
**What this PR does / why we need it**:
To make this generic, we do the following work:
1. Split `KubeControllerManagerConfiguration` in kube-controller-manager and cloud-controller-manager into smaller option structs, organized by controller, and adjust the related flags. Also part of #59483.
2. Split `componentconfig` in controller-manager into smaller configs, organized by controller as well.
All of this work follows #59582, using the `option+config` pattern (a sketch follows below).
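A minimal sketch of the `option+config` pattern, using an illustrative nodeipam-style controller. The struct and field names below are assumptions for illustration, not the real types under `cmd/*/app/options`:

```go
package options

import "github.com/spf13/pflag"

// NodeIPAMControllerOptions holds only this controller's flags.
type NodeIPAMControllerOptions struct {
	ServiceCIDR      string
	NodeCIDRMaskSize int32
}

// AddFlags registers just the flags this controller owns.
func (o *NodeIPAMControllerOptions) AddFlags(fs *pflag.FlagSet) {
	fs.StringVar(&o.ServiceCIDR, "service-cluster-ip-range", o.ServiceCIDR, "CIDR range for services.")
	fs.Int32Var(&o.NodeCIDRMaskSize, "node-cidr-mask-size", o.NodeCIDRMaskSize, "Mask size for node CIDRs.")
}

// ApplyTo copies the validated options into the matching config struct.
func (o *NodeIPAMControllerOptions) ApplyTo(cfg *NodeIPAMControllerConfiguration) error {
	cfg.ServiceCIDR = o.ServiceCIDR
	cfg.NodeCIDRMaskSize = o.NodeCIDRMaskSize
	return nil
}

// NodeIPAMControllerConfiguration is the corresponding componentconfig piece.
type NodeIPAMControllerConfiguration struct {
	ServiceCIDR      string
	NodeCIDRMaskSize int32
}
```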
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Thanks to some great sleuthing by ikruglov!
kube-controller-manager defaults --leader-elect to true. We should
do the same for kube-scheduler. kube-scheduler used to have this
set to true, but it got lost during refactoring in:
efb2bb71cd
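A minimal sketch of what restoring the default looks like (names are illustrative, using pflag as the component flag libraries do; this is not the refactored scheduler code itself):

```go
package options

import "github.com/spf13/pflag"

// Options holds the scheduler's leader-election setting; names are illustrative.
type Options struct {
	LeaderElect bool
}

// AddFlags wires the flag with the default restored to true, matching
// kube-controller-manager's behavior.
func (o *Options) AddFlags(fs *pflag.FlagSet) {
	o.LeaderElect = true
	fs.BoolVar(&o.LeaderElect, "leader-elect", o.LeaderElect,
		"Start a leader election client and gain leadership before executing the main loop.")
}
```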
Automatic merge from submit-queue (batch tested with PRs 60455, 61365, 61375, 61597, 61491). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Check apps/v1 StatefulSet available before starting its controller
**What this PR does / why we need it**: StatefulSet controller was bumped to use `apps/v1.StatefulSet` already. Without this change, StatefulSet controller will continue to work, but will be broken when `apps/v1beta2.StatefulSet` is removed or disabled.
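For illustration, a discovery-based availability check along these lines (a sketch with an assumed function name; the real controller-manager works from its shared discovery results rather than a one-off call):

```go
package app

import "k8s.io/client-go/discovery"

// statefulSetAPIAvailable reports whether the server serves apps/v1
// statefulsets, so the controller only starts when its API exists.
func statefulSetAPIAvailable(d discovery.DiscoveryInterface) (bool, error) {
	resources, err := d.ServerResourcesForGroupVersion("apps/v1")
	if err != nil {
		return false, err // a server that does not serve apps/v1 at all also lands here
	}
	for _, r := range resources.APIResources {
		if r.Name == "statefulsets" {
			return true, nil
		}
	}
	return false, nil
}
```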
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60376, 55584, 60358, 54631, 60291). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
cloud-controller-manager: use /healthz to wait for the apiserver to be healthy
**What this PR does / why we need it**:
Currently, cloud-controller-manager uses `restclient.ServerAPIVersions()` to wait for the apiserver to be healthy.
This removes ServerAPIVersions and uses /healthz instead, as all other components do.
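Roughly, the /healthz wait looks like the sketch below (the helper name is assumed, and the real code lives in the controller-manager startup path; recent client-go request methods take a context):

```go
package app

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
)

// waitForAPIServer polls /healthz until the apiserver reports ok, the way
// other components wait instead of calling ServerAPIVersions().
func waitForAPIServer(cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
		if err == nil && string(body) == "ok" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver did not report healthy within %v", timeout)
}
```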
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60288
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 60054, 60202, 60219, 58090, 60275). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fixes for HTTP/2 max streams per connection setting
**What this PR does / why we need it**:
This PR makes two changes. One is to introduce a parameter
for the HTTP/2 setting that an api-server sends to its clients
telling them how many streams they may have concurrently open in
an HTTP/2 connection. If left at its default value of zero,
this means to use the default in golang's HTTP/2 code (which
is currently 250; see https://github.com/golang/net/blob/master/http2/server.go).
The other change is to make the recommended options for an aggregated
api-server set this limit to 1000. The limit of 250 is annoyingly low
for the use case of many controllers watching objects of Kinds served
by an aggregated api-server reached through the main api-server (in
its mode as a proxy for the aggregated api-server, in which it uses a
single HTTP/2 connection for all calls proxied to that aggregated
api-server).
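For a sense of the knob being plumbed through, here is the underlying golang.org/x/net/http2 setting the new flag controls (a standalone sketch, not the apiserver wiring; the port and cert paths are illustrative):

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{Addr: ":8443", Handler: http.DefaultServeMux}

	// The new flag plumbs a value like this into the server's HTTP/2
	// configuration; 0 keeps golang's built-in default (currently 250).
	if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 1000}); err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```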
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #60042
**Special notes for your reviewer**:
**Release note**:
```release-note
Introduced `--http2-max-streams-per-connection` command line flag on api-servers and set default to 1000 for aggregated API servers.
```
Automatic merge from submit-queue (batch tested with PRs 59286, 59743, 59883, 60190, 60165). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
DaemonSet Controller and tests to apps/v1
**What this PR does / why we need it**:
Updates the DaemonSet controller, its integration tests, and its e2e tests to use the apps/v1 API.
**Release note**:
```release-note
The DaemonSet controller, its integration tests, and its e2e tests, have been updated to use the apps/v1 API.
```
Automatic merge from submit-queue (batch tested with PRs 59463, 59719, 60181, 58283, 59966). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Component pprof profiling makes use of the existing genericapiserver profiling
**What this PR does / why we need it**:
Fixes #60278
Instead of each component wiring up its own private pprof handlers, all components now make use of the generic apiserver's existing profiling support.
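The profiling routes the generic apiserver already provides boil down to the standard net/http/pprof handlers; below is a stdlib sketch of the kind of wiring components no longer need to carry themselves (the port is illustrative):

```go
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()

	// Equivalent handlers are installed by the generic apiserver's
	// profiling routes when profiling is enabled.
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	log.Fatal(http.ListenAndServe("localhost:10252", mux))
}
```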
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Update bazelbuild/rules_go, kubernetes/repo-infra, and gazelle dependencies
**What this PR does / why we need it**: updates our bazelbuild/rules_go dependency in order to bump everything to go1.9.4. I'm separating this effort into two separate PRs, since updating rules_go requires a large cleanup, removing an attribute from most build rules.
**Release note**:
```release-note
NONE
```
With d7ddcca231, we lost the logging
of the flags. We should at least log the command-line flags
used to start processes, as those are incredibly useful for troubleshooting.
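A minimal stdlib sketch of the kind of flag logging being restored (the real code uses pflag and the component logger; the `--verbose` flag here is only an example):

```go
package main

import (
	"flag"
	"log"
)

func main() {
	verbose := flag.Bool("verbose", false, "enable verbose output")
	flag.Parse()

	// Log every flag and its effective value at startup so the invocation
	// can be reconstructed when troubleshooting.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value)
	})

	_ = verbose
}
```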
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove CSI plugin from ProbeExpandableVolumePlugins
Add CSI plugin when feature gate is enabled
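A sketch of what the gated registration looks like (the function name is assumed and the package paths follow the in-tree layout of that era; this is illustrative, not the exact upstream diff):

```go
package app

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/kubernetes/pkg/features"
	"k8s.io/kubernetes/pkg/volume"
	"k8s.io/kubernetes/pkg/volume/csi"
)

// appendCSIIfEnabled adds the CSI volume plugin only when its feature gate
// is on, instead of always including it in the expandable-plugin probe.
func appendCSIIfEnabled(allPlugins []volume.VolumePlugin) []volume.VolumePlugin {
	if utilfeature.DefaultFeatureGate.Enabled(features.CSIPersistentVolume) {
		allPlugins = append(allPlugins, csi.ProbeVolumePlugins()...)
	}
	return allPlugins
}
```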
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/sig storage
/assign @vladimirvivien
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Remove controller-manager --service-sync-period flag
**What this PR does / why we need it**:
This PR removes the controller-manager `--service-sync-period` flag, which is not used anywhere in the code and causes confusion.
**Which issue(s) this PR fixes**
https://github.com/kubernetes/kubernetes/issues/58776
**Special notes for your reviewer**:
@deads2k this removes the flag as per the discussion on #58776
2 commits:
1. one for the code change
2. one for the auto-generated code
**Release note**:
```release-note
The controller-manager `--service-sync-period` flag has been removed (it was never used in the code).
```
Add a separate method in a new file for creating cloud providers.
Currently the code is all mixed into the controller manager. We
should actively control what is made available to the cloud provider,
so list explicitly the params it needs and move the code out. This will
keep stray linkages from sneaking in, since we will catch them more easily during reviews.
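A hedged sketch of the idea, with a hypothetical helper name and parameter list (the real factory lives in the controller-manager tree and takes different arguments):

```go
package app

import (
	"fmt"

	"k8s.io/kubernetes/pkg/cloudprovider"
)

// createCloudProvider constructs the cloud provider from exactly the inputs
// it needs, keeping the rest of the controller-manager wiring out of reach.
// Hypothetical helper name and parameters, for illustration only.
func createCloudProvider(name, configFile string) (cloudprovider.Interface, error) {
	if name == "" {
		return nil, nil // running without a cloud provider
	}
	cloud, err := cloudprovider.InitCloudProvider(name, configFile)
	if err != nil {
		return nil, fmt.Errorf("cloud provider %q could not be initialized: %v", name, err)
	}
	return cloud, nil
}
```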
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Do not recycle volumes that are used by pods
**What this PR does / why we need it**:
Recycler should wait until all pods that use a volume are finished.
Consider this scenario:
1. User creates a PVC that's bound to an NFS PV.
2. User creates a pod that uses the PVC.
3. User deletes the PVC.
Now the PV gets `Released` (the PVC no longer exists) and recycled; however, the PV is still mounted in a running pod. PVC protection won't help us, because it puts finalizers on the PVC, which is under the user's control and can be removed by the user.
This PR checks that there is no pod that uses a PV before it recycles it.
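A simplified sketch of that check (names are illustrative, and the real controller consults its informer caches rather than listing pods directly):

```go
package persistentvolume

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isVolumeInUse reports whether any pod in the claim's namespace still mounts
// the PV's bound claim, i.e. whether recycling must wait.
func isVolumeInUse(ctx context.Context, cs kubernetes.Interface, pv *v1.PersistentVolume) (bool, error) {
	claimRef := pv.Spec.ClaimRef
	if claimRef == nil {
		return false, nil
	}
	pods, err := cs.CoreV1().Pods(claimRef.Namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		for _, vol := range pod.Spec.Volumes {
			if vol.PersistentVolumeClaim != nil && vol.PersistentVolumeClaim.ClaimName == claimRef.Name {
				return true, nil
			}
		}
	}
	return false, nil
}
```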
**Release note**:
```release-note
NONE
```
/sig storage