Automatic merge from submit-queue (batch tested with PRs 55792, 58342). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Promote Statefulset controller and its e2e tests to use apps/v1
**What this PR does / why we need it**:
Promotes the StatefulSet controller to use the latest apps group, [apps/v1](https://github.com/kubernetes/kubernetes/pull/53679).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/55714
**Special notes for your reviewer**:
* Lister expansion for v1 in `k8s.io/client-go/listers/apps/v1` (this was recently done for v1beta2)
* `v1beta2` and `v1` have `ObservedGeneration` as `int64`, whereas `v1beta1` and the rest of the code (including conversion) expect `ObservedGeneration` to be `*int64`; a conversion sketch follows this list.
```go
type StatefulSetStatus struct {
	// observedGeneration is the most recent generation observed for this StatefulSet. It corresponds to the
	// StatefulSet's generation, which is updated on mutation by the API Server.
	// +optional
	ObservedGeneration int64 `json:"observedGeneration,omitempty" protobuf:"varint,1,opt,name=observedGeneration"`

	// ... remaining status fields elided ...
}
```
* For kubectl's `rollback` and `history` commands, a couple of functions have been duplicated so we can use the `v1` version instead of `v1beta1` for StatefulSets, while the older functions are still used by other controllers. We should be able to remove these duplicates once all the controllers are moved. If this aligns with the plan, I could move the other controllers too.
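To make the `int64` vs `*int64` mismatch concrete, here is a minimal sketch of bridging the two shapes; the helper names are hypothetical and are not the ones used by the generated conversion code.

```go
package main

import "fmt"

// observedGenerationPtr wraps the plain int64 used by apps/v1 and v1beta2
// into the *int64 shape that v1beta1 and the shared controller code expect.
func observedGenerationPtr(gen int64) *int64 {
	return &gen
}

// observedGenerationValue unwraps the v1beta1-style *int64, treating a nil
// pointer as "no generation observed yet".
func observedGenerationValue(gen *int64) int64 {
	if gen == nil {
		return 0
	}
	return *gen
}

func main() {
	v1Gen := int64(3)                                // apps/v1: ObservedGeneration int64
	v1beta1Gen := observedGenerationPtr(v1Gen)       // v1beta1 style: *int64
	fmt.Println(observedGenerationValue(v1beta1Gen)) // 3
}
```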
cc: @kow3ns
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 58626, 58791). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
serviceaccount: check token is issued by correct iss before verifying
Right now, if a JWT for an unknown issuer, for any subject, hits the
serviceaccount token authenticator, we return an error as if the token
was meant for us but we couldn't find a key to verify it. We should
instead return `nil, false, nil`.
This change helps us support multiple service account token
authenticators with different issuers.
https://github.com/kubernetes/kubernetes/issues/58790
```release-note
NONE
```
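As a rough sketch of the "check the issuer before verifying" idea (the real authenticator is structured differently, and the issuer string below is an assumption): peek at the unverified `iss` claim and, if it is not ours, return `nil, false, nil` so other authenticators in the chain can handle the token.

```go
package serviceaccount

import (
	"encoding/base64"
	"encoding/json"
	"strings"
)

// issuer is the issuer string this authenticator owns (assumed value).
const issuer = "kubernetes/serviceaccount"

// hasCorrectIssuer reports whether the *unverified* token claims our issuer.
// The caller only proceeds to signature verification when this returns true;
// otherwise it returns (nil, false, nil) and lets other authenticators try.
func hasCorrectIssuer(tokenData string) bool {
	parts := strings.Split(tokenData, ".")
	if len(parts) != 3 {
		return false
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false
	}
	claims := struct {
		Issuer string `json:"iss"`
	}{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false
	}
	return claims.Issuer == issuer
}
```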
Automatic merge from submit-queue (batch tested with PRs 58411, 58407, 52863). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
low hanging fruit for using cobra commands
This makes the simple updates to use cobra commands instead of individual ones (see the sketch below).
/assign liggitt
/assign ncdc
/assign sttts
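For reference, a minimal sketch of the cobra pattern these commands are being moved to; the command name and wiring here are illustrative, not the actual code.

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use:   "example-apiserver",
		Short: "Launches an example API server",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("starting server")
			return nil
		},
	}
	// Flags are registered on the command instead of a bare flag.FlagSet.
	cmd.Flags().String("secure-port", "6443", "Port to serve on")

	if err := cmd.Execute(); err != nil {
		panic(err)
	}
}
```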
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use shared informers for BootstrapSigner controller
**What this PR does / why we need it**:
Fixes a TODO: switch to shared informers (a sketch of the pattern follows this entry).
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
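A minimal sketch of the shared-informer pattern the controller moves to, assuming the standard client-go machinery; the actual BootstrapSigner wiring selects namespaced secret/configmap informers and differs in detail.

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One factory shared by all controllers; each controller pulls the
	// informers it needs instead of opening its own watches.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	secrets := factory.Core().V1().Secrets()
	secrets.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { /* enqueue secret for signing */ },
		UpdateFunc: func(oldObj, newObj interface{}) { /* enqueue secret for signing */ },
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	cache.WaitForCacheSync(stopCh, secrets.Informer().HasSynced)
	<-stopCh
}
```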
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. Splitting the node
controller into its two major pieces: life-cycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the life-cycle piece's dependency on the cloud will be addressed in a follow-up PR.
Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
Automatic merge from submit-queue (batch tested with PRs 57651, 56411, 56779, 57523, 57624). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Use authentication client with explicit version
**What this PR does / why we need it**:
The authentication client without an explicit version has been deprecated; change usages to the one with an explicit version (see the sketch after this entry).
**Which issue(s) this PR fixes**:
Partially fixes #55993
**Special notes for your reviewer**:
/cc @caesarxuchao @sttts
**Release note**:
```release-note
NONE
```
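A sketch of the kind of change involved, assuming a typed clientset and the client-go signatures of that era (no context argument on `Create`):

```go
package tokenreview

import (
	authenticationv1 "k8s.io/api/authentication/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// reviewToken submits a TokenReview using the explicitly versioned
// AuthenticationV1() group client rather than the deprecated, version-less
// client.Authentication() accessor.
func reviewToken(cfg *rest.Config, token string) (*authenticationv1.TokenReview, error) {
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	tr := &authenticationv1.TokenReview{
		Spec: authenticationv1.TokenReviewSpec{Token: token},
	}
	return client.AuthenticationV1().TokenReviews().Create(tr)
}
```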
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix garbage collector when leader-elect=false
**What this PR does / why we need it**:
In a 1.8.x master with --leader-elect=false, the garbage collector controller
does not work.
When deleting a deployment with v1meta.DeletePropagationForeground, the deployment
had its deletionTimestamp set and a foreground Deletion finalizer was added,
but the deployment, rs and pod were not deleted.
This is an issue with how the garbage collector graph_builder behaves when the
stopCh=nil. This PR creates a dummy stop channel for the garbage collector controller (and other
controllers started by the controller-manager) so that they can work more like they do
when the controller-manager is configured with --leader-elect=true. A sketch of the dummy stop channel follows this entry.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #57044
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix garbage collection when the controller-manager uses --leader-elect=false
```
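An illustrative sketch of the fix described above, not the actual controller-manager code: when leader election is disabled, hand the controllers a real, never-closed channel instead of a nil one.

```go
package main

import "fmt"

// run stands in for the function that starts all controllers.
func run(stopCh <-chan struct{}) {
	fmt.Println("controllers running; stopCh is non-nil:", stopCh != nil)
}

func main() {
	leaderElect := false // i.e. --leader-elect=false
	if !leaderElect {
		// Dummy stop channel: never closed, but non-nil, so the garbage
		// collector's graph builder (and the other controllers) see the
		// same semantics as under leader election.
		stopCh := make(chan struct{})
		run(stopCh)
		return
	}
	// With leader election, run(stopCh) is invoked from the elected
	// leader's OnStartedLeading callback instead.
}
```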
Separate loop and plugin control in the kube-controller-manager.
Adding an "--external-plugin" flag to specify a plugin to load when
cloud-provider is set to "external". The flag currently has no effect
when the cloud-provider is not set to external. The expectation is
that the cloud-provider and external-plugin flags will go away once
all cloud providers are on stage 2 cloud-controller-manager solutions.
A sketch of the flag wiring appears after this entry.
Managing the control loops more directly based on start-up flags.
Addressing an issue brought up by @wlan0.
Switched to using the main node controller in CCM.
Changes to enable full NodeController to start in CCM.
Fix related tests.
Unifying some common code between KCM and CCM.
Fix related tests and comments.
Folded in feedback from @jhorwit2 and @wlan0
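A hypothetical sketch of how such a flag could be wired with pflag; the real option struct, names, and defaulting in cmd/kube-controller-manager differ.

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

// options is a simplified stand-in for the controller-manager option struct.
type options struct {
	CloudProvider  string
	ExternalPlugin string
}

func main() {
	o := &options{}
	fs := pflag.NewFlagSet("kube-controller-manager", pflag.ExitOnError)
	fs.StringVar(&o.CloudProvider, "cloud-provider", "", "The provider for cloud services.")
	fs.StringVar(&o.ExternalPlugin, "external-plugin", "", "Plugin to load when --cloud-provider=external; ignored otherwise.")
	fs.Parse([]string{"--cloud-provider=external", "--external-plugin=my-cloud"})

	if o.CloudProvider == "external" {
		fmt.Println("loading external plugin:", o.ExternalPlugin)
	}
}
```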
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Fix failure to load volume plugins for #52048
Currently we have two plugin managers.
However, one of them limits the cloud plugins it loads.
This means that if the cloud provider is set to external, the plugins will
not be loaded in *that* plugin manager, although they will be loaded in
the other instance of the plugin manager. So it does not actually save
us anything, and it hampers the efforts to actually get stage 1
separation working.
**What this PR does / why we need it**: It allows the plugins be found for the cloud providers working on stage 1 separation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52048
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Changed the cloud provider zone lookup to return all zones with nodes currently
running (renamed to GetAllCurrentZones). Added an E2E test to confirm this
behavior.
Added a node informer to the cloud-provider controller to keep track of zones
with k8s nodes in them.
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **deprecated** group clients are still used; replace them with versioned group clients.
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
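A sketch of the tolerance logic, assuming client-go's discovery helpers; the garbage collector's actual resync loop is more involved.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	resources, err := client.Discovery().ServerPreferredResources()
	if err != nil {
		// An ErrGroupDiscoveryFailed means only some groups failed:
		// log it, keep the groups that were discovered, and let the
		// periodic resync retry the rest later.
		if discovery.IsGroupDiscoveryFailedError(err) {
			fmt.Printf("partial discovery failure, continuing: %v\n", err)
		} else {
			panic(err)
		}
	}
	fmt.Println("usable resource lists:", len(resources))
}
```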
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
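A sketch of the resulting pattern: surface the metrics-registration error so the controller constructor fails instead of silently continuing. The package paths follow the in-tree layout of the time and are illustrative.

```go
package bootstrap

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/pkg/util/metrics"
)

// registerRateLimiterMetric registers rate-limiter metrics for the client and
// returns the error to the caller rather than discarding it, so constructors
// like NewBootstrapSigner and NewTokenCleaner can refuse to start.
func registerRateLimiterMetric(cl kubernetes.Interface, owner string) error {
	if cl != nil && cl.CoreV1().RESTClient().GetRateLimiter() != nil {
		if err := metrics.RegisterMetricAndTrackRateLimiterUsage(owner, cl.CoreV1().RESTClient().GetRateLimiter()); err != nil {
			return err
		}
	}
	return nil
}
```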
ClusterCIDR and ServiceCIDR are settings that are only used if
AllocateNodeCIDRs is set; the route controller additionally requires
ConfigureCloudRoutes to be true. Since AllocateNodeCIDRs defaults to false,
guard the parsing of these settings so the logs are not spammed unnecessarily,
and amend the kube-controller-manager documentation for the two settings to
point out that AllocateNodeCIDRs must be true as well. A sketch of the guard
follows.
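A minimal sketch of the guard, with simplified option names; the real wiring lives in cmd/kube-controller-manager/app.

```go
package main

import (
	"fmt"
	"net"
)

// parseCIDRs only parses (and potentially warns about) ClusterCIDR and
// ServiceCIDR when AllocateNodeCIDRs is set, so default configurations do not
// spam the logs.
func parseCIDRs(allocateNodeCIDRs bool, clusterCIDR, serviceCIDR string) (*net.IPNet, *net.IPNet) {
	if !allocateNodeCIDRs {
		return nil, nil
	}
	var clusterNet, serviceNet *net.IPNet
	if clusterCIDR != "" {
		if _, n, err := net.ParseCIDR(clusterCIDR); err != nil {
			fmt.Printf("unsuccessful parsing of cluster CIDR %q: %v\n", clusterCIDR, err)
		} else {
			clusterNet = n
		}
	}
	if serviceCIDR != "" {
		if _, n, err := net.ParseCIDR(serviceCIDR); err != nil {
			fmt.Printf("unsuccessful parsing of service CIDR %q: %v\n", serviceCIDR, err)
		} else {
			serviceNet = n
		}
	}
	return clusterNet, serviceNet
}

func main() {
	c, s := parseCIDRs(true, "10.244.0.0/16", "10.96.0.0/12")
	fmt.Println(c, s)
}
```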
1) Modify rbdPlugin to implement the volume.AttachableVolumePlugin interface.
2) Add rbdAttacher/rbdDetacher structs to implement the volume.Attacher/Detacher interfaces.
3) Add mount.SafeFormatAndMount/mount.Exec fields to rbdPlugin, and set them up in rbdPlugin.Init for later use. Attacher/Mounter/Unmounter/Detacher reference rbdPlugin to use the mounter and exec, which simplifies the code (see the sketch after this list).
4) Add a testcase struct to abstract the RBD plugin test cases, etc.
5) Add a newRBD constructor to unify rbd struct initialization.
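As an illustration of item 3, a sketch using the pkg/util/mount helpers of that era; the constructor names are assumptions, and the real rbdPlugin keeps these on the plugin struct set up in Init rather than building them per call.

```go
package rbd

import "k8s.io/kubernetes/pkg/util/mount"

// formatAndMountDevice shows the kind of call an attacher's MountDevice step
// makes once the plugin carries a SafeFormatAndMount and an Exec.
func formatAndMountDevice(devicePath, deviceMountPath, fstype string) error {
	mounter := &mount.SafeFormatAndMount{
		Interface: mount.New(""),     // real mounter
		Exec:      mount.NewOsExec(), // shells out for mkfs and friends
	}
	return mounter.FormatAndMount(devicePath, deviceMountPath, fstype, nil)
}
```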
This updates the HPA controller to use the polymorphic scale client from
client-go. This should enable HPAs to work with arbitrary scalable
resources, instead of just those in the extensions API group (meaning we
can deprecate the copy of ReplicationController in extensions/v1beta1).
It also means that the HPA controller now pays attention to the
APIVersion field in `scaleTargetRef` (more specifically, the group part
of it).
Note that currently, discovery information on which resources are
available where is only fetched once (the first time that it's
requested). In the future, we may want a refreshing discovery REST
mapper.
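A small sketch of what "paying attention to the group part of scaleTargetRef.APIVersion" involves, using the apimachinery schema helpers; the controller then resolves the GroupKind to a GroupResource via a RESTMapper and hands it to the polymorphic scale client.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// targetGroupKind extracts the group from scaleTargetRef.apiVersion (e.g.
// "apps/v1" or "extensions/v1beta1") and combines it with the kind.
func targetGroupKind(apiVersion, kind string) (schema.GroupKind, error) {
	gv, err := schema.ParseGroupVersion(apiVersion)
	if err != nil {
		return schema.GroupKind{}, err
	}
	return schema.GroupKind{Group: gv.Group, Kind: kind}, nil
}

func main() {
	gk, err := targetGroupKind("apps/v1", "Deployment")
	if err != nil {
		panic(err)
	}
	// A RESTMapper maps this GroupKind to the GroupResource that the
	// polymorphic scale client needs.
	fmt.Println(gk)
}
```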
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Make HPA tolerance a flag
**What this PR does / why we need it**:
Make HPA tolerance configurable as a flag. This change allows us to use
different tolerance values in production/testing.
**Which issue this PR fixes**:
Fixes #18155
**Release note:**
```release-note
Control HPA tolerance through the `horizontal-pod-autoscaler-tolerance` flag.
```
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
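For orientation, a sketch of where such a tolerance applies; this is not the controller's exact code, but the default flag value of 0.1 is real.

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas skips scaling when the current-to-desired metric ratio is
// within the configured tolerance.
func desiredReplicas(currentReplicas int32, usageRatio, tolerance float64) int32 {
	if math.Abs(1.0-usageRatio) <= tolerance {
		return currentReplicas // within tolerance: do nothing
	}
	return int32(math.Ceil(usageRatio * float64(currentReplicas)))
}

func main() {
	// With the default tolerance of 0.1, a 5% deviation does not scale.
	fmt.Println(desiredReplicas(4, 1.05, 0.1)) // 4
	// A 50% deviation does.
	fmt.Println(desiredReplicas(4, 1.5, 0.1)) // 6
}
```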
Using assertions for unit tests:
1. cmd/kube-controller-manager/app/controller_manager_test.go
2. pkg/controller/bootstrap/jws_test.go
3. pkg/controller/cloud/node_controller_test.go
4. pkg/controller/controller_utils_test.go
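An illustrative test in the style being adopted, using the vendored testify packages; the test name and data are made up.

```go
package controller

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestExampleAssertions(t *testing.T) {
	got := []string{"pod-a", "pod-b"}

	// require aborts the test on failure; assert records it and continues.
	require.Len(t, got, 2)
	assert.Equal(t, "pod-a", got[0], "expected pod-a to sort first")
	assert.NotContains(t, got, "pod-c")
}
```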
Automatic merge from submit-queue (batch tested with PRs 51034, 53239). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
fix conditional for warning while starting KCM without secret file
@liggitt @spiffxp @lavalamp
Fixes#53291
A small bug was introduced in https://github.com/kubernetes/kubernetes/pull/50288: the warning message is printed when the file is specified, and it is not printed if it is left blank, exactly the opposite of the intended behavior.
This fixes that (see the sketch below).
```release-note
NONE
```
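A sketch of the corrected check (the flag names are real, the wiring is simplified): warn when the key file is empty while --use-service-account-credentials is set, not the other way around.

```go
package main

import "fmt"

// checkServiceAccountKey warns when per-controller credentials were requested
// but no signing key file was provided.
func checkServiceAccountKey(useServiceAccountCredentials bool, serviceAccountKeyFile string) {
	if useServiceAccountCredentials && len(serviceAccountKeyFile) == 0 {
		fmt.Println("--use-service-account-credentials was specified without providing a --service-account-private-key-file")
	}
}

func main() {
	checkServiceAccountKey(true, "")            // warns
	checkServiceAccountKey(true, "/etc/sa.key") // silent
}
```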
Automatic merge from submit-queue (batch tested with PRs 52751, 52898, 52633, 52611, 52609). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
Add default value for RouteReconciliationPeriod in cloud controller manager
**What this PR does / why we need it**:
Add a default sync period value for RouteReconciliationPeriod in the cloud controller manager config. For now the default value is 0, which means zero cooldown time.
The value is taken from kube-controller-manager:
b2b079b95a/cmd/kube-controller-manager/app/options/options.go (L73)
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
Automatic merge from submit-queue (batch tested with PRs 52445, 52380, 52516, 52531, 52538). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
remove repeated import 'k8s.io/client-go/kubernetes' in controllermana…
**What this PR does / why we need it**:
There are duplicate imports of "k8s.io/client-go/kubernetes"; we just need 'clientset'.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 50378, 51463, 50006, 51962, 51673). If you want to cherry-pick this change to another branch, please follow the instructions <a href="https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md">here</a>.
adding kube-controller-manager starting option tests
**What this PR does / why we need it**:
The unit test for kube-controller-manager is missing in `cmd/kube-controller-manager/app/options`. I have added a unit test for checking kube-controller-manager starting options, which is similar to the unit test in `cmd/kube-apiserver/app/options/options_test.go`. This PR https://github.com/kubernetes/kubernetes/pull/49092 can be seen as a reference.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
`NONE`
Automatic merge from submit-queue (batch tested with PRs 51574, 51534, 49257, 44680, 48836)
Task 1: Taint nodes by condition.
**What this PR does / why we need it**:
Taint nodes by condition for MemoryPressure, OutOfDisk, and so on (a sketch of the mapping follows this entry).
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #42001
**Release note**:
```release-note
Taint nodes by condition as follows:
* 'node.kubernetes.io/network-unavailable=:NoSchedule' if NetworkUnavailable is true
* 'node.kubernetes.io/disk-pressure=:NoSchedule' if DiskPressure is true
* 'node.kubernetes.io/memory-pressure=:NoSchedule' if MemoryPressure is true
* 'node.kubernetes.io/out-of-disk=:NoSchedule' if OutOfDisk is true
```
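A sketch of the condition-to-taint mapping listed in the release note, using the core v1 types of that era (NodeOutOfDisk has since been removed upstream); the real controller also removes taints when a condition clears.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// conditionTaints maps a node condition type to the taint applied when the
// condition is true.
var conditionTaints = map[v1.NodeConditionType]v1.Taint{
	v1.NodeNetworkUnavailable: {Key: "node.kubernetes.io/network-unavailable", Effect: v1.TaintEffectNoSchedule},
	v1.NodeDiskPressure:       {Key: "node.kubernetes.io/disk-pressure", Effect: v1.TaintEffectNoSchedule},
	v1.NodeMemoryPressure:     {Key: "node.kubernetes.io/memory-pressure", Effect: v1.TaintEffectNoSchedule},
	v1.NodeOutOfDisk:          {Key: "node.kubernetes.io/out-of-disk", Effect: v1.TaintEffectNoSchedule},
}

// taintsFor returns the taints implied by the node's currently-true conditions.
func taintsFor(node *v1.Node) []v1.Taint {
	var taints []v1.Taint
	for _, cond := range node.Status.Conditions {
		if cond.Status != v1.ConditionTrue {
			continue
		}
		if taint, ok := conditionTaints[cond.Type]; ok {
			taints = append(taints, taint)
		}
	}
	return taints
}

func main() {
	node := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
		{Type: v1.NodeMemoryPressure, Status: v1.ConditionTrue},
	}}}
	fmt.Println(taintsFor(node))
}
```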
Automatic merge from submit-queue (batch tested with PRs 51707, 51662, 51723, 50163, 51633)
update GC controller to wait until controllers have been initialized …
fixes #51013
Alternative to https://github.com/kubernetes/kubernetes/pull/51492 which keeps those few controllers (only one) from starting the informers early.
Automatic merge from submit-queue (batch tested with PRs 50775, 51397, 51168, 51465, 51536)
Enable batch/v1beta1.CronJobs by default
This PR moves to CronJobs beta entirely, enabling `batch/v1beta1` by default.
Related issue: #41039
@erictune @janetkuo ptal
```release-note
Promote CronJobs to batch/v1beta1.
```
Automatic merge from submit-queue (batch tested with PRs 49642, 50335, 50390, 49283, 46582)
Improve GC discovery sync performance
Improve GC discovery sync performance by only syncing when discovered
resource diffs are detected. Before, the GC worker pool was shut down
and monitors resynced unconditionally every sync period, leading to
significant processing delays causing test flakes where otherwise
reasonable GC timeouts were being exceeded.
Related to https://github.com/kubernetes/kubernetes/issues/49966.
/cc @kubernetes/sig-api-machinery-bugs
```release-note
NONE
```
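A sketch of the "only resync when the discovered set changes" idea, assuming the resources are tracked as a set keyed by GroupVersionResource; the real GC code compares the deletable resources it derives from discovery.

```go
package main

import (
	"fmt"
	"reflect"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

type resourceSet map[schema.GroupVersionResource]struct{}

// needsResync reports whether the GC monitors must be rebuilt because the
// discovered resources changed since the last sync.
func needsResync(prev, cur resourceSet) bool {
	return !reflect.DeepEqual(prev, cur)
}

func main() {
	prev := resourceSet{{Group: "apps", Version: "v1", Resource: "deployments"}: {}}
	cur := resourceSet{{Group: "apps", Version: "v1", Resource: "deployments"}: {}}
	fmt.Println(needsResync(prev, cur)) // false: skip the expensive resync

	cur[schema.GroupVersionResource{Group: "batch", Version: "v1", Resource: "jobs"}] = struct{}{}
	fmt.Println(needsResync(prev, cur)) // true: rebuild monitors for the new resource
}
```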
Automatic merge from submit-queue (batch tested with PRs 50016, 49583, 49930, 46254, 50337)
Break up node controller into packages
This change involves NO actual code changes other than moving constituent
parts into packages.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50173, 50324, 50288, 50263, 50333)
Honor --use-service-account-credentials and warn when missing private key
Fixes #50275 by logging a warning and failing to start rather than continuing to run while ignoring the user's specified config
- Move public key functions to client-go/util/cert
- Move pki file helper functions to client-go/util/cert
- Standardize on certutil package alias
- Update dependencies to client-go/util/cert