Separate loop and plugin control in the kube-controller-manager.
Adding an "--external-plugin" flag to specify a plugin to load when
cloud-provider is set to "external". The flag currently has no effect
when cloud-provider is not set to "external". The expectation is that
the cloud-provider and external-plugin flags will go away once all
cloud providers are on stage 2 cloud-controller-manager solutions.
Managing the control loops more directly based on startup flags.
Addressing an issue brought up by @wlan0.
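A minimal sketch of the gating described above, using illustrative names (`cloudProvider`, `externalPlugins`) rather than the actual kube-controller-manager fields:

```go
// externalPluginNames returns the plugins to load. The --external-plugin
// flag is honored only when --cloud-provider=external; otherwise it has
// no effect, matching the behavior described above.
func externalPluginNames(cloudProvider string, externalPlugins []string) []string {
	if cloudProvider != "external" {
		return nil
	}
	return externalPlugins
}
```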
Switched to using the main node controller in CCM.
Changes to enable full NodeController to start in CCM.
Fix related tests.
Unifying some common code between KCM and CCM.
Fix related tests and comments.
Folded in feedback from @jhorwit2 and @wlan0
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Fix failure to load volume plugins for #52048
Currently we have two plugin managers. However, one of them limits the
cloud plugins it loads. This means that if the cloud provider is set to
"external", the plugins will not be loaded in *that* plugin manager,
although they will be loaded in the other instance of the plugin
manager. So it does not actually save us anything, and it hampers the
efforts to get stage 1 separation working.
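A hedged sketch of the resulting behavior, assuming the `ProbeVolumePlugins` pattern used by the in-tree volume plugins (the plugin list here is illustrative, not the full set):

```go
import (
	"k8s.io/kubernetes/pkg/volume"
	"k8s.io/kubernetes/pkg/volume/aws_ebs"
	"k8s.io/kubernetes/pkg/volume/gce_pd"
)

// probeControllerVolumePlugins probes volume plugins unconditionally, so
// both plugin managers see the same set; cloud-backed plugins are no
// longer filtered out when --cloud-provider=external.
func probeControllerVolumePlugins() []volume.VolumePlugin {
	allPlugins := []volume.VolumePlugin{}
	allPlugins = append(allPlugins, aws_ebs.ProbeVolumePlugins()...)
	allPlugins = append(allPlugins, gce_pd.ProbeVolumePlugins()...)
	return allPlugins
}
```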
**What this PR does / why we need it**: It allows the plugins to be found for the cloud providers working on stage 1 separation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #52048
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Added node informer to cloud-provider controller to keep track of zones
with k8s nodes in them. Updated GetAllZones to only return zones with
nodes that are currently running (renamed to GetAllCurrentZones). Added
an E2E test to confirm this behavior.
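A hedged sketch of the informer wiring; the zone label key and the set type are assumptions, not the exact cloud-provider controller code:

```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	coreinformers "k8s.io/client-go/informers/core/v1"
	"k8s.io/client-go/tools/cache"
)

// trackZones records, from node add events, which zones currently have
// k8s nodes in them.
func trackZones(nodeInformer coreinformers.NodeInformer, zones sets.String) {
	nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if node, ok := obj.(*v1.Node); ok {
				zones.Insert(node.Labels["failure-domain.beta.kubernetes.io/zone"])
			}
		},
		// A DeleteFunc would need to recompute membership, since a zone
		// may become empty; elided in this sketch.
	})
}
```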
Automatic merge from submit-queue (batch tested with PRs 55331, 55272, 55228, 49763, 55242). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
use versioned group clients from client-go
**What this PR does / why we need it**:
Some **deprecated** group clients are still used; replace them with versioned group clients.
**Which issue this PR fixes**: fixes #49760
**Special notes for your reviewer**:
/assign @caesarxuchao
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.
Fixes #55022.
```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```
/cc @caesarxuchao
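A sketch of the tolerance logic, relying on client-go discovery returning partial results alongside an aggregated `ErrGroupDiscoveryFailed`:

```go
// On partial discovery failure, log the failed groups and continue with
// whatever was discovered; the periodic resync retries later.
resources, err := discoveryClient.ServerPreferredResources()
if err != nil {
	if discovery.IsGroupDiscoveryFailedError(err) {
		glog.Warningf("failed to discover some groups: %v", err.(*discovery.ErrGroupDiscoveryFailed).Groups)
	} else {
		glog.Warningf("failed to discover preferred resources: %v", err)
	}
}
// resources may be partial here; proceed with what we have.
```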
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers
**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
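A minimal sketch of the check described above; the surrounding startup wiring is assumed:

```go
// Fail controller startup instead of ignoring a metrics registration error.
if client != nil && client.CoreV1().RESTClient().GetRateLimiter() != nil {
	if err := metrics.RegisterMetricAndTrackRateLimiterUsage("bootstrap_signer", client.CoreV1().RESTClient().GetRateLimiter()); err != nil {
		return err
	}
}
```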
ClusterCIDR and ServiceCIDR are settings that are only used if
AllocateNodeCIDRs is set. The route controller additionally requires
ConfigureCloudRoutes to be true. Since AllocateNodeCIDRs is false by
default, guard the parsing of these settings in order to not
unnecessarily spam the logs. Amend the kube-controller-manager
documentation for the two settings to point out the requirement that
AllocateNodeCIDRs be true as well.
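A minimal sketch of the guard, with illustrative variable names rather than the exact kube-controller-manager code:

```go
// Only parse ClusterCIDR/ServiceCIDR when AllocateNodeCIDRs is set, so
// that unused, empty values do not produce spurious parse warnings.
var clusterCIDR, serviceCIDR *net.IPNet
if s.AllocateNodeCIDRs {
	var err error
	if _, clusterCIDR, err = net.ParseCIDR(s.ClusterCIDR); err != nil {
		glog.Warningf("unsuccessful parsing of cluster CIDR %q: %v", s.ClusterCIDR, err)
	}
	if _, serviceCIDR, err = net.ParseCIDR(s.ServiceCIDR); err != nil {
		glog.Warningf("unsuccessful parsing of service CIDR %q: %v", s.ServiceCIDR, err)
	}
}
```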
1) Modify rbdPlugin to implement the volume.AttachableVolumePlugin
interface (see the sketch after this list).
2) Add rbdAttacher/rbdDetacher structs to implement
volume.Attacher/Detacher interfaces.
3) Add mount.SafeFormatAndMount/mount.Exec fields to rbdPlugin, and set
them up in rbdPlugin.Init for later use.
Attacher/Mounter/Unmounter/Detacher reference rbdPlugin to use its
mounter and exec. This simplifies the code.
4) Add a testcase struct to abstract RBD plugin test cases, etc.
5) Add newRBD constructor to unify rbd struct initialization.
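A hedged sketch of items 1 and 2; method bodies and the remaining interface methods are elided, so only the shape of the wiring is shown:

```go
// rbdAttacher implements volume.Attacher (Attach, WaitForAttach,
// MountDevice, ... elided here), reusing the SafeFormatAndMount and Exec
// that rbdPlugin.Init sets up.
type rbdAttacher struct {
	plugin  *rbdPlugin
	mounter *mount.SafeFormatAndMount
}

// NewAttacher is part of satisfying volume.AttachableVolumePlugin.
func (p *rbdPlugin) NewAttacher() (volume.Attacher, error) {
	return &rbdAttacher{plugin: p, mounter: p.mounter}, nil
}
```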
This updates the HPA controller to use the polymorphic scale client from
client-go. This should enable HPAs to work with arbitrary scalable
resources, instead of just those in the extensions API group (meaning we
can deprecate the copy of ReplicationController in extensions/v1beta1).
It also means that the HPA controller now pays attention to the
APIVersion field in `scaleTargetRef` (more specifically, the group part
of it).
Note that currently, discovery information on which resources are
available where is only fetched once (the first time that it's
requested). In the future, we may want a refreshing discovery REST
mapper.
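A hedged sketch of resolving `scaleTargetRef` through the polymorphic scale client; `resourceFor` is a hypothetical kind-to-resource lookup (in practice backed by the discovery REST mapper), and the `Scales(...).Get` signature has varied across client-go versions:

```go
// Honor the group in scaleTargetRef.APIVersion instead of assuming the
// extensions API group.
gv, err := schema.ParseGroupVersion(hpa.Spec.ScaleTargetRef.APIVersion)
if err != nil {
	return err
}
gr := schema.GroupResource{Group: gv.Group, Resource: resourceFor(hpa.Spec.ScaleTargetRef.Kind)}
scale, err := scaleClient.Scales(hpa.Namespace).Get(gr, hpa.Spec.ScaleTargetRef.Name)
if err != nil {
	return err
}
// scale.Spec.Replicas and scale.Status.Selector then drive the HPA logic.
```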
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Make HPA tolerance a flag
**What this PR does / why we need it**:
Make HPA tolerance configurable as a flag. This change allows us to use
different tolerance values in production/testing.
**Which issue this PR fixes**:
Fixes #18155
**Release note:**
```release-note
Control HPA tolerance through the `horizontal-pod-autoscaler-tolerance` flag.
```
Signed-off-by: mattjmcnaughton <mattjmcnaughton@gmail.com>
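A minimal sketch of how the tolerance from this flag factors into the scaling decision (the real replica calculator is more involved):

```go
// withinTolerance reports whether the usage ratio is close enough to 1.0
// that no scaling should happen.
func withinTolerance(usageRatio, tolerance float64) bool {
	return math.Abs(1.0-usageRatio) <= tolerance
}
```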
Using assertions for unit tests (see the example after this list) in:
1. cmd/kube-controller-manager/app/controller_manager_test.go
2. pkg/controller/bootstrap/jws_test.go
3. pkg/controller/cloud/node_controller_test.go
4. pkg/controller/controller_utils_test.go
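An illustrative example of the assertion style (assuming github.com/stretchr/testify), not a verbatim excerpt from the listed files:

```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestExample(t *testing.T) {
	got := buildOptions() // buildOptions is a hypothetical helper
	assert.NotNil(t, got, "options should be constructed")
	assert.Equal(t, "expected", got.Name, "unexpected option value")
}
```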
Automatic merge from submit-queue (batch tested with PRs 51034, 53239). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
fix conditional for warning while starting KCM without secret file
@liggitt @spiffxp @lavalamp
Fixes#53291
A small bug was introduced in PR https://github.com/kubernetes/kubernetes/pull/50288: the warning message is printed when the file is specified and not printed when it is left blank, exactly the opposite of the intended behavior.
This fixes that.
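A sketch of the corrected conditional; the field and message are close to, but not guaranteed to match, the actual code:

```go
// Warn only when no service-account private key file was provided; the
// condition was previously inverted.
if len(s.ServiceAccountKeyFile) == 0 {
	glog.Warning("--use-service-account-credentials was specified without providing a --service-account-private-key-file")
}
```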
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 52751, 52898, 52633, 52611, 52609). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Add default value for RouteReconciliationPeriod in cloud controller manager
**What this PR does / why we need it**:
Add a default sync period value for RouteReconciliationPeriod. Currently the default is 0, which means no cooldown between route reconciliations.
The value is taken from kube-controller-manager:
b2b079b95a/cmd/kube-controller-manager/app/options/options.go (L73)
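A sketch of the defaulting change; the struct name and the 10s value (mirroring the kube-controller-manager default at the linked revision) are assumptions:

```go
s := &CloudControllerManagerServer{
	// Stop defaulting to 0 (no cooldown between route reconciliations).
	RouteReconciliationPeriod: metav1.Duration{Duration: 10 * time.Second},
}
```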
**Which issue this PR fixes**
**Special notes for your reviewer**:
**Release note**:
Automatic merge from submit-queue (batch tested with PRs 52445, 52380, 52516, 52531, 52538). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
remove repeated import 'k8s.io/client-go/kubernetes' in controllermana…
**What this PR does / why we need it**:
There are duplicate imports of "k8s.io/client-go/kubernetes"; we just need the 'clientset' one.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
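The shape of the cleanup, as a before/after sketch:

```go
// Before: the same package imported twice, aliased and unaliased.
import (
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes"
)

// After: keep only the 'clientset' alias that the code actually uses.
import (
	clientset "k8s.io/client-go/kubernetes"
)
```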
Automatic merge from submit-queue (batch tested with PRs 50378, 51463, 50006, 51962, 51673). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
adding kube-controller-manager starting option tests
**What this PR does / why we need it**:
The unit test for kube-controller-manager is missing in `cmd/kube-controller-manager/app/options`. I have added a unit test for checking kube-controller-manager starting options, which is similar to the unit test in `cmd/kube-apiserver/app/options/options_test.go`. This PR https://github.com/kubernetes/kubernetes/pull/49092 can be seen as a reference.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
`NONE`
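A hedged sketch of the kind of test added; `NewCMServer`, `AddFlags`, and the flag/field names follow the options package of that era but should be treated as assumptions:

```go
func TestAddFlags(t *testing.T) {
	fs := pflag.NewFlagSet("addflags-test", pflag.ContinueOnError)
	s := options.NewCMServer()
	s.AddFlags(fs, []string{}, []string{})

	if err := fs.Parse([]string{"--concurrent-deployment-syncs=10"}); err != nil {
		t.Fatalf("unexpected parse error: %v", err)
	}
	if s.ConcurrentDeploymentSyncs != 10 {
		t.Errorf("ConcurrentDeploymentSyncs = %d, want 10", s.ConcurrentDeploymentSyncs)
	}
}
```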