Commit Graph

92 Commits (35bd2a5e88a9e9acc00099eb05dfa8b2ff531b93)

Author SHA1 Message Date
Weibin Lin 842bd1e1ec update deployment, daemonset, replicaset, statefulset to apps/v1 2018-12-19 10:46:45 -05:00
andrewsykim 5329f09663 consolidate node deletion logic between node lifecycle and cloud node controller 2018-12-03 13:33:53 -05:00
Davanum Srinivas 954996e231
Move from glog to klog
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has explicit InitFlags() so we add them as necessary
- we update the other repositories that we vendor that made a similar
change from glog to klog
  * github.com/kubernetes/repo-infra
  * k8s.io/gengo/
  * k8s.io/kube-openapi/
  * github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicitly calling InitFlags in their init() methods

Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
2018-11-10 07:50:31 -05:00
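A minimal sketch of the explicit wiring this change requires in binaries and tests; `klog.InitFlags` and `klog.Flush` are klog's real API, while the surrounding `main` is illustrative:

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Unlike glog, klog does not register its flags implicitly;
	// callers must do so explicitly (nil means flag.CommandLine).
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.Info("flags initialized, logging via klog")
}
```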
Zhen Wang e35d808aa2 NodeLifecycleController treats node lease renewal as a heartbeat signal 2018-10-11 16:07:15 -07:00
k8s-ci-robot 0805860dba
Merge pull request #67870 from yue9944882/refactor/externalize-resource-quota-admission-controller
Externalize resource quota admission controller & controller reconciliation
2018-09-25 02:41:40 -07:00
Cheng Xing 8555408f42 Removing CRD installation from attach detach controller 2018-09-18 14:25:15 -07:00
Janet Kuo cbdc9b671f Make number of workers configurable 2018-09-04 14:21:14 -07:00
Janet Kuo 5186807587 Add TTL GC controller 2018-09-04 13:11:18 -07:00
Lucas Käldström 8aaa527d35
Fixup cmd/*controller-manager code after struct changes. Co-authored by @stewart-yu 2018-09-02 14:10:46 +03:00
saad-ali fdeb895d25 Automatically install CRDs during controller init 2018-08-31 12:25:59 -07:00
Jan Safranek 7d673cb8f0 Pass new CSI API Client and informer to Volume Plugins 2018-08-31 12:25:59 -07:00
yue9944882 a4f33a6a9f align imports for cmd 2018-08-27 21:50:15 +08:00
David Eads fb7d137ea2 add debug handler capability for individual controllers 2018-07-26 13:24:36 -04:00
lichuqiang bccc8fe979 Provision interface change 2018-06-05 16:35:16 +08:00
Kubernetes Submit Queue 5a54555f59
Merge pull request #63049 from andrewsykim/kcm-nodeipam
Automatic merge from submit-queue (batch tested with PRs 63049, 59731). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

re-enable nodeipam in kube-controller-manager

**What this PR does / why we need it**:
Re-enables the nodeipam controller for external clouds. Also does a small refactor so that we don't need to pass `allocateNodeCidr` into the controller.

In v1.10 we made a change (9187b343e1, in https://github.com/kubernetes/kubernetes/pull/57492) where nodeipam would be disabled for any cluster that sets `--cloud-provider=external`. The original intention behind this was that the nodeipam controller is cloud-specific for some clouds (only GCE at the moment), so it should be moved to the CCM (cloud controller manager). After some discussions with wg-cloud-provider, it makes sense to re-enable the nodeipam controller in the KCM and have the GCE CCM enable its own cloud-specific IPAM controller as part of [Initialize()](https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/cloud.go#L33-L35). This would allow GCE to run nodeipam in both the KCM (by setting `--cloud-provider=gce` and `--allocate-node-cidrs`) and the CCM (once implemented in `Initialize()`) without disabling nodeipam in the KCM for all external clouds, and avoids having to implement nodeipam in the CCM.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes # 

**Special notes for your reviewer**:


**Release note**:
```release-note
Re-enable nodeipam controller for external clouds. 
```
2018-05-11 11:07:12 -07:00
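A hypothetical sketch of the gating this PR restores: only CIDR allocation decides whether nodeipam starts, with no special case for external clouds. The type and function names are illustrative, not the actual kube-controller-manager code:

```go
package main

import "fmt"

// componentConfig is an illustrative stand-in for the KCM's configuration.
type componentConfig struct {
	CloudProvider     string
	AllocateNodeCIDRs bool
}

// startNodeIpamController starts nodeipam whenever CIDR allocation is
// requested; the v1.10 check that skipped it for --cloud-provider=external
// is gone, so external clouds get the controller again.
func startNodeIpamController(cfg componentConfig) bool {
	if !cfg.AllocateNodeCIDRs {
		return false
	}
	// ... construct and run the nodeipam controller here ...
	return true
}

func main() {
	cfg := componentConfig{CloudProvider: "external", AllocateNodeCIDRs: true}
	fmt.Println("nodeipam started:", startNodeIpamController(cfg))
}
```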
Shyam Jeedigunta 302af9bfe4 Remove 20x factor in garbage-collector qps 2018-05-10 12:21:57 +02:00
David Eads cf4f7aab65 update garbage collection to use the new dynamic client 2018-05-07 09:01:39 -04:00
hzxuzhonghu 7f93d11f9e Add RESTMapper to ControllerContext and make it generic for controllers 2018-04-28 09:58:43 +08:00
Kubernetes Submit Queue 95841fe5ea
Merge pull request #63251 from liggitt/namespace-controller-qps
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Bump QPS on namespace controller

https://github.com/kubernetes/kubernetes/pull/62913 switched from using a client pool, where each groupVersionResource got its own rest client, to a single client.

This increases the QPS to account for the increased number of requests now flowing through a single rest client's rate limiter.

Fixes #63240

```release-note
NONE
```
2018-04-27 10:06:56 -07:00
Jordan Liggitt 1bddcdcf44
Bump QPS on namespace controller
https://github.com/kubernetes/kubernetes/pull/62913 switched from using a client pool, where each groupVersionResource got its own rest client, to a single client.

This increases the QPS to account for the increased number of requests now flowing through a single rest client's rate limiter.
2018-04-27 10:11:14 -04:00
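The kind of change involved, sketched against client-go's `rest.Config` (the concrete numbers are illustrative, not the values chosen in the PR):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	config := &rest.Config{Host: "https://localhost:6443"}
	// One shared rest client means one rate limiter now covers traffic
	// that previously spread across per-GVR clients, so QPS and Burst
	// must be scaled up to compensate.
	config.QPS = 50
	config.Burst = 100

	fmt.Printf("namespace controller client: QPS=%v Burst=%v\n", config.QPS, config.Burst)
}
```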
David Eads e2fc5cf259 remove versioning interface 2018-04-27 07:56:42 -04:00
David Eads 3632037e60 add easy to use dynamic client 2018-04-25 08:55:26 -04:00
andrewsykim 0a164760dc re-enable nodeipam in kube-controller-manager 2018-04-23 22:28:37 -04:00
Pavel Pospisil d3ddf7eb8b Always Start pvc-protection-controller and pv-protection-controller
After K8s 1.10 is upgraded to K8s 1.11, the finalizer [kubernetes.io/pvc-protection] is added to PVCs
because the StorageObjectInUseProtection feature will be GA in K8s 1.11.
However, when K8s 1.11 is downgraded to K8s 1.10 and the StorageObjectInUseProtection feature is disabled,
the finalizers remain on the PVCs. As the pvc-protection-controller is not started in K8s 1.10, the finalizers
are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system
but remain in the Terminating phase.
The same applies to the pv-protection-controller and the [kubernetes.io/pv-protection] finalizer on PVs.

That's why the pvc-protection-controller is always started: it removes finalizers
from PVCs automatically when a PVC is not in active use by a pod.
Likewise, the pv-protection-controller is always started, to remove finalizers from PVs automatically when a PV is no
longer bound to a PVC.

Related issue: https://github.com/kubernetes/kubernetes/issues/60764
2018-04-20 19:54:50 +02:00
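A simplified sketch of what the always-running pvc-protection-controller does for a deleted, unused claim: strip the kubernetes.io/pvc-protection finalizer so deletion can complete. This is illustrative, not the controller's actual code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// removeProtectionFinalizer drops kubernetes.io/pvc-protection from a PVC,
// which unblocks deletion once no pod is using the claim.
func removeProtectionFinalizer(pvc *corev1.PersistentVolumeClaim) {
	kept := pvc.Finalizers[:0]
	for _, f := range pvc.Finalizers {
		if f != "kubernetes.io/pvc-protection" {
			kept = append(kept, f)
		}
	}
	pvc.Finalizers = kept
}

func main() {
	pvc := &corev1.PersistentVolumeClaim{ObjectMeta: metav1.ObjectMeta{
		Name:       "data",
		Finalizers: []string{"kubernetes.io/pvc-protection"},
	}}
	removeProtectionFinalizer(pvc)
	fmt.Println("remaining finalizers:", pvc.Finalizers)
}
```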
stewart-yu ec6399be53 split up the component config into smaller configs 2018-04-13 08:40:54 +08:00
NickrenREN dad0fa07b7 rename StorageProtection to StorageObjectInUseProtection 2018-02-21 10:48:56 +08:00
stewart-yu 0cbe0a6034 controller-manager: switch to config/option struct pattern 2018-02-13 11:16:17 +01:00
Kubernetes Submit Queue 5cecc6ec68
Merge pull request #59350 from jsafrane/recycler-wait
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Do not recycle volumes that are used by pods

**What this PR does / why we need it**:
Recycler should wait until all pods that use a volume are finished.

Consider this scenario:

1. User creates a PVC that's bound to a NFS PV.
2. User creates a pod that uses the PVC
3. User deletes the PVC.

Now the PV gets `Released` (the PVC no longer exists) and recycled, even though the PV is still mounted in a running pod. PVC protection won't help us here, because it puts a finalizer on the PVC, which is under the user's control, and the user can remove it.

This PR checks that no pod is using a PV before recycling it.

**Release note**:

```release-note
NONE
```

/sig storage
2018-02-07 10:01:32 -08:00
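A simplified sketch of the safety check described above, under the assumption that the recycler can enumerate pods (the real code uses a pod informer); names are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isVolumeInUse reports whether any pod still references the claim the PV
// is bound to; the recycler must not scrub the volume while this is true.
func isVolumeInUse(pv *corev1.PersistentVolume, pods []corev1.Pod) bool {
	claimRef := pv.Spec.ClaimRef
	if claimRef == nil {
		return false // unbound PV: no pod can be mounting it via a claim
	}
	for _, pod := range pods {
		if pod.Namespace != claimRef.Namespace {
			continue
		}
		for _, vol := range pod.Spec.Volumes {
			if pvc := vol.PersistentVolumeClaim; pvc != nil && pvc.ClaimName == claimRef.Name {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(isVolumeInUse(&corev1.PersistentVolume{}, nil))
}
```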
Jan Safranek c96c0495f4 Pass pod informer to PV controller 2018-02-05 15:40:25 +01:00
Clayton Coleman d07a608607 Promote v1alpha1 meta to v1beta1
No code changes, just renames
2018-02-02 14:00:45 -05:00
NickrenREN 3fee293607 Add PV protection controller 2018-01-31 20:18:54 +08:00
NickrenREN 2a2f88b939 Rename PVCProtection feature gate so that PV protection can share the feature gate with PVC protection 2018-01-31 20:02:01 +08:00
Walter Fender 9187b343e1 Split the NodeController into lifecycle and ipam pieces.
Preparatory work for removing the cloud provider dependency from the node
controller running in the Kube Controller Manager. This splits the node
controller into its two major pieces: lifecycle and CIDR/IP
management. Both pieces currently need the cloud system to do their work.
Removing the lifecycle piece's dependency on the cloud will be fixed in a follow-up PR.

Moved node scheduler code to live with node lifecycle controller.
Got the IPAM/Lifecycle split completed. Still need to rename pieces.
Made changes to the utils and tests so they would be in the appropriate
package.
Moved the node based ipam code to nodeipam.
Made the relevant tests pass.
Moved common node controller util code to nodeutil.
Removed unneeded pod informer sync from node ipam controller.
Fixed linter issues.
Factored in feedback from @gmarek.
Factored in feedback from @mtaufen.
Undoing unneeded change.
2018-01-04 12:48:08 -08:00
jsafrane 4ad4ee3153 Added PVC Protection Controller
This controller removes the protection finalizer from PVCs that are being
deleted and are not referenced by any pod.
2017-11-23 11:46:34 +01:00
Kubernetes Submit Queue 42d5dc709e
Merge pull request #55259 from ironcladlou/gc-partial-discovery
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Tolerate partial discovery in garbage collector

Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.

Fixes #55022.

```release-note
API discovery failures no longer crash the kube controller manager via the garbage collector.
```

/cc @caesarxuchao
2017-11-07 18:53:51 -08:00
Dan Mace c3dd82c30c Tolerate partial discovery in garbage collector
Allow the garbage collector to tolerate partial discovery failures. On a
partial failure, use whatever was discovered, log the failures, and
allow the resync logic to try again later.

Fixes #55022.
2017-11-07 16:54:49 -05:00
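A sketch of tolerating partial discovery with client-go; `IsGroupDiscoveryFailedError` distinguishes a partial failure (some API groups failed, results still usable) from a total one. The in-cluster setup is illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Discovery can return results alongside an error when only some API
	// groups fail; use what was discovered and let the resync retry later.
	resources, err := client.Discovery().ServerPreferredResources()
	if err != nil {
		if !discovery.IsGroupDiscoveryFailedError(err) {
			panic(err) // total failure: nothing usable was discovered
		}
		fmt.Println("partial discovery failure, continuing:", err)
	}
	fmt.Println("usable resource lists:", len(resources))
}
```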
Kubernetes Submit Queue 576c9118a6
Merge pull request #53592 from frodenas/bootstrap-controller
Automatic merge from submit-queue (batch tested with PRs 53592, 52562, 55175, 55213). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Check RegisterMetricAndTrackRateLimiterUsage error when starting BootstrapSigner & TokenCleaner controllers

**What this PR does / why we need it**:
Prevent the `BootstrapSigner` and `TokenCleaner` controllers from starting if `metrics.RegisterMetricAndTrackRateLimiterUsage` returns an error.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: complements #53571 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-11-07 11:21:15 -08:00
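A sketch of the guard this PR adds, modeled from memory on that era's controller startup code (`pkg/util/metrics` in the main repo); the function name and wiring are illustrative:

```go
package app

import (
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/kubernetes/pkg/util/metrics"
)

// startTokenCleaner refuses to start the controller when registering the
// rate-limiter usage metric fails, instead of silently ignoring the error.
func startTokenCleaner(client clientset.Interface) error {
	if limiter := client.CoreV1().RESTClient().GetRateLimiter(); limiter != nil {
		if err := metrics.RegisterMetricAndTrackRateLimiterUsage("token_cleaner", limiter); err != nil {
			return err // abort startup rather than run with untracked limits
		}
	}
	// ... construct and run the TokenCleaner here ...
	return nil
}
```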
Alexandros Kosiaris 4dddb8c6b3 Only parse ClusterCIDR, ServiceCIDR if AllocateNodeCIDRs
ClusterCIDR and ServiceCIDR are settings that are only used if at least
AllocateNodeCIDRs is set. The route controller additionally requires
ConfigureCloudRoutes to be true. Since AllocateNodeCIDRs is false by
default, guard the parsing of these settings so that the logs are not
spammed unnecessarily. Also amend the kube-controller-manager
documentation for the two settings to point out that AllocateNodeCIDRs
must be true as well.
2017-11-02 19:25:03 +02:00
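A sketch of the guard described above, with illustrative names: the CIDR settings are parsed (and parse failures logged) only when AllocateNodeCIDRs is set:

```go
package main

import (
	"fmt"
	"net"
)

// parseCIDRs only touches ClusterCIDR/ServiceCIDR when they can actually
// be used, so clusters without CIDR allocation no longer log parse noise.
func parseCIDRs(allocateNodeCIDRs bool, clusterCIDR, serviceCIDR string) (cluster, service *net.IPNet) {
	if !allocateNodeCIDRs {
		return nil, nil // settings are unused: skip parsing and logging
	}
	var err error
	if _, cluster, err = net.ParseCIDR(clusterCIDR); err != nil {
		fmt.Println("unparseable cluster CIDR:", err)
	}
	if _, service, err = net.ParseCIDR(serviceCIDR); err != nil {
		fmt.Println("unparseable service CIDR:", err)
	}
	return cluster, service
}

func main() {
	cluster, service := parseCIDRs(true, "10.244.0.0/16", "10.96.0.0/12")
	fmt.Println(cluster, service)
}
```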
Ferran Rodenas d67898b875 Check RegisterMetricAndTrackRateLimiterUsage error when starting controllers
Signed-off-by: Ferran Rodenas <rodenasf@vmware.com>
2017-11-01 12:46:07 +01:00
Derek Carr 7f88e91892 Update quota controller to monitor all types 2017-10-27 11:07:53 -04:00
Kevin 4c8539cece use core client with explicit version globally 2017-10-27 15:48:32 +08:00
Dr. Stefan Schimanski 7773a30f67 pkg/api/legacyscheme: fixup imports 2017-10-18 17:23:55 +02:00
Hemant Kumar cd2a68473a Implement controller for resizing volumes 2017-09-04 09:02:34 +02:00
Kubernetes Submit Queue b832992fc6 Merge pull request #49257 from k82cn/k8s_42001
Automatic merge from submit-queue (batch tested with PRs 51574, 51534, 49257, 44680, 48836)

Task 1: Tainted node by condition.

**What this PR does / why we need it**:
Taint nodes by condition for MemoryPressure, OutOfDisk, and so on.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: part of #42001 

**Release note**:
```release-note
Nodes are tainted by conditions as follows:
  * 'node.kubernetes.io/network-unavailable=:NoSchedule' if NetworkUnavailable is true
  * 'node.kubernetes.io/disk-pressure=:NoSchedule' if DiskPressure is true
  * 'node.kubernetes.io/memory-pressure=:NoSchedule' if MemoryPressure is true
  * 'node.kubernetes.io/out-of-disk=:NoSchedule' if OutOfDisk is true
```
2017-08-31 23:13:20 -07:00
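A simplified sketch of the condition-to-taint mapping from the release note above; the taint keys match the note, while the function and its wiring are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// conditionTaintKeys maps a true node condition to the NoSchedule taint
// key listed in the release note.
var conditionTaintKeys = map[corev1.NodeConditionType]string{
	"NetworkUnavailable": "node.kubernetes.io/network-unavailable",
	"DiskPressure":       "node.kubernetes.io/disk-pressure",
	"MemoryPressure":     "node.kubernetes.io/memory-pressure",
	"OutOfDisk":          "node.kubernetes.io/out-of-disk",
}

// taintsForNode derives the taints a node should carry from its conditions.
func taintsForNode(node *corev1.Node) []corev1.Taint {
	var taints []corev1.Taint
	for _, cond := range node.Status.Conditions {
		if cond.Status != corev1.ConditionTrue {
			continue
		}
		if key, ok := conditionTaintKeys[cond.Type]; ok {
			taints = append(taints, corev1.Taint{Key: key, Effect: corev1.TaintEffectNoSchedule})
		}
	}
	return taints
}

func main() {
	node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: "MemoryPressure", Status: corev1.ConditionTrue},
	}}}
	fmt.Println(taintsForNode(node))
}
```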
David Eads 253b047d89 update GC controller to wait until controllers have been initialized once 2017-08-31 09:01:38 -04:00
Cheng Xing 396c3c7c6f Adding dynamic Flexvolume plugin discovery capability, using filesystem watch. 2017-08-25 11:42:32 -07:00
Klaus Ma 55fa10c182 Tainted node by condition. 2017-08-11 09:55:29 +08:00
Kubernetes Submit Queue 9bbcd4af60 Merge pull request #50335 from ironcladlou/gc-discovery-optimization
Automatic merge from submit-queue (batch tested with PRs 49642, 50335, 50390, 49283, 46582)

Improve GC discovery sync performance

Improve GC discovery sync performance by only syncing when discovered
resource diffs are detected. Before, the GC worker pool was shut down
and monitors resynced unconditionally every sync period, leading to
significant processing delays causing test flakes where otherwise
reasonable GC timeouts were being exceeded.

Related to https://github.com/kubernetes/kubernetes/issues/49966.

/cc @kubernetes/sig-api-machinery-bugs

```release-note
NONE
```
2017-08-10 00:53:19 -07:00
Dan Mace 3d6d57a18f Improve GC discovery sync performance
Improve GC discovery sync performance by only syncing when discovered
resource diffs are detected. Before, the GC worker pool was shut down
and monitors resynced unconditionally every sync period, leading to
significant processing delays causing test flakes where otherwise
reasonable GC timeouts were being exceeded.

Related to https://github.com/kubernetes/kubernetes/issues/49966.
2017-08-09 09:16:05 -04:00
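A sketch of the diff-before-resync idea, using GVR sets as the compared state; the names and comparison strategy are illustrative, not the GC's actual code:

```go
package main

import (
	"fmt"
	"reflect"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// syncIfChanged keeps the last observed set of deletable resources and only
// triggers the expensive worker/monitor resync when discovery reports a
// different set.
func syncIfChanged(last, current map[schema.GroupVersionResource]struct{}, resync func()) map[schema.GroupVersionResource]struct{} {
	if reflect.DeepEqual(last, current) {
		return last // no diff: leave the worker pool and monitors running
	}
	resync()
	return current
}

func main() {
	pods := schema.GroupVersionResource{Version: "v1", Resource: "pods"}
	state := map[schema.GroupVersionResource]struct{}{pods: {}}
	state = syncIfChanged(state, state, func() { fmt.Println("resync") }) // prints nothing
	fmt.Println("tracked resources:", len(state))
}
```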
Bowei Du 27854fa0d8 Break up node controller into packages
This change makes NO actual code changes other than moving constituent
parts into packages.
2017-08-08 15:33:56 -07:00