Individual implementations are not yet being moved.
Fixed all dependencies which call the interface.
Fixed golint exceptions to reflect the move.
Added project info as per @dims and
https://github.com/kubernetes/kubernetes-template-project.
Added dims to the security contacts.
Fixed minor issues.
Added missing template files.
Copied ControllerClientBuilder interface to cp.
This allows us to break the only dependency on K8s/K8s.
Added TODO to ControllerClientBuilder.
Fixed Godeps.
Factored in feedback from JustinSB.
Since we are going to treat both node status and node lease as node
heartbeat/health signals, this PR makes the rename changes so that the
follow-up PRs are easier to review.
HPA will take the initial size of the autoscalee into account, to avoid hastily overriding
recommendations made by HPA (if HPA set the size and was then restarted) or by the user
(the initial size should be treated as a human-generated recommendation).
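A minimal Go sketch of the idea, with illustrative names rather than the HPA controller's actual API: the first time the controller sees a target, the current size is recorded as if it were a prior recommendation, so a freshly restarted HPA does not immediately override it.
```go
package main

import (
	"fmt"
	"time"
)

type timestampedRecommendation struct {
	recommendation int32
	timestamp      time.Time
}

// seedRecommendations treats the autoscalee's current size as a prior
// (human- or previous-HPA-generated) recommendation the first time this
// HPA sees the target. Names here are illustrative, not the real API.
func seedRecommendations(history map[string][]timestampedRecommendation, key string, currentReplicas int32) {
	if _, seen := history[key]; !seen {
		history[key] = []timestampedRecommendation{{currentReplicas, time.Now()}}
	}
}

func main() {
	history := map[string][]timestampedRecommendation{}
	seedRecommendations(history, "default/web", 5)
	fmt.Println(history["default/web"][0].recommendation) // 5
}
```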
- Use VolumeAvailable instead of an empty or pending phase in tests
- Add a test case to verify that findMatchingVolume will not choose non-available PVs if they are not pre-bound
- Add a test case to verify that syncClaim will not choose non-available PVs if they are not pre-bound (see the sketch after this list)
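A Go sketch of the availability rule these tests exercise, heavily simplified: the real findMatchingVolume also checks capacity, access modes, and selectors.
```go
package main

import "fmt"

type phase string

const available phase = "Available"

type pv struct {
	phase    phase
	claimRef string // name of the pre-bound PVC, "" if none
}

// eligible says whether a PV may be matched to the named claim: a PV that
// is not Available can only be chosen when it is pre-bound to that claim.
func eligible(v pv, pvcName string) bool {
	if v.claimRef == pvcName {
		return true // pre-bound: phase may already be Bound
	}
	return v.phase == available
}

func main() {
	fmt.Println(eligible(pv{phase: "Pending"}, "claim-1"))                       // false
	fmt.Println(eligible(pv{phase: "Pending", claimRef: "claim-1"}, "claim-1"))  // true
	fmt.Println(eligible(pv{phase: available}, "claim-1"))                       // true
}
```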
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Added unschedulable and network-unavailable toleration.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #61312
fixes: https://github.com/kubernetes/kubernetes/issues/67606
**Release note**:
```release-note
If `TaintNodesByCondition` is enabled, add `node.kubernetes.io/unschedulable` and
`node.kubernetes.io/network-unavailable` automatically to DaemonSet pods.
```
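For illustration, a sketch of the two tolerations using the real core/v1 types; how the DaemonSet controller wires them into pod specs is simplified away here.
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// daemonSetTolerations returns the node-condition taints a DaemonSet pod
// must tolerate so it can still be scheduled onto such nodes when
// TaintNodesByCondition is enabled.
func daemonSetTolerations() []v1.Toleration {
	return []v1.Toleration{
		{Key: "node.kubernetes.io/unschedulable", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoSchedule},
		{Key: "node.kubernetes.io/network-unavailable", Operator: v1.TolerationOpExists, Effect: v1.TaintEffectNoSchedule},
	}
}

func main() {
	for _, t := range daemonSetTolerations() {
		fmt.Printf("%s (%s, %s)\n", t.Key, t.Operator, t.Effect)
	}
}
```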
Automatic merge from submit-queue (batch tested with PRs 65250, 68241). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Use informer cache instead of active pod gets in HPA controller.
**What this PR does / why we need it**:
Use informer cache instead of active pod gets in HPA controller.
**Which issue(s) this PR fixes**:
Fixes #68217
**Release note**:
```release-note
kube-controller-manager: use informer cache instead of active pod gets in HPA controller
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Node info registration in kubelet
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67683
**Special notes for your reviewer**:
Feature issue: https://github.com/kubernetes/features/issues/557
Design doc: https://github.com/kubernetes/community/pull/2034
Missing pieces:
* CSI client retry and exponential backoff logic.
* CSINodeInfo object validation
* e2e test with all the CSI machinery.
An RBAC rule is also added to support external-provisioner topology updates.
**Release note**:
```release-note
Registers volume topology information reported by a node-level Container Storage Interface (CSI) driver. This enables Kubernetes support of CSI topology mechanisms.
```
Automatic merge from submit-queue (batch tested with PRs 67950, 68195). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Consolidate componentconfig code standards
**What this PR does / why we need it**:
This PR fixes a bunch of very small misalignments in ComponentConfig packages:
- Add sane comments to all functions/variables in componentconfig `register.go` files
- Make the `register.go` files of componentconfig pkgs follow the same pattern and not differ from each other like they do today.
- Register the `openapi-gen` tag in all `doc.go` files where the pkg contains _external_ types.
- Add the `groupName` tag where missing
- Fix cases where `addKnownTypes` was registered twice in the `SchemeBuilder`
- Add `Readme` and `OWNERS` files to `Godeps` directories if missing.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
/assign @sttts @thockin
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Let the service controller retry when persistUpdate returns a conflict error
**What this PR does / why we need it**:
If a load balancer is changed while provisioning, it will fall into an error state and will not self-recover.
This PR picks up the conflict error and lets the service controller retry in order to get the load balancer out of the error state.
**Special notes for your reviewer**:
/assign @MrHohn @rramkumar1
**Release note**:
```release-note
Let the service controller retry creating the load balancer when persistUpdate fails due to a conflict.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Implement semantic comparison of VolumeNodeAffinity for unit tests
**What this PR does / why we need it**:
Implements a semantic comparison function for VolumeNodeAffinity that is not sensitive to the ordering of its various members. The previous reflect.DeepEqual comparison was order-sensitive, which made the test flaky.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/67852
**Special notes for your reviewer**:
We want to be able to successfully match `VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[{[{a In [1]} {b In [2 3]}] []}],},}` and `VolumeNodeAffinity{Required:&NodeSelector{NodeSelectorTerms:[{[{b In [3 2]} {a In [1]}] []}],},}` without being sensitive to the ordering of the requirements with keys `a` and `b`, or to the ordering of the values under key `b`. This fix enables such a semantic comparison of VolumeNodeAffinity.
We can move `volumeNodeAffinitiesEqual` to volume/utils after code freeze.
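A minimal sketch of such an order-insensitive comparison, assuming only MatchExpressions matter and term order is fixed; the merged `volumeNodeAffinitiesEqual` helper may differ in detail.
```go
package main

import (
	"fmt"
	"reflect"
	"sort"

	v1 "k8s.io/api/core/v1"
)

// normalize sorts the values inside each requirement and the requirements
// within each term by key, so DeepEqual no longer depends on ordering.
// (Term order across NodeSelectorTerms is not handled in this sketch.)
func normalize(ns *v1.NodeSelector) *v1.NodeSelector {
	out := ns.DeepCopy()
	for i := range out.NodeSelectorTerms {
		reqs := out.NodeSelectorTerms[i].MatchExpressions
		for j := range reqs {
			sort.Strings(reqs[j].Values)
		}
		sort.Slice(reqs, func(a, b int) bool { return reqs[a].Key < reqs[b].Key })
	}
	return out
}

func affinitiesEqual(a, b *v1.NodeSelector) bool {
	return reflect.DeepEqual(normalize(a), normalize(b))
}

func main() {
	a := &v1.NodeSelector{NodeSelectorTerms: []v1.NodeSelectorTerm{{MatchExpressions: []v1.NodeSelectorRequirement{
		{Key: "a", Operator: v1.NodeSelectorOpIn, Values: []string{"1"}},
		{Key: "b", Operator: v1.NodeSelectorOpIn, Values: []string{"2", "3"}},
	}}}}
	b := &v1.NodeSelector{NodeSelectorTerms: []v1.NodeSelectorTerm{{MatchExpressions: []v1.NodeSelectorRequirement{
		{Key: "b", Operator: v1.NodeSelectorOpIn, Values: []string{"3", "2"}},
		{Key: "a", Operator: v1.NodeSelectorOpIn, Values: []string{"1"}},
	}}}}
	fmt.Println(affinitiesEqual(a, b)) // true
}
```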
**Release note**:
```release-note
NONE
```
/sig storage
cc @msau42
Automatic merge from submit-queue (batch tested with PRs 67709, 67556). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix volume scheduling issue with pod affinity and anti-affinity
**What this PR does / why we need it**:
The previous design of the volume scheduler had volume assume + bind done before pod assume + bind. This causes issues when trying to evaluate future pods with pod affinity/anti-affinity because the pod has not been assumed while the volumes have been decided.
This PR changes the design so that volume and pod are assumed first, followed by volume and pod binding. Volume binding waits (asynchronously) for the operations to complete or error. This eliminates the subsequent passes through the scheduler to wait for volume binding to complete (although pod events or resyncs may still cause the pod to run through scheduling while binding is still in progress). This design also aligns better with the scheduler framework design, so will make it easier to migrate in the future.
Many changes had to be made in the volume scheduler to handle this new design, mostly around:
* How we cache pending binding operations. Now, any delayed binding PVC that is not fully bound must have a cached binding operation. This also means bind API updates may be repeated.
* Waiting for the bind operation to fully complete, and detecting failure conditions to abort the bind and retry scheduling (see the sketch after this list).
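A high-level Go sketch of the reordered flow, with illustrative names rather than the scheduler's actual API: assume volumes and pod first, then trigger both bindings and wait asynchronously.
```go
package main

import "fmt"

type pod struct{ name string }

// assumeVolumes caches the PV/PVC binding decisions for the pod.
func assumeVolumes(p pod) error { return nil }

// assumePod reserves the chosen node for the pod in the scheduler cache.
func assumePod(p pod) error { return nil }

// bindVolumesThenPod runs asynchronously:
//  1. issue the PV/PVC binding API updates (these may be repeated on retry),
//  2. wait for all PVCs to become fully bound, watching for
//     binding/provisioning failure conditions,
//  3. only then bind the pod to the assumed node.
func bindVolumesThenPod(p pod) error { return nil }

func schedule(p pod) {
	if err := assumeVolumes(p); err != nil {
		return // pod goes back through scheduling on the next pass
	}
	if err := assumePod(p); err != nil {
		return
	}
	go func() {
		if err := bindVolumesThenPod(p); err != nil {
			fmt.Println("binding failed, retry scheduling:", err)
		}
	}()
}

func main() { schedule(pod{name: "mypod"}) }
```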
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #65131
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes issue where pod scheduling may fail when using local PVs and pod affinity and anti-affinity without the default StatefulSet OrderedReady pod management policy
```
Automatic merge from submit-queue (batch tested with PRs 66840, 68159). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Cluster Registry and Node Info CRDs Improvements
**What this PR does / why we need it**:
https://github.com/kubernetes/kubernetes/pull/67803 merged before I could address @lavalamp's feedback. This PR addresses his feedback.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Follow up on PR https://github.com/kubernetes/kubernetes/pull/67803
**Special notes for your reviewer**:
**Release note**:
```release-note
```
/assign @lavalamp
/assign @thockin
CC @jsafrane @vladimirvivien @verult @gnufied @childsb
* FindPodVolumes
  * Prebound PVCs are treated like unbound immediate PVCs and will error
  * Always check for fully bound PVCs, and cache bindings for not fully bound PVCs
* BindPodVolumes
  * Retry API updates for not fully bound PVCs even if the assume cache already marked them
  * Wait for PVCs to become fully bound after making the API updates
  * Error when detecting binding/provisioning failure conditions
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Replace scale down window
**What this PR does / why we need it**:
Replace scale down forbidden window with scale down stabilization window.
This allows scale down based on more than one sample, to avoid rapidly changing size up and down for controllers with fluctuating load.
A bit more in https://docs.google.com/document/d/1IdG3sqgCEaRV3urPLA29IDudCufD89RYCohfBPNeWIM
This PR is a copy of #67771 with resolved comments.
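A minimal Go sketch of the stabilization idea, with assumed names and a hard-coded example window (the real controller reads the window from a flag): keep every recommendation made during the window and scale down only to the highest of them, so a brief dip in load cannot shrink the target.
```go
package main

import (
	"fmt"
	"time"
)

type rec struct {
	replicas int32
	at       time.Time
}

// stabilizedRecommendation returns the highest recommendation seen inside
// the stabilization window, so scale down is bounded by recent history
// while scale up can still take effect immediately.
func stabilizedRecommendation(history []rec, window time.Duration, latest int32) int32 {
	cutoff := time.Now().Add(-window)
	max := latest
	for _, r := range history {
		if r.at.After(cutoff) && r.replicas > max {
			max = r.replicas
		}
	}
	return max
}

func main() {
	now := time.Now()
	history := []rec{{10, now.Add(-2 * time.Minute)}, {4, now.Add(-1 * time.Minute)}}
	// The latest recommendation is 3, but 10 was seen inside the 5m window.
	fmt.Println(stabilizedRecommendation(history, 5*time.Minute, 3)) // 10
}
```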
**Release note**:
```release-note
Replace the scale down forbidden window with a scale down stabilization window. Rather than waiting a fixed period of time between scale downs, the HPA now scales down to the highest recommendation it made during the scale down stabilization window.
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add gnufied as approver for attach/detach controller
Hopefully he has reviewed and made enough fixes in this area to understand the code thoroughly.
```release-note
NONE
```
/assign @saad-ali @jsafrane
Automatic merge from submit-queue (batch tested with PRs 64283, 67910, 67803, 68100). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Cluster Registry and Node Info CRDs
**What this PR does / why we need it**:
Introduces the new `CSIDriver` and `CSINodeInfo` API Object as proposed in https://github.com/kubernetes/community/pull/2514 and https://github.com/kubernetes/community/pull/2034
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/features/issues/594
**Special notes for your reviewer**:
Per the discussion in https://groups.google.com/d/msg/kubernetes-sig-storage-wg-csi/x5CchIP9qiI/D_TyOrn2CwAJ the API is being added to the staging directory of the `kubernetes/kubernetes` repo because the consumers will be the attach/detach controller and possibly the kubelet, but it will be installed as a CRD (because we want to move in the direction where the API server is Kubernetes-agnostic, and all Kubernetes-specific types are installed).
**Release note**:
```release-note
Introduce CSI Cluster Registration mechanism to ease CSI plugin discovery and allow CSI drivers to customize Kubernetes' interaction with them.
```
CC @jsafrane
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Change CPU sample sanitization in HPA.
**What this PR does / why we need it**:
Change CPU sample sanitization in HPA.
Ignore samples if:
- the pod is being initialized (5 minutes from start, defined by flag), and:
  - the pod is unready, or
  - the pod is ready, but the full metric window hasn't been collected since the transition;
- the pod is initialized (5 minutes from start, defined by flag), and:
  - the pod has never been ready after the initial readiness period.
**Release note**:
```release-note
Improve CPU sample sanitization in HPA by taking metric's freshness into account.
```
Automatic merge from submit-queue (batch tested with PRs 67745, 67432, 67569, 67825, 67943). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix VMware VM freezing bug by reverting #51066
**What this PR does / why we need it**: kube-controller-manager, vSphere-specific: when the controller tries to attach a volume to Node A that is already attached to Node B, Node A freezes until the volume is attached. Kubernetes keeps trying to attach the volume because it thinks the volume is 'multi-attachable' when it's not. #51066 is the culprit.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/vmware/kubernetes/issues/500 / https://github.com/vmware/kubernetes/issues/502 (same issue)
**Special notes for your reviewer**:
- Repro:
vSphere installation, any k8s version from 1.8 and above, a pod with an attached PV/PVC/VMDK:
1. cordon the node the pod is running on
2. `kubectl delete po/[pod] --force --grace-period=0`
3. the pod is immediately rescheduled to a new node; grab the new node from `kubectl describe [pod]` and attempt to ping it or SSH into it
4. the pings/SSH fail to reach the new node, and `kubectl get node` shows it as 'NotReady'. The new node is frozen until the volume is attached: usually a one-minute freeze for one volume in a low-load cluster, and many minutes more with higher loads and more volumes involved.
- Patch verification:
Tested a custom-patched 1.9.10 kube-controller-manager with #51066 reverted; the above bug is resolved and can no longer be reproduced. The new node doesn't freeze at all, and attaching happens quite quickly, within a few seconds.
**Release note**:
```release-note
Fix vSphere VM freezing bug by reverting #51066
```
Automatic merge from submit-queue (batch tested with PRs 67067, 67947). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Do not count soft-deleted pods for scaling purposes in HPA controller
**What this PR does / why we need it**:
The metrics of "soft-deleted" pods, i.e. pods pending deletion, should probably not matter for scaling purposes, since they'll be gone "soon", whether they're NodeLost or deleted normally.
As long as soft-deleted pods still exist, they prevent normal scale up.
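A sketch of the filtering using the real core/v1 types; where exactly the HPA's replica calculator applies it is simplified away here.
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// countablePods drops "soft-deleted" pods (those with a DeletionTimestamp)
// so their metrics no longer affect the replica count calculation.
func countablePods(pods []v1.Pod) []v1.Pod {
	var out []v1.Pod
	for _, p := range pods {
		if p.DeletionTimestamp != nil {
			continue // soft-deleted: it will be gone soon, ignore it
		}
		out = append(out, p)
	}
	return out
}

func main() {
	now := metav1.Now()
	pods := []v1.Pod{{}, {ObjectMeta: metav1.ObjectMeta{DeletionTimestamp: &now}}}
	fmt.Println(len(countablePods(pods))) // 1
}
```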
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/issues/62845
**Special notes for your reviewer**:
**Release note**:
```release-note
Stop counting soft-deleted pods for scaling purposes in HPA controller to avoid soft-deleted pods incorrectly affecting scale up replica count calculation.
```
Automatic merge from submit-queue (batch tested with PRs 67938, 66719, 67883). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Remove incorrect glog error from Horizontal Pod Autoscaler Controller.
**What this PR does / why we need it**:
This PR removes an incorrect glog error from the Horizontal Pod Autoscaler controller.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67694, 64973, 67902). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
SCTP support implementation for Kubernetes
**What this PR does / why we need it**: This PR adds SCTP support to Kubernetes, including Service, Endpoint, and NetworkPolicy.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #44485
**Special notes for your reviewer**:
**Release note**:
```release-note
SCTP is now supported as additional protocol (alpha) alongside TCP and UDP in Pod, Service, Endpoint, and NetworkPolicy.
```
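For illustration, a Service declaring an SCTP port with the real core/v1 types; the sketch assumes a cluster with the alpha SCTP support enabled.
```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// SCTP now sits alongside TCP and UDP as a ServicePort protocol.
	svc := v1.Service{
		Spec: v1.ServiceSpec{
			Ports: []v1.ServicePort{
				{Name: "sig", Port: 9999, Protocol: v1.ProtocolSCTP},
				{Name: "web", Port: 80, Protocol: v1.ProtocolTCP},
			},
		},
	}
	for _, p := range svc.Spec.Ports {
		fmt.Printf("%s: %d/%s\n", p.Name, p.Port, p.Protocol)
	}
}
```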
Automatic merge from submit-queue (batch tested with PRs 64597, 67854, 67734, 67917, 67688). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix an issue where scheduling doesn't respect the NodeLost status of a node
**What this PR does / why we need it**:
- if a Node is in unknown status, apply the unreachable taint with the NoSchedule effect
- some internal data structure refactoring
- update unit tests
**Which issue(s) this PR fixes**:
Fixes #67733, and very likely #67536
**Special notes for your reviewer**:
See detailed reproducing steps in #67733.
**Release note**:
```release-note
Apply the unreachable taint to a node when it loses its network connection.
```
The requested Service Protocol is checked against the supported protocols of GCE Internal LB. The supported protocols are TCP and UDP.
SCTP is not supported by OpenStack LBaaS. If SCTP is requested in a Service with type=LoadBalancer, the request is rejected. Comment style is also corrected.
SCTP is not allowed for LoadBalancer Services or for HostPort. Kube-proxy can be configured not to start listening on the host port for SCTP: see the new SCTPUserSpaceNode parameter.
Changed the vendored github.com/nokia/sctp to github.com/ishidawataru/sctp, i.e. from now on we use the upstream version.
Fixed netexec.go compilation. Fixed various test cases.
Removed SCTP-related conformance tests. Updated netexec's pod definition and Dockerfile to expose the new SCTP port (8082).
Removed SCTP-related e2e test cases, as the e2e test systems do not support SCTP.
Removed SCTP-related firewall config from cluster/gce/util.sh. Corrected the variable name sctp_addr to sctpAddr in pkg/proxy/ipvs/proxier.go.
Copied cluster/gce/util.sh from master.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
controller expectations for deletion can be met by 404
A controller asks pod control to delete a pod because it wants the pod to be gone. It doesn't really care if the imperative delete action itself succeeds. When the pod is already gone (404), then the desire of the controller is met.
Since the controller's view of pods is cache-driven, you can hit this condition more often than you might like. See https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/replicaset/replica_set.go#L582 as an example.
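A minimal sketch of the idea with stand-in names; in the real code the check would be apierrors.IsNotFound and the expectations type lives in the controller package.
```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("pods \"x\" not found")

// isNotFound stands in for apierrors.IsNotFound in this sketch.
func isNotFound(err error) bool { return errors.Is(err, errNotFound) }

// deletePod simulates pod control hitting a 404: the cache said the pod
// exists, but the API server has already deleted it.
func deletePod(name string) error { return errNotFound }

// handleDelete treats 404 as success: the pod is already gone, so the
// controller's deletion expectation is observed instead of erroring.
func handleDelete(name string, observeDeletion func(string)) error {
	err := deletePod(name)
	if err != nil && !isNotFound(err) {
		return err // real failure: keep the expectation and retry later
	}
	observeDeletion(name) // success or 404: either way the desire is met
	return nil
}

func main() {
	err := handleDelete("x", func(n string) { fmt.Println("expectation lowered for", n) })
	fmt.Println("err:", err) // err: <nil>
}
```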
@kubernetes/sig-apps-bugs
/assign @janetkuo @tnozicka
```release-note
latent controller caches no longer cause repeated deletion messages for deleted pods
```
Automatic merge from submit-queue (batch tested with PRs 66916, 67252, 67794, 67619, 67328). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix HPA sample sanitization
**What this PR does / why we need it**: @mwielgus pointed out a case where HPA fails as a result of my changes to the HPA algorithm:
- Pods use a lot of CPU during initialization and become ready right after they initialize,
- this triggers a scale up,
- when the new pods become ready we count their usage (even though it's not related to any work that needs doing),
- this triggers another scale up, even though the existing pods can handle the work without problems.
The fix is:
- Use all samples for non-CPU metrics.
- Only use CPU samples if:
  - the pod is ready and was started more than 2 minutes ago, or
  - the pod is unready and its last readiness change happened more than 10s after it was started.
Reasoning behind this is in: https://docs.google.com/document/d/1UdtYedhmCxjaJIQi6hwJMY0eHQQKxlVD8lSHZC1BPOA/edit
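The fixed rule as a small Go predicate, a sketch with assumed names: the 2-minute and 10-second values are the defaults described above and are controlled by flags in the real controller.
```go
package main

import (
	"fmt"
	"time"
)

// useCPUSample says whether a pod's CPU sample should count: ready pods
// count once they are well past startup; unready pods count only if they
// once turned ready well after start, i.e. their current unreadiness is
// not just initialization noise.
func useCPUSample(ready bool, started, lastReadinessChange, now time.Time) bool {
	if ready {
		return now.Sub(started) > 2*time.Minute
	}
	return lastReadinessChange.Sub(started) > 10*time.Second
}

func main() {
	now := time.Now()
	fmt.Println(useCPUSample(true, now.Add(-time.Minute), now, now))                      // false: still initializing
	fmt.Println(useCPUSample(true, now.Add(-5*time.Minute), now, now))                    // true
	fmt.Println(useCPUSample(false, now.Add(-5*time.Minute), now.Add(-time.Minute), now)) // true
	fmt.Println(useCPUSample(false, now.Add(-time.Minute), now.Add(-time.Minute), now))   // false
}
```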
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
**Special notes for your reviewer**:
**Release note**:
```release-note
Replace the scale up forbidden window with disregarding CPU samples collected while pods were initializing.
```
The duration of the initialization taint on CPU samples and the window of initial readiness are controlled by flags.
Adding API violation exceptions, following the example of e50340ee23
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Remove incorrect comment
**What this PR does / why we need it**:
This code does not update the revision's labels, so the comment is incorrect:
```
// Update the revisions name and labels
clone.Name = ControllerRevisionName(parent.GetName(), hash)
ns := parent.GetNamespace()
created, err := rh.client.AppsV1().ControllerRevisions(ns).Create(clone)
```
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
NONE
**Release note**:
```release-note
NONE
```
/kind cleanup
/release-note-none
/sig apps
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add Labels to various OWNERS files
**What this PR does / why we need it**:
Will reduce the burden of manually adding labels. Information pulled
from:
https://github.com/kubernetes/community/blob/master/sigs.yaml
Change-Id: I17e661e37719f0bccf63e41347b628269cef7c8b
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67661, 67497, 66523, 67622, 67632). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Allow headless svc without ports to have endpoints
As cited in
https://github.com/kubernetes/dns/issues/174 - this is documented to
work, and I don't see why it shouldn't work. We allowed the definition
of headless services without ports, but apparently nobody tested it very
well.
Manually tested clusterIP services with no ports - validation error.
Manually tested services with negative ports - validation error.
New tests failed; their output was inspected and verified. They now pass.
xref https://github.com/kubernetes/dns/issues/174
**Release note**:
```release-note
Headless Services with no ports defined will now create Endpoints correctly, and appear in DNS.
```
After my previous changes, HPA wasn't behaving correctly in the following
situation:
- Pods use a lot of CPU during initialization and become ready right after they initialize,
- a scale up triggers,
- when the new pods become ready, HPA counts their usage (even though it's not related to any work that needs doing),
- another scale up triggers, even though the existing pods can handle the work without problems.