Automatic merge from submit-queue
Refactor cert utils into one pkg, add funcs from bootkube for kubeadm to use
**What this PR does / why we need it**:
We have ended up with a rather incomplete and fragmented collection of utils for handling certificates. It may be worth considering `cfssl` for doing all of these things, but for now there is some functionality that we need in `kubeadm` that we can borrow from bootkube. It makes sense to move the utils from bootkube into core, as discussed in #31221.
**Special notes for your reviewer**: I've taken the opportunity to review names of existing funcs and tried to make some improvements in that area (with help from @peterbourgon).
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
move registry packages for all API groups
This continues the pattern of `registry/<group>/resource` for our backing storage. This entire pull is nothing but moves. I'll reswizzle the actual storage next, but these are cargo-culted everywhere, so I want to lay this down early.
@sttts @ncdc
Automatic merge from submit-queue
simplify RC and SVC listers
Make the RC and SVC listers use the common list functions that more closely match client APIs, are consistent with other listers, and avoid unnecessary copies.
Spawn a pet set with a single replica and a simple pod. They will have
conflicting hostPort definitions and will be scheduled onto the same node.
As a result, the pet set pod, which is created after the simple pod, will be
in the Failed state. The pet set controller will try to re-create it. After
verifying that the pet set pod failed and was recreated at least once, we
remove the pod with the conflicting hostPort and wait until the pet set pod
is in the Running state.
Change-Id: I5903f5881f8606c696bd390df58b06ece33be88a
Automatic merge from submit-queue
controller: enhance timeout error message for Recreate deployments
Makes the error message from https://github.com/kubernetes/kubernetes/issues/29197 more obvious
@kubernetes/deployment
Prior to this, we would approve eviction as long as the current state of
the pods matched the budget. The new version requires that after the
eviction, the pods would still match the budget.
Also update tests to match.
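For illustration, a minimal sketch of the new check, with hypothetical names (the real controller evaluates the budget from the PodDisruptionBudget's status fields):
```
package main

import "fmt"

// canEvict reports whether the pods would still satisfy the disruption
// budget *after* the eviction, not merely before it. Hypothetical helper
// for illustration only.
func canEvict(currentHealthy, desiredHealthy int32) bool {
	return currentHealthy-1 >= desiredHealthy
}

func main() {
	fmt.Println(canEvict(3, 3)) // false: the eviction would violate the budget
	fmt.Println(canEvict(4, 3)) // true: one pod may be disrupted
}
```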
Automatic merge from submit-queue
[Controller Manager] Fix endpoint controller hot loop and use utilruntime.HandleError to replace glog.Errorf
**Why**:
Fix endpoint controller hot loop and use `utilruntime.HandleError` to replace `glog.Errorf`
**What**:
1. Fix endpoint controller hot loop in `pkg/controller/endpoint`
2. Fix endpoint controller hot loop in `contrib/mesos/pkg/service`
3. Sweep cases of `glog.Errorf` and use `utilruntime.HandleError` instead.
**Which issue this PR fixes**
Fixes #32843
Related issue is #30629
**Special notes for your reviewer**:
@deads2k @derekwaynecarr
The changes in `pkg/controller/endpoints_controller.go` and `contrib/mesos/pkg/service/endpoints_controller.go` are almost the same, except that `contrib/mesos/pkg/service/endpoints_controller.go` does not pass `podInformer` as a parameter of `NewEndpointController()`.
So I didn't wait for `podStoreSynced` before `syncService()` (just left it as it was). Will that lead to a problem?
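For reference, the usual cure for this kind of hot loop is to requeue failing keys through a rate-limited workqueue rather than re-adding them immediately. A minimal sketch, assuming the standard workqueue package (the import path varies by release):
```
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// syncService stands in for the real sync handler.
func syncService(key string) error { fmt.Println("syncing", key); return nil }

func main() {
	// The rate limiter backs off on repeated failures for the same key,
	// so a persistently bad item cannot drive the controller into a hot loop.
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	queue.Add("default/my-service")

	key, quit := queue.Get()
	if quit {
		return
	}
	defer queue.Done(key)

	if err := syncService(key.(string)); err != nil {
		queue.AddRateLimited(key) // retry later with backoff
		return
	}
	queue.Forget(key) // clear the per-key backoff state on success
}
```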
Automatic merge from submit-queue
change factorization of listers to make them easier to add
`Listers` have a tremendous amount of duplicate code. This factors that out.
@smarterclayton ptal.
Automatic merge from submit-queue
convert daemonset controller to shared informers
Convert the daemonset controller completely to `SharedInformers` for its list/watch resources.
@kubernetes/rh-cluster-infra @ncdc
Automatic merge from submit-queue
Switch ScheduledJob controller to use clientset
**What this PR does / why we need it**:
This is part of #25442. I've applied here the same fix I've applied in the manual client in #29187, see the 1st commit for that (@caesarxuchao we've talked about it in #29856).
@deads2k as promised
@janetkuo ptal
Automatic merge from submit-queue
Add the uid in a delete event to the absentOwnerCache
This is a small optimization to further reduce the traffic sent by the GC.
In #31167, GC caches the non-existent owners when it processes the dirtyQueue. As discovered in #32571, there is still a small inefficiency: because there are multiple goroutines processing the dirtyQueue, many of them might send a GET to the apiserver before the cache gets populated.
This PR populates the cache when GC observes an object gets deleted, which happens before the processing of the dirtyQueue, so it avoids the simultaneous GET sent by the GC workers.
cc @lavalamp
Automatic merge from submit-queue
Do not report warning event when an unknown deleter is requested
When Kubernetes does not have a plugin to delete a PV, it should wait for
either an external deleter or a storage admin to delete the volume instead of
throwing an error.
This is the same approach as in #32077
@kubernetes/sig-storage
Previously the github.com/robfig/cron library did not allow passing a cron spec
without seconds. The previous commit updates the library, which has an additional
method, ParseStandard, that follows the standard cron spec, i.e. minute,
hour, day of month, month, day of week.
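A quick usage sketch of the new entry point, assuming the library's `Schedule` interface:
```
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron"
)

func main() {
	// ParseStandard accepts the classic five-field spec:
	// minute, hour, day of month, month, day of week.
	sched, err := cron.ParseStandard("30 6 * * 1") // 06:30 every Monday
	if err != nil {
		panic(err)
	}
	fmt.Println("next run:", sched.Next(time.Now()))
}
```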
Automatic merge from submit-queue
Recombine the condition for the "shouldScale" function
This PR recombines the condition for the `shouldScale` function, abstracting out the common condition (`hpa.Status.LastScaleTime == nil`).
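A self-contained sketch of what the recombined condition might look like (hypothetical shape and window constants; the real function lives in the HPA controller):
```
package main

import (
	"fmt"
	"time"
)

const (
	upscaleForbiddenWindow   = 3 * time.Minute
	downscaleForbiddenWindow = 5 * time.Minute
)

// shouldScale checks the common condition (no LastScaleTime recorded) once,
// instead of repeating it in the scale-up and scale-down branches.
func shouldScale(lastScaleTime *time.Time, currentReplicas, desiredReplicas int32, now time.Time) bool {
	if desiredReplicas == currentReplicas {
		return false
	}
	if lastScaleTime == nil { // the hoisted common condition
		return true
	}
	if desiredReplicas > currentReplicas {
		return lastScaleTime.Add(upscaleForbiddenWindow).Before(now)
	}
	return lastScaleTime.Add(downscaleForbiddenWindow).Before(now)
}

func main() {
	fmt.Println(shouldScale(nil, 2, 4, time.Now())) // true: never scaled before
}
```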
Two problems:
1. Get always uses the Existing nodes slice, so you will for sure miss any
updated data.
2. Each Update duplicates the node entry in the UpdatedNodes slice.
For the 1st, try to find the node in the UpdatedNodes slice (same as for List).
For the 2nd, append only if there is no node with the same name as the updated
one; if there is, just replace the object.
Change-Id: I9ef1cca2788ba946eee37fa1b037c124ad76074c
Automatic merge from submit-queue
Namespace Controller handles items with finalizers gracefully
This PR does the following:
1. ensures the "orphan" finalizer is not added to items during DELETE COLLECTION calls
2. does not treat presence of a finalizer as an unexpected error condition.
The 15s wait should only happen when finalizers not added by GC are used.
I am not aware of any finalizer like that at this time.
Fixes https://github.com/kubernetes/kubernetes/issues/32519
Automatic merge from submit-queue
update error handling for daemoncontroller
Updates the DaemonSet controller to cleanly requeue with rate limiting on errors, make use of `utilruntime.HandleError` consistently, and wait for preconditions before doing work.
@ncdc @liggitt @sttts My plan is to use this one as an example of how to handle requeuing, preconditions, and processing error handling.
@foxish fyi
related to https://github.com/kubernetes/kubernetes/issues/30629
Automatic merge from submit-queue
Fix race condition in updating attached volume between master and node
This PR tries to fix issue #29324. The cause of this issue is a race
condition that happens when marking volumes as attached for node status. This
PR tries to clean up the logic of when and where to mark volumes as
attached/detached. Basically the workflow is as follows:
1. When a volume is attached successfully, the volume and node info is
added to nodesToUpdateStatusFor to mark the volume as attached to the
node.
2. When a detach request comes in, it will check whether it is safe to
detach now. If the check passes, remove the volume from volumesToReportAsAttached
to indicate the volume is no longer considered attached.
Afterwards, the reconciler tries to update the node status and trigger the detach
operation. If any of these operations fails, the volume is added back to
the volumesToReportAsAttached list, showing that it is still attached.
These steps should make sure that kubelet gets the right (though possibly
outdated) information about which volumes are attached. It also
guarantees that if a detach operation is pending, kubelet will not
trigger any mount operations.
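A simplified sketch of the bookkeeping described above, with hypothetical method names (the real cache lives in the attach/detach controller's actual state of world):
```
package main

import "fmt"

type attachCache struct {
	// node name -> set of volumes currently reported as attached
	volumesToReportAsAttached map[string]map[string]bool
}

// removeVolume is called once a detach request passes the safety check, so
// kubelet immediately stops seeing the volume as attached.
func (c *attachCache) removeVolume(node, volume string) {
	delete(c.volumesToReportAsAttached[node], volume)
}

// addVolume re-adds the volume when the node-status update or the detach
// operation fails, so kubelet keeps seeing it as attached.
func (c *attachCache) addVolume(node, volume string) {
	if c.volumesToReportAsAttached[node] == nil {
		c.volumesToReportAsAttached[node] = map[string]bool{}
	}
	c.volumesToReportAsAttached[node][volume] = true
}

func main() {
	c := &attachCache{volumesToReportAsAttached: map[string]map[string]bool{}}
	c.addVolume("node-1", "vol-1")    // step 1: attach succeeded
	c.removeVolume("node-1", "vol-1") // step 2: safe to detach
	c.addVolume("node-1", "vol-1")    // detach failed: report it again
	fmt.Println(c.volumesToReportAsAttached)
}
```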
Automatic merge from submit-queue
Use PV shared informer in PV controller
Use the PV shared informer, addressing (partially) https://github.com/kubernetes/kubernetes/issues/26247 . Using the PVC shared informer is not so simple because sometimes the controller wants to `Requeue` and...
Automatic merge from submit-queue
Change the eviction metric type and fix rate-limited-timed-queue
People who know better convinced me that an aggregate counter is better than a gauge for the number-of-evictions metric. @Q-Lee
Per discussion with @pwittrock I'm adding a v1.4 label and a cherrypick-candidate label. This is a slightly bigger change than I thought, but it fixes a bug in the eviction logic, so it's also important.
cc @derekwaynecarr @smarterclayton @timothysc
This allows users to diagnose what's wrong with the recycler. Recycler pods are
started automatically with a cryptic name, and they are deleted immediately
when they finish.
kubectl describe pods will show:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
59m 59m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
53m 53m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
46m 46m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
40m 40m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
33m 33m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
27m 27m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
1h 3m 75 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
1h 3m 76 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Pod was active on the node longer than specified deadline
1h 1m 12 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
20m 1m 4 {persistentvolume-controller } Warning RecyclerPod (events with common reason combined)
These steps were necessary:
- added event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion
- pass all these events through volume plugins to volume controller
- rework volume.RecycleVolumeByWatchingPodUntilCompletion unit tests to a table
(too much copy-paste)
- fix all unit tests along the way
Automatic merge from submit-queue
add selfsubjectaccessreview API
Exposes the REST API for self subject access reviews. This allows a user to see whether or not they can perform a particular action.
@kubernetes/sig-auth
With StorageClass.Provisioner == <unknown plugin>, we should wait for
either an external provisioner or a volume admin to provide a PV for a claim
instead of reporting an error.
Fixes #31723
Automatic merge from submit-queue
Move StorageClass to a storage group
We discussed the pros and cons in sig-api-machinery yesterday. Choosing a particular group name means that clients (including our internal code) require less work and re-swizzling to handle promotions between versions. Even if you choose a group you end up not liking, the amount of work remains the same as the incubator work case: you move the affected kind, resource, and storage.
This moves the `StorageClass` type to the `storage.k8s.io` group (named for consistency with authentication, authorization, rbac, and imagepolicy). There are two commits, one for manual changes and one for generated code.
Automatic merge from submit-queue
fix log message to include ds name
The pod name is never set because newPod is created a couple of lines up without a name. Instead, log the name and namespace of the ds the pod is created from.
Also bump the log level, because this loop gets hit fairly often and does not indicate a bug.
Automatic merge from submit-queue
Sleep between NodeStatus update retries
Just a thing I found when looking into other problems.
This is a pretty much no-risk change fixing wrong behavior. Do you think it should go in 1.4? @pwittrock
The node controller's internalPodInformer will block the main thread
if it is not started as a goroutine. This patch fixes this
by running internalPodInformer as a goroutine.
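Schematically, the fix amounts to the following (a toy informer stands in for the real one):
```
package main

import "time"

type informer struct{}

// Run blocks until stopCh is closed, like a framework controller's Run.
func (i *informer) Run(stopCh <-chan struct{}) { <-stopCh }

func main() {
	stopCh := make(chan struct{})
	inf := &informer{}

	// Calling inf.Run(stopCh) inline would block the main thread forever;
	// running it as a goroutine lets the controller's main loop proceed.
	go inf.Run(stopCh)

	// ... the controller's main loop would run here ...
	time.Sleep(10 * time.Millisecond)
	close(stopCh)
}
```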
Automatic merge from submit-queue
Namespace controller deletes pods last
I think this fixes https://github.com/kubernetes/kubernetes/issues/29308 or at least helps further reduce the incidence.
This PR changes the order in which the namespace controller prioritizes resources for deletion. It deletes all other resources before deleting pods. The rationale for this change is to broadcast deletion of the controllers that spawn pods first, rather than trip those controllers up into thinking they should spawn more pods, which would increase the risk of races with the `NamespaceLifecycle` admission plug-in. Many of those controllers are also not rate-limited in the face of rejection, so rather than promote a situation where they are rejected, we promote a situation that removes those things first.
Automatic merge from submit-queue
Post event message for volume attachment
This PR adds an event message when attaching a volume fails, to help
users debug. For detach failures, we may address this in a different PR since
it requires more data structure changes.
Automatic merge from submit-queue
Revert "daemonset controller should respect taints"
Reverts kubernetes/kubernetes#31020
We will be unreverting with some modifications after v1.4.
cc @pwittrock @davidopp
If a pet was evicted by the kubelet, it will be stuck in this state forever.
By analogy with a regular pod, we need to re-create the pet so that it will
be re-scheduled to another node. So, in order to re-create the pet
and preserve consistent naming, we delete it in the petset controller
and create it again after that.
Change-Id: Ib98bf7f34b3f2ab1582b9de34b5f4c5f84cd5215
Automatic merge from submit-queue
add names for workqueues to gather controller latency/depth metrics
Adding names to the workqueues used by controllers allows the automatic collection of depth, rate, and latency metrics for those controllers. These are useful for diagnosing various "slow controller" cases.
@kubernetes/rh-cluster-infra
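A minimal sketch of how a controller opts in, assuming the named-queue constructor:
```
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// The name keys the automatically registered depth, rate, and latency
	// metrics for this queue, so each controller can be monitored separately.
	queue := workqueue.NewNamedRateLimitingQueue(
		workqueue.DefaultControllerRateLimiter(),
		"daemonset",
	)
	queue.Add("kube-system/fluentd")
	fmt.Println("queue depth:", queue.Len())
}
```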
Automatic merge from submit-queue
Use updated deployment after rollback
@kubernetes/deployment
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
persist services that need to be retried in the service controller cache
Fixes an issue reported by @anguslees; more detail in #25189.
Automatic merge from submit-queue
[GarbageCollector] add absent owner cache
**What this PR does / why we need it**:
Reduces the requests sent to the API server by the garbage collector when checking whether an owner exists.
**Which issue this PR fixes**: fixes #26120
Currently when processing an item in the dirtyQueue, the garbage collector issues GET to check if any of its owners exist. If the owner is a replication controller with 1000 pods, the garbage collector sends a GET for the RC 1000 times. This PR caches the owner's UID if it does not exist according to the API server. This cuts 1/3 of the garbage collection time of the density test in the gce-500 and gce-scale, where the QPS is the bottleneck.
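Schematically, each worker consults the cache before issuing a GET; a sketch with hypothetical names (the real cache is bounded):
```
package main

import "fmt"

type absentOwnerCache struct{ absent map[string]bool }

func (c *absentOwnerCache) Has(uid string) bool { return c.absent[uid] }
func (c *absentOwnerCache) Add(uid string)      { c.absent[uid] = true }

// ownerExists skips the GET when the owner's UID is already known to be
// absent, e.g. because the GC observed its delete event.
func ownerExists(c *absentOwnerCache, ownerUID string, apiGet func(string) bool) bool {
	if c.Has(ownerUID) {
		return false // cache hit: no request to the API server
	}
	if !apiGet(ownerUID) {
		c.Add(ownerUID) // remember the absence for the other workers
		return false
	}
	return true
}

func main() {
	c := &absentOwnerCache{absent: map[string]bool{}}
	get := func(uid string) bool { fmt.Println("GET", uid); return false }
	ownerExists(c, "rc-uid-42", get) // one GET, result cached
	ownerExists(c, "rc-uid-42", get) // served from the cache, no GET
}
```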
1. When overlapping deployments are discovered, annotate them
2. Expose those overlapping annotations as warnings in kubectl describe
3. Only respect the earliest updated one (skip syncing all other overlapping deployments)
4. Use indexer instead of store for deployment lister
Automatic merge from submit-queue
Add Cluster field in ObjectMeta
There will be no sub-rs, but this adds a `Cluster` field to ObjectMeta (for all objects).
"To distinguish the object at the federation level from its constituents at the cluster level we will add a "Cluster" field to the metadata of all objects (where the federation itself will also have a cluster identifier). That way it is possible to list, interact with, and distinguish between the objects either at the federation level or at the individual cluster level based on the cluster identifier."
@quinton-hoole @nikhiljindal @deepak-vij @mfanjie @huangyuqi
Automatic merge from submit-queue
Add readyReplicas to replica sets
@bgrant0607 for the api changes
@bprashanth for the controllers changes
@deads2k fyi
Automatic merge from submit-queue
Don't bind pre-bound pvc & pv if size request not satisfied
As discussed briefly in https://github.com/kubernetes/kubernetes/pull/30522, volume size ought to be verified before binding a PV & PVC, regardless of what's in the PV's claimRef. @thockin
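A sketch of the size check, assuming the standard resource.Quantity API (import path varies by release):
```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	pvSize := resource.MustParse("5Gi")     // capacity of the pre-bound PV
	requested := resource.MustParse("10Gi") // the PVC's storage request

	// Even when the PV's claimRef already points at this claim, the bind
	// must be refused if the volume is smaller than the request.
	if pvSize.Cmp(requested) < 0 {
		fmt.Println("refusing to bind: PV is smaller than the claim's request")
	}
}
```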
Automatic merge from submit-queue
Followup fixes for disruption controller.
Part of #12611.
- Record an event when a pod does not have exactly 1 controller.
- Add TODO comment suggesting we simplify the two cases: integer and percentage.
Automatic merge from submit-queue
Bump heapster version
Bump heapster version to v1.2.0-beta.1.
Migrate metrics tests and HPA to use List objects introduced in the new version.
Automatic merge from submit-queue
Avoid failure message flush log when node no longer exist
When a node is deleted, the attach-detach controller cache may contain stale
information about this node, and updating the node status fails in the reconciler
loop. This message easily floods the log file. This PR is just a quick
fix for this issue. A more complete fix, including making the controller cache
up to date, will be addressed in another PR.
Automatic merge from submit-queue
Node controller deletePod returns true if there are pods pending deletion
Fixes https://github.com/kubernetes/kubernetes/issues/30536
If a node had a single pod in terminating state, and that node no longer reported healthy, the pod was never deleted by the node controller because it believed there were no pods remaining.
@smarterclayton @ncdc
Automatic merge from submit-queue
prevent RC hotloop on denied pods
If a pod is rejected during creation, the RC controller hot-loops. This can happen most frequently due to insufficient quota.
Automatic merge from submit-queue
Add Events for operation_executor to show the status of mounts, failed or successful, in describe events
Fixes #27590
@saad-ali @pmorie @erinboyd
After talking with @pmorie last week about the above issue, I decided to poke around and see if I could remedy it. The refactoring broke my previous UXP merged PRs that correctly showed failed mount errors in the describe events. I'm not sure I implemented this correctly, but it tested out and seems to be working; let me know what I missed or whether this is not the correct approach.
```
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-bb-pod1 to 127.0.0.1
44s 44s 1 {kubelet 127.0.0.1} Warning FailedMount Unable to mount volumes for pod "nfs-bb-pod1_default(a94f64f1-37c9-11e6-9aa5-52540073d346)": timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
44s 44s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
38s 38s 1 {kubelet } Warning FailedMount Unable to mount volumes for pod "a94f64f1-37c9-11e6-9aa5-52540073d346": Mount failed: exit status 32
Mounting arguments: nfs1.rhs:/opt/data99 /var/lib/kubelet/pods/a94f64f1-37c9-11e6-9aa5-52540073d346/volumes/kubernetes.io~nfs/nfsvol nfs []
Output: mount.nfs: Connection timed out
Resolution hint: Check and make sure the NFS Server exists (ensure that correct IPAddress/Hostname was given) and is available/reachable.
Also make sure firewall ports are open on both client and NFS Server (2049 v4 and 2049, 20048 and 111 for v3).
Use commands telnet <nfs server> <port> and showmount <nfs server> to help test connectivity.
```
Automatic merge from submit-queue
Add cluster health metrics to NodeController
Follow up of #28832
This adds metrics to monitor cluster/zone status.
cc @alex-mohr @fabioy @wojtek-t @Q-Lee
Automatic merge from submit-queue
Fix memory leak in gc
ref #30759
GC had a memory leak. The work queue item is never deleted.
I'm still fighting with my kubemark cluster to get statistics after this fix.
@wojtek-t @lavalamp
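For context, a workqueue item is retained in the queue's internal bookkeeping until the worker calls Done; a minimal sketch of the required pattern:
```
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.New()
	queue.Add("object-uid")

	item, quit := queue.Get()
	if quit {
		return
	}
	fmt.Println("processing", item)

	// Without this call the item stays in the queue's internal "processing"
	// set forever -- the leak described above.
	queue.Done(item)
}
```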
When a node is deleted, the attach-detach controller cache may contain stale
information about this node, and updating the node status fails in the reconciler
loop. But one node-update failure should not block updating other nodes.
Also, the warning message easily floods the log file. This PR is just a quick
fix for this issue. A more complete fix, including making sure the controller cache
is up to date, will be addressed in another PR.
Automatic merge from submit-queue
Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass
Implemented according to PR #26908. There are several patches in this PR with one huge code regen inside.
* Please review the API changes (the first patch) carefully, sometimes I don't know what the code is doing...
* `PV.Spec.Class` and `PVC.Spec.Class` is not implemented, use annotation `volume.alpha.kubernetes.io/storage-class`
* See e2e test and integration test changes - Kubernetes won't provision a thing without explicit configuration of at least one `StorageClass` instance!
* Multiple provisioning volume plugins can coexist together, e.g. HostPath and AWS EBS. This is important for Gluster and RBD provisioners in #25026
* Contradicting the proposal, `claim.Selector` and `volume.alpha.kubernetes.io/storage-class` annotation are **not** mutually exclusive. They're both used for matching existing PVs. However, only `volume.alpha.kubernetes.io/storage-class` is used for provisioning, configuration of provisioning with `Selector` is left for (near) future.
* Documentation is missing. Can please someone write some while I am out?
For now, AWS volume plugin accepts classes with these parameters:
```
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/aws-ebs
provisionerParameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: 10
```
* parameters are case-insensitive
* `type`: `io1`, `gp2`, `sc1`, `st1`. See AWS docs for details
* `iopsPerGB`: only for `io1` volumes. I/O operations per second per GiB. AWS volume plugin multiplies this with size of requested volume to compute IOPS of the volume and caps it at 20 000 IOPS (maximum supported by AWS, see AWS docs).
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (`gp2` in the same zone as in 1.3).
GCE:
```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/gce-pd
provisionerParameters:
  type: pd-standard
  zone: us-central1-a
```
* `type`: `pd-standard` or `pd-ssd`
* `zone`: GCE zone
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (SSD in the same zone as in 1.3 ?).
No OpenStack/Cinder yet
@kubernetes/sig-storage
Automatic merge from submit-queue
Continue on #30774: Change podNamespacer API
Continues #30774 (credit to @wojtek-t); ref #30759
I just fixed a test and converted IsActivePod to operate on *Pod.
Automatic merge from submit-queue
Expose flags for new NodeEviction logic in NodeController
Fixes #28832
Last PR from the NodeController NodeEviction logic series.
cc @davidopp @lavalamp @mml
Automatic merge from submit-queue
Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0
Older versions of the kubelet didn't know how to reconcile pod.Status, so the nodecontroller would mark pods NotReady on a netsplit; if the partition recovered in < 5m, the pods would never get marked Ready, resulting in NotReady endpoints indefinitely (till kubelet restart/pod recreate, etc.).
Automatic merge from submit-queue
Fix default resource limits (node allocatable) for downward api volumes and env vars
@kubernetes/rh-cluster-infra @pmorie @derekwaynecarr
Automatic merge from submit-queue
Add NodeName to EndpointAddress object
Adding a new string type `nodeName` to api.EndpointAddress.
We could also do *ObjectReference to the api.Node object instead, which would be more precise for the future.
```
type ObjectReference struct {
	Kind            string    `json:"kind,omitempty"`
	Namespace       string    `json:"namespace,omitempty"`
	Name            string    `json:"name,omitempty"`
	UID             types.UID `json:"uid,omitempty"`
	APIVersion      string    `json:"apiVersion,omitempty"`
	ResourceVersion string    `json:"resourceVersion,omitempty"`

	// Optional. If referring to a piece of an object instead of an entire object, this string
	// should contain information to identify the sub-object. For example, if the object
	// reference is to a container within a pod, this would take on a value like:
	// "spec.containers{name}" (where "name" refers to the name of the container that triggered
	// the event) or if no container name is specified "spec.containers[2]" (container with
	// index 2 in this pod). This syntax is chosen only to have some well-defined way of
	// referencing a part of an object.
	// TODO: this design is not final and this field is subject to change in the future.
	FieldPath string `json:"fieldPath,omitempty"`
}
```
Automatic merge from submit-queue
fix node controller event uid issue
Fixes #29289. @smarterclayton ptal. This is not a very elegant fix; if we can use nodeName in the log, maybe we can set timedValue.Value to node.UID.
Automatic merge from submit-queue
Add volume reconstruct/cleanup logic in kubelet volume manager
Currently kubelet volume management works on the concept of a desired
and an actual world of states. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the cache of those two worlds is
gone. Although the desired world can be recovered through the apiserver, the
actual world cannot, which may mean some volumes cannot be cleaned
up if their information has been deleted from the apiserver. This change adds
reconstruction of the actual world by reading the pod directories from
disk. The reconstructed volume information is added to both the desired
world and the actual world if it cannot be found in either. The rest of the
logic is the same as before: the desired world populator may clean up
the volume entry if it is no longer in the apiserver, and then the volume
manager should invoke unmount to clean it up.
Fixes https://github.com/kubernetes/kubernetes/issues/27653
Automatic merge from submit-queue
Move new etcd storage (low level storage) into cacher
As part of the effort for #29888, we are pushing this forward:
What?
- It changes creating an etcd storage.Interface impl into creating a config.
- When creating the cacher storage (StorageWithCacher), it passes the config created above and creates the new etcd storage inside.
Why?
- We want to expose the information of the (etcd) kv client to the cacher. The cacher storage uses this information to talk to the remote storage.
Automatic merge from submit-queue
Endpoint controller logs errors during normal cluster behavior
The endpoint controller logs an error when it's forbidden from creating new endpoints during namespace termination. This is normal cluster behavior and therefore should not be logged. This confuses operators administrating the cluster.
Updated to log at a lower level in response to a forbidden message when performing a create operation. In the case of an error on the API server side of the house, I continue to requeue the key. It should be ignored in a future syncService call once the service is deleted as part of namespace termination.
See https://bugzilla.redhat.com/show_bug.cgi?id=1347425
/cc @kubernetes/rh-cluster-infra
Automatic merge from submit-queue
Name jobs created by sj deterministically
```release-note
Name the job created by scheduledjob (sj) deterministically with sj's name and a hash of the job's scheduled time.
```
@erictune @soltysh
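A sketch of the idea with a hypothetical helper (the exact hash used upstream may differ):
```
package main

import (
	"fmt"
	"hash/fnv"
	"time"
)

// jobNameFor derives a deterministic child-job name from the scheduledjob's
// name and the job's scheduled time, so retries of the same scheduled run
// always produce the same name.
func jobNameFor(sjName string, scheduledTime time.Time) string {
	h := fnv.New32a()
	fmt.Fprintf(h, "%d", scheduledTime.Unix())
	return fmt.Sprintf("%s-%d", sjName, h.Sum32())
}

func main() {
	t := time.Date(2016, 9, 1, 6, 30, 0, 0, time.UTC)
	fmt.Println(jobNameFor("backup", t)) // same inputs, same name
}
```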
Automatic merge from submit-queue
the observed usage should match those that have hard constraints
In the sync process, the quota will be replenished and the new observed usage will be summed from each evaluator. If the previousUsed set is not cleared, the new usage will be dirty and some unused resources may remain in it. See the code below:
newUsage = quota.Mask(newUsage, matchedResources)
for key, value := range newUsage {
    usage.Status.Used[key] = value
}
So I think we should not set the value from previousUsed here.
Automatic merge from submit-queue
[GarbageCollector] measure latency
First commit is #27600.
In e2e tests, I measure the average time an item spends in the eventQueue (~1.5 ms), dirtyQueue (~13 ms), and orphanQueue (~37 ms). There is no stress test in e2e yet, so the numbers may not be useful.
Automatic merge from submit-queue
add metrics for workqueues
Adds prometheus metrics to work queues and enables them for the resourcequota controller. It would be easy to add this to all other workqueue based controllers and gather basic responsiveness metrics.
@kubernetes/rh-cluster-infra helps debug quota controller responsiveness problems.
Automatic merge from submit-queue
pkg/controller/garbagecollector: simplify mutexes.
pkg/controller/garbagecollector: simplified synchronization and made idiomatic.
Similar to #29598, we can rely on the zero-value construction behavior
to embed `sync.Mutex` into parent structs.
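The idiom in question: a `sync.Mutex`'s zero value is ready to use, so it can be embedded directly with no constructor:
```
package main

import "sync"

// registry embeds sync.Mutex; the struct's zero value is immediately
// usable, with no constructor needed to initialize the lock.
type registry struct {
	sync.Mutex
	items map[string]bool
}

func (r *registry) add(name string) {
	r.Lock()
	defer r.Unlock()
	if r.items == nil {
		r.items = map[string]bool{}
	}
	r.items[name] = true
}

func main() {
	var r registry // zero value: the lock is already valid
	r.add("pod-1")
}
```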
Automatic merge from submit-queue
HPA: ignore scale targets whose replica count is 0
Disable HPA when the user (or another component) explicitly sets the replicas to 0.
Fixes #28603
@kubernetes/autoscaling @fgrzadkowski @kubernetes/rh-cluster-infra @smarterclayton @ncdc
Also, fix unit tests to use the same claim and volume sizes in most of the
tests, where we don't test matching based on size, and to test for a specific
size when we actually do test the matching.
Automatic merge from submit-queue
Run goimport for the whole repo
While removing GOMAXPROC and running goimports, I noticed quite a lot of other files also needed goimports formatting. Didn't commit `*.generated.go`, `*.deepcopy.go`, or files in `vendor`.
This is more for testing if it builds.
The only strange thing here is the gopkg.in/gcfg.v1 => github.com/scalingdata/gcfg replace.
cc @jfrazelle @thockin
Automatic merge from submit-queue
optimize conditions of ServiceReplenishmentUpdateFunc to replenish service
Originally, the replenishQuota method ignored its third parameter (the old object) even when callers passed it; I think the function was not efficient or precise. I now use the third parameter to get MatchResources, which is more exact. For example, if the old pod was quota-tracked and the new one is not, replenishQuota only considers the resource usage of the old pod. If the third parameter is nil, the process is the same as before.
Automatic merge from submit-queue
refactor quota calculation for re-use
Refactors quota calculation to allow reuse. This will allow us to do "punch through" calculation inside of admission if a particular quota needs usage stats and allows downstream re-use by moving calculation closer to the evaluators and separating "needs calculation" logic from "do calculation".
@derekwaynecarr
Automatic merge from submit-queue
Rewrite service controller to apply best controller pattern
This PR is a long-term solution for #21625:
We apply the same pattern as the replication controller to the service controller to avoid potential processing-order messes in the service controller. The change includes:
1. introduce an informer controller to watch service changes from kube-apiserver, so that every change to the same service is kept in serviceStore as the only element.
2. put the service name to be processed into a working queue
3. when processing a service, always get the info from serviceStore to ensure it is up to date
4. keep the retry mechanism: sleep for a certain interval and add it back to the queue.
5. remove the logic of reading the last service info from kube-apiserver before processing the LB info, as we trust the info from serviceStore.
The UT has passed, and a manual test passed after I hardcoded the cloud provider as FakeCloud; however, I was not able to boot a k8s cluster with any available cloudprovider, so the e2e test is not done.
Submitting this PR first for review and for triggering an e2e test.
Automatic merge from submit-queue
Fix deployment e2e test: waitDeploymentStatus should error when entering an invalid state
Follow up #28162
1. We should check that max unavailable and max surge aren't violated at all times in e2e tests (didn't check this in the deployment scaled rollout yet, but we should wait for it to become valid and then continue doing the check until it finishes)
2. Fix some minor bugs in e2e tests
@kubernetes/deployment
Automatic merge from submit-queue
Stabilize volume unit tests by waiting for exact state
Wait for a specific final state instead of waiting for a specific number of
operations in controller unit tests. The tests are more readable and will survive
random goroutine ordering (the PV and PVC controllers each have their own
goroutine).
@kubernetes/sig-storage
Automatic merge from submit-queue
add union registry for quota
Adds the ability to combine multiple quota registries together. Kube needs this for other types.
@derekwaynecarr
Automatic merge from submit-queue
Error out when any RS has more available pods than its spec replicas
Fixes #29559 (hopefully; if not, the bot will open new issues for us)
@kubernetes/deployment
Automatic merge from submit-queue
pkg/various: plug leaky time.New{Timer,Ticker}s
According to the documentation for Go package time, `time.Ticker` and
`time.Timer` are uncollectable by garbage collector finalizers. They
leak until otherwise stopped. This commit ensures that all remaining
instances are stopped upon departure from their relative scopes.
Similar efforts were incrementally done in #29439 and #29114.
```release-note
* pkg/various: plugged various time.Ticker and time.Timer leaks.
```
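The pattern applied throughout: stop the ticker on every exit path from its scope, typically with `defer`:
```
package main

import (
	"fmt"
	"time"
)

func poll(stopCh <-chan struct{}) {
	t := time.NewTicker(time.Second)
	defer t.Stop() // without this, the ticker is never reclaimed

	for {
		select {
		case <-t.C:
			fmt.Println("tick")
		case <-stopCh:
			return // Stop runs via defer on every exit path
		}
	}
}

func main() {
	stopCh := make(chan struct{})
	go poll(stopCh)
	time.Sleep(1500 * time.Millisecond)
	close(stopCh)
}
```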
Automatic merge from submit-queue
pkg/controller/node/nodecontroller: simplify mutex
Similar to #29598, we can rely on the zero-value construction behavior
to embed `sync.Mutex` into parent structs.
/CC: @saad-ali
Automatic merge from submit-queue
make the resource prefix in etcd configurable for cohabitation
This looks big, but it's not as bad as it seems.
When you have different resources cohabiting, the resource name used for the etcd directory needs to be configurable. HPA in two different groups worked fine before. Now we're looking at something like RC<->RS. They normally store into two different etcd directories. This code allows them to be configured to store into the same location.
To maintain consistency across all resources, I allowed the `StorageFactory` to indicate which `ResourcePrefix` should be used inside `RESTOptions` which already contains storage information.
@lavalamp affects cohabitation.
@smarterclayton @mfojtik prereq for our rc<->rs and d<->dc story.
For non-attachable volumes, do not call GetVolumeName on the plugin and instead
generate a unique name based on the identity of the pod and the name of the volume
within the pod.
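Schematically, with a hypothetical helper (the real name-generation code lives in the volume manager):
```
package main

import "fmt"

// uniqueVolumeName builds a name for a non-attachable volume from the pod's
// identity and the volume's name within the pod spec, instead of asking the
// plugin's GetVolumeName. Hypothetical helper for illustration.
func uniqueVolumeName(pluginName, podUID, volName string) string {
	return fmt.Sprintf("%s/%s-%s", pluginName, podUID, volName)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/empty-dir",
		"a94f64f1-37c9-11e6", "cache"))
}
```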
Automatic merge from submit-queue
Add an Azure CloudProvider Implementation
This PR adds `Azure` as a cloudprovider provider for Kubernetes. It specifically adds support for native pod networking (via Azure User Defined Routes) and L4 Load Balancing (via Azure Load Balancers).
I did have to add `clusterName` as a parameter to the `LoadBalancers` methods. This is because Azure only allows one "LoadBalancer" object per set of backend machines. This means a single "LoadBalancer" object must be shared across the cluster. The "LoadBalancer" is named via the `cluster-name` parameter passed to `kube-controller-manager` so as to enable multiple clusters per resource group if the user desires such a configuration.
There are few things that I'm a bit unsure about:
1. The implementation of the `Instances` interface. It's not extensively documented, it's not really clear what the different functions are used for, and my questions on the ML didn't get an answer.
2. Counter to the comments on the `LoadBalancers` Interface, I modify the `api.Service` object in `EnsureLoadBalancerDeleted`, but not with the intention of affecting Kube's view of the Service. I simply do it so that I can remove the `Port`s on the `Service` object and then re-use my reconciliation logic that can handle removing stale/deleted Ports.
3. The logging is a bit verbose. I'm looking for guidance on the appropriate log level to use for the chattier bits.
Due to the (current) lack of Instance Metadata Service and lack of Virtual Machine Identity in Azure, the user is required to do a few things to opt-in to this provider. These things are called-out as they are in contrast to AWS/GCE:
1. The user must provision an Azure Active Directory ServicePrincipal with `Contributor` level access to the resource group that the cluster is deployed in. This creation process is documented [by Hashicorp](https://www.packer.io/docs/builders/azure-setup.html) or [on the MSDN Blog](https://blogs.msdn.microsoft.com/arsen/2016/05/11/how-to-create-and-test-azure-service-principal-using-azure-cli/).
2. The user must place a JSON file somewhere on each Node that conforms to the `AzureConfig` struct defined in `azure.go`. (This is automatically done in the Azure flavor of [Kubernetes-Anywhere](https://github.com/kubernetes/kubernetes-anywhere).)
3. The user must specify `--cloud-config=/path/to/azure.json` as an option to `kube-apiserver` and `kube-controller-manager` similarly to how the user would need to pass `--cloud-provider=azure`.
I've been running approximately this code for a month and a half. I only encountered one bug which has since been fixed and covered by a unit test. I've just deployed a new cluster (and a Type=LoadBalancer nginx Service) using this code (via `kubernetes-anywhere`) and have posted [the `kube-controller-manager` logs](https://gist.github.com/colemickens/1bf6a26e7ef9484a72a30b1fcf9fc3cb) for anyone who is interested in seeing the logs of the logic.
If you're interested in this PR, you can use the instructions in my [`azure-kubernetes-demo` repository](https://github.com/colemickens/azure-kubernetes-demo) to deploy a cluster with minimal effort via [`kubernetes-anywhere`](https://github.com/kubernetes/kubernetes-anywhere). (There is currently [a pending PR in `kubernetes-anywhere` that is needed](https://github.com/kubernetes/kubernetes-anywhere/pull/172) in conjunction with this PR). I also have a pre-built `hyperkube` image: `docker.io/colemickens/hyperkube-amd64:v1.4.0-alpha.0-azure`, which will be kept in sync with the branch this PR stems from.
I'm hoping this can land in the Kubernetes 1.4 timeframe.
CC (potential code reviewers from Azure): @ahmetalpbalkan @brendandixon @paulmey
CC (other interested Azure folk): @brendandburns @johngossman @anandramakrishna @jmspring @jimzim
CC (others who've expressed interest): @codefx9 @edevil @thockin @rootfs
Automatic merge from submit-queue
To break the loop when object found in removeOrphanFinalizer()
Automatic merge from submit-queue
Allow shareable resources for admission control plugins.
Changes allow admission control plugins to share resources. This is done via a new PluginInitialization structure. The structure can be extended for other resources; for now it holds a shared informer for the namespace plugins (NamespaceLifecycle, NamespaceAutoProvisioning, NamespaceExists).
If a plugin needs some kind of shared resource, e.g. a client, the client shall be added to PluginInitializer and Wants methods implemented in every plugin that will use it.
Automatic merge from submit-queue
controller/service: minor cleanup
1. Always handle the short case first in if statements.
2. Do not capitalize error messages.
3. Put the mutex before the fields it protects.
4. Prefer switch over if/else-if.
Automatic merge from submit-queue
use a separate queue for initial quota calculation
When the quota controller gets backed up on resyncs, it can take a long time to observe the first usage stats which are needed by the admission plugin. This creates a second queue to prioritize the initial calculation.
Automatic merge from submit-queue
Fix "PVC Volume not detached if pod deleted via namespace deletion" issue
Fixes #29051: "PVC Volume not detached if pod deleted via namespace deletion"
This PR:
* Fixes a bug in `desired_state_of_the_world_populator.go` to check the value of `exists` returned by the `podInformer`, so that it can delete pods even if the delete event is missed (or fails).
* Reduces the desired state of the world populator's sleep period from 5 min to 1 min (reducing the amount of time a volume would remain attached if a volume delete event is missed or fails).
Automatic merge from submit-queue
Allow mounts to run in parallel for non-attachable volumes
This PR:
* Fixes https://github.com/kubernetes/kubernetes/issues/28616
* Enables mount volume operations to run in parallel for non-attachable volume plugins.
* Enables unmount volume operations to run in parallel for all volume plugins.
* Renames `GoRoutineMap` to `GoroutineMap`, resolving a long-standing request from @thockin: `"Goroutine" is a noun`
When a new rollout with a different size than the previous size of the
deployment is initiated, only the new replica set will notice the
new size. Old replica sets are not updated by the rollout path.