Automatic merge from submit-queue
Add storageClass.mountOptions and use it in all applicable plugins
split off from https://github.com/kubernetes/kubernetes/pull/50919 and still dependent on it. cc @gnufied
issue: https://github.com/kubernetes/features/issues/168
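A minimal Go sketch of the intended behavior (not the provisioner's actual code), assuming the `MountOptions` fields on StorageClass and PV that this PR and #50919 introduce; `applyClassMountOptions` is a hypothetical helper:
```go
// Sketch: whatever is listed in storageClass.mountOptions ends up on the
// provisioned PV's spec.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
)

// applyClassMountOptions is a hypothetical helper illustrating the behavior
// this PR adds.
func applyClassMountOptions(class *storagev1.StorageClass, pv *v1.PersistentVolume) {
	if len(class.MountOptions) > 0 {
		pv.Spec.MountOptions = class.MountOptions
	}
}

func main() {
	class := &storagev1.StorageClass{MountOptions: []string{"noatime", "hard"}}
	pv := &v1.PersistentVolume{}
	applyClassMountOptions(class, pv)
	fmt.Println(pv.Spec.MountOptions) // [noatime hard]
}
```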
```release-note
Add mount options field to StorageClass. The options listed there are automatically added to PVs provisioned using the class.
```
Automatic merge from submit-queue (batch tested with PRs 44719, 48454)
check job ActiveDeadlineSeconds
**What this PR does / why we need it**:
enqueue a sync task after ActiveDeadlineSeconds
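Rough sketch of the idea, assuming a standard rate-limited work queue; `enqueueAfterDeadline` and `key` are illustrative names, not the controller's actual code:
```go
// Rough sketch: enqueue a delayed sync so the job controller wakes up right
// when ActiveDeadlineSeconds expires.
package sketch

import (
	"time"

	batchv1 "k8s.io/api/batch/v1"
	"k8s.io/client-go/util/workqueue"
)

// enqueueAfterDeadline adds the job's key back to the queue once the active
// deadline passes; "key" would normally come from cache.MetaNamespaceKeyFunc.
func enqueueAfterDeadline(queue workqueue.RateLimitingInterface, key string, job *batchv1.Job) {
	if job.Spec.ActiveDeadlineSeconds == nil || job.Status.StartTime == nil {
		return
	}
	deadline := time.Duration(*job.Spec.ActiveDeadlineSeconds) * time.Second
	remaining := deadline - time.Since(job.Status.StartTime.Time)
	if remaining > 0 {
		// Wake the controller up exactly when the deadline passes.
		queue.AddAfter(key, remaining)
		return
	}
	queue.Add(key) // deadline already passed, sync immediately
}
```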
**Which issue this PR fixes** *:
fixes #32149
**Special notes for your reviewer**:
**Release note**:
```release-note
enqueue a sync task to wake up jobcontroller to check job ActiveDeadlineSeconds in time
```
Automatic merge from submit-queue (batch tested with PRs 44719, 48454)
Fix handling of APIserver errors when saving provisioned PVs.
When API server crashes *after* saving a provisioned PV and before sending
200 OK, the controller tries to save the PV again. In this case, it gets
AlreadyExists error, which should be interpreted as success and not as error.
Especially, a volume that corresponds to the PV should not be deleted in the
underlying storage.
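Illustrative sketch of the intended error handling (the `create` callback stands in for the API call that stores the PV; this is not the controller's actual code):
```go
// Sketch of the intended handling: AlreadyExists after a lost 200 OK means a
// previous attempt already stored the PV, so it counts as success.
package sketch

import apierrors "k8s.io/apimachinery/pkg/api/errors"

// saveProvisionedPV shows only the error handling at issue; the create
// callback stands in for the API call that stores the PV.
func saveProvisionedPV(create func() error) error {
	err := create()
	if err == nil || apierrors.IsAlreadyExists(err) {
		// Either this attempt stored the PV or a previous one already did;
		// both are success, so the backing volume must not be deleted.
		return nil
	}
	return err
}
```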
Fixes #44372
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 50919, 51410, 50099, 51300, 50296)
Remove failure check from deployment controller
@kubernetes/sig-apps-pr-reviews this check is useless w/o automatic rollback so I am removing it.
Automatic merge from submit-queue (batch tested with PRs 51441, 51356, 51460)
Don't update pvc.status.capacity if pvc is already Bound
As discussed here https://github.com/kubernetes/community/pull/657#discussion_r128008128, in order for `pvc.status.Capacity < pv.Spec.Capacity` to be the mechanism for volume filesystem resize, the pv controller should stop updating pvc.status.Capacity every resync period.
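A tiny sketch of the rule, with `shouldUpdateClaimCapacity` as a hypothetical helper name:
```go
// Sketch of the rule: once a claim is Bound, leave pvc.status.capacity alone
// so that pvc.Status.Capacity < pv.Spec.Capacity can signal a pending resize.
package sketch

import v1 "k8s.io/api/core/v1"

// shouldUpdateClaimCapacity is a hypothetical helper, not the controller's code.
func shouldUpdateClaimCapacity(claim *v1.PersistentVolumeClaim) bool {
	// Only touch capacity while the claim is still being bound.
	return claim.Status.Phase != v1.ClaimBound
}
```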
/assign @jsafrane
/sig storage
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51441, 51356, 51460)
fix the bad position of code comment
**What this PR does / why we need it**:
The position of a code comment is wrong; this moves it to the right position.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue
Add volume operation metrics to operation executor and PV controller
This PR implements the proposal for high level volume metrics https://github.com/kubernetes/community/pull/809
**Special notes for your reviewer**:
~Differences from proposal:~ all resolved
~"verify_volume" is now "verify_volumes_are_attached" + "verify_volumes_are_attached_per_node" + "verify_controller_attached_volume." Which of them do we want?~
~There is no "mount_device" metric because the MountVolume operation combines MountDevice and mount (plugin.Setup). Do we want to extract the mount_device metric or is it okay to keep mountvolume as one? For attachable volumes, MountDevice is the actual mount and Setup is a bindmount + setvolumeownership. For unattachable, mountDevice does not occur and Setup is an actual mount + setvolumeownership.~
~I did not implement the PV controller metrics following the proposal at all; I did not change goroutinemap nor scheduleOperation. Because provisionClaimOperation does not return an error, it's impossible for the caller to know whether there is actually a failure worth reporting, so I manually create a new metric inside the function according to some conditions.~
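For reference, a minimal sketch of what per-plugin operation metrics could look like with the Prometheus client; the metric and label names follow the proposal but are illustrative here:
```go
// Sketch of per-plugin, per-operation metrics with the Prometheus client.
package sketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	operationDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "storage_operation_duration_seconds",
			Help: "Time taken by volume operations",
		},
		[]string{"volume_plugin", "operation_name"},
	)
	operationErrors = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "storage_operation_errors_total",
			Help: "Number of failed volume operations",
		},
		[]string{"volume_plugin", "operation_name"},
	)
)

func init() {
	prometheus.MustRegister(operationDuration, operationErrors)
}

// recordOperation wraps an operation such as "volume_attach" or "volume_mount"
// and records its duration and any error for the given plugin.
func recordOperation(plugin, operation string, op func() error) error {
	start := time.Now()
	err := op()
	operationDuration.WithLabelValues(plugin, operation).Observe(time.Since(start).Seconds())
	if err != nil {
		operationErrors.WithLabelValues(plugin, operation).Inc()
	}
	return err
}
```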
@gnufied
I have tested the operationexecutor metrics but not provision & delete. Sample:
![screen shot 2017-08-02 at 15 01 08](https://user-images.githubusercontent.com/13111288/28889980-a7093526-7793-11e7-9aa9-ad7158be76fa.png)
**Release note**:
```release-note
Add error count and time-taken metrics for storage operations such as mount and attach, per-volume-plugin.
```
Automatic merge from submit-queue
simplify disruption controller finder logic
**What this PR does / why we need it**:
Address some comments from https://github.com/kubernetes/kubernetes/pull/45003 and simplify the PDB controller logic as part of issue https://github.com/kubernetes/kubernetes/issues/42284
@enisoc @kargakis @caesarxuchao
Also, it feels like we can get rid of the finders altogether, since with controller ref each pod has only one controller. Let me know if I should remove the finders altogether.
This change is a prerequisite for implementing the iSCSI attacher
and detacher.
In order to use CHAP authentication in the iSCSI plugin after
implementing the attacher and detacher, a secret is needed in
AttachDisk(), which is called from WaitForAttach().
To obtain the secret, pod information is required, but
WaitForAttach() doesn't receive pod information.
This patch adds 'pod' as an argument of WaitForAttach()
and updates the drivers that implement WaitForAttach().
Fixes #48953
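A trimmed-down sketch of the described signature change; `Spec` stands in for volume.Spec to keep the example self-contained, and only the changed method is shown:
```go
// Trimmed-down sketch of the interface change.
package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// Spec stands in for volume.Spec here.
type Spec struct{ Name string }

// Attacher shows only the changed method: the *v1.Pod parameter is new, so an
// iSCSI attacher can look up the CHAP secret from the pod's namespace.
type Attacher interface {
	WaitForAttach(spec *Spec, devicePath string, pod *v1.Pod, timeout time.Duration) (string, error)
}
```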
Automatic merge from submit-queue (batch tested with PRs 51391, 51338, 51340, 50773, 49599)
add a starting info log to the namespace controller.
**What this PR does / why we need it**:
add a starting info log to the namespace controller.
**Release note**:
NA
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)
Add InstanceExistsByProviderID to cloud provider interface for CCM
**What this PR does / why we need it**:
Currently, [`MonitorNode()`](02b520f0a4/pkg/controller/cloud/nodecontroller.go (L240)) in the node controller checks with the CCM whether a node still exists by calling `ExternalID(nodeName)`. `ExternalID` is supposed to return the provider ID of a node, which is not supported on every cloud. This means that any cloud that cannot remotely infer the provider ID from the node name will never remove nodes that no longer exist.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50985
**Special notes for your reviewer**:
We'll want to create a subsequent issue to track the implementation of these two new methods in the cloud providers.
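A trimmed sketch of the addition to the cloud provider `Instances` interface (only the method discussed here is shown; the real interface has many more methods):
```go
// Trimmed sketch of the cloud provider addition.
package sketch

type Instances interface {
	// InstanceExistsByProviderID returns true if the instance backing the
	// given provider ID still exists in the cloud, letting MonitorNode()
	// decide whether to remove a Node object without relying on ExternalID.
	InstanceExistsByProviderID(providerID string) (bool, error)
}
```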
**Release note**:
```release-note
Adds `InstanceExists` and `InstanceExistsByProviderID` to cloud provider interface for the cloud controller manager
```
/cc @wlan0 @thockin @andrewsykim @luxas @jhorwit2
/area cloudprovider
/sig cluster-lifecycle
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)
Dynamic Flexvolume plugin discovery, probing with filesystem watch.
**What this PR does / why we need it**: Enables dynamic Flexvolume plugin discovery. This model uses a filesystem watch (fsnotify library), which notifies the system that a probe is necessary only if something changes in the Flexvolume plugin directory.
This PR uses the dependency injection model in https://github.com/kubernetes/kubernetes/pull/49668.
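A minimal sketch of the watch-then-probe model using the fsnotify library; the plugin directory path and the `probeNeeded` flag are illustrative, not the actual plugin prober code:
```go
// Sketch of the watch-then-probe model using fsnotify.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	probeNeeded := true // probe once at startup; the prober would reset this after probing

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Hypothetical Flexvolume plugin directory; the real path is configurable.
	if err := watcher.Add("/usr/libexec/kubernetes/kubelet-plugins/volume/exec"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case event := <-watcher.Events:
			// Any change in the plugin directory marks a re-probe as needed.
			probeNeeded = true
			log.Printf("flexvolume dir changed (%v), probe needed: %v", event, probeNeeded)
		case watchErr := <-watcher.Errors:
			log.Printf("watch error: %v", watchErr)
		}
	}
}
```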
**Release Note**:
```release-note
Dynamic Flexvolume plugin discovery. Flexvolume plugins can now be discovered on the fly rather than only at system initialization time.
```
/sig storage
/assign @jsafrane @saad-ali
/cc @bassam @chakri-nelluri @kokhang @liggitt @thockin
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
Cloud Controller Manager now sets Node.Spec.ProviderID
**What this PR does / why we need it**:
Cloud Controller Manager now sets `Node.Spec.ProviderID`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/49836
**Special notes for your reviewer**:
* As part of an effort to move cloud controller manager into beta https://github.com/kubernetes/kubernetes/issues/48690.
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)
NodeConditionPredicates should return NodeOutOfDisk error.
**What this PR does / why we need it**:
In https://github.com/kubernetes/kubernetes/pull/49932, I moved the node condition check into a predicate, but it returned an incorrect error :(.
We also need to add more cases to `TestNodeShouldRunDaemonPod`, which is a key function of DaemonSet.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50594
**Release note**:
```release-note
None
```
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)
StatefulSet kubectl rollout command
**What this PR does / why we need it**: This PR implements StatefulSet kubectl rollout command, covering `history`, `status`, and `undo`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49890
**Special notes for your reviewer**:
**Release note**:
```release-note
kubectl rollout `history`, `status`, and `undo` subcommands now support StatefulSets.
```
Automatic merge from submit-queue (batch tested with PRs 50213, 50707, 49502, 51230, 50848)
Fix comment of cronjob utils.go
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/50951
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 51108, 51035, 50539, 51160, 50947)
Delete load balancers if the UIDs for services don't match.
An attempt to fix https://github.com/kubernetes/kubernetes/issues/43730
@thockin @djsly
Automatic merge from submit-queue
Allow attach of volumes to multiple nodes for vSphere
This is a fix for issue #50944, which does not allow a volume to be attached to a new node after the node where the volume was previously attached is powered off.
Current behaviour:
One of the cluster worker nodes was powered off in vCenter.
Pods running on this node were rescheduled on different nodes but got stuck in ContainerCreating. Attaching the volume on the new node failed with the error "Multi-Attach error for volume pvc-xxx, Volume is already exclusively attached to one node and can't be attached to another", so the application running in the pod had no data available because the volume was not attached to the new node. Since the volume was still attached to the powered-off node, every attempt to attach it on the new node failed with the "Multi-Attach error" and stayed stuck for 6 minutes, until the attach/detach controller forcefully detached the volume from the powered-off node. Only after those 6 minutes, once the volume was detached from the powered-off node, was it successfully attached on the new node and the application had its data available again.
What is expected to happen:
I would want the attach/detach controller to go ahead with attaching the volume on the new node where the pod got provisioned instead of waiting for the volume to be detached from the powered-off node. It is OK to eventually delete the volume on the powered-off node after 6 minutes. This way the application downtime is low and pods are up as soon as possible.
The current fix makes the attach/detach controller skip the multi-attach check for vSphere volumes/persistent volumes.
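Sketch of that exemption, with `isMultiAttachForbidden` as a simplified stand-in for the controller's real check (which also looks at access modes):
```go
// Sketch: vSphere-backed volumes skip the multi-attach check in the
// attach/detach controller.
package sketch

import v1 "k8s.io/api/core/v1"

func isMultiAttachForbidden(volumeSpec *v1.PersistentVolumeSpec) bool {
	if volumeSpec.VsphereVolume != nil {
		// Don't block the new attach while the volume is still attached to
		// the powered-off node; the old attach is cleaned up later.
		return false
	}
	// Default: treat the volume as exclusively attachable.
	return true
}
```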
@jingxu97 @saad-ali : Can you please take a look at it.
@tusharnt @divyenpatel @rohitjogvmw @luomiao
```release-note
Allow attach of volumes to multiple nodes for vSphere
```
Automatic merge from submit-queue (batch tested with PRs 38947, 50239, 51115, 51094, 51116)
Fix comment and typos in node_controller
**What this PR does / why we need it**:
1. fix comments to be more accurate
2. fix typos
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50980, 46902, 51051, 51062, 51020)
fix confusion in service_controller
**What this PR does / why we need it**:
Fix code and comment confusion in `service_controller`.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51009
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 50980, 46902, 51051, 51062, 51020)
Fix swallowed errors in statefulset tests
**What this PR does / why we need it**: Fixes errors that were being swallowed in the tests of the statefulset package.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 50806, 48789, 49922, 49935, 50438)
On AttachDetachController node status update, do not retry when node doesn't exist but keep the node entry in cache.
**What this PR does / why we need it**: An alternative fix for https://github.com/kubernetes/kubernetes/issues/42438 which also fixes#50721.
Instead of removing the node entry entirely from the node status update cache (which prevents the node from ever being updated even when it recovers), here the node status updater does nothing, so that there won't be an update retry until the node is re-added, at which point the cache entry is set to true.
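Illustrative sketch of the changed error handling; the cache is reduced to a plain map and this is not the actual updater code:
```go
// Sketch: skip the retry when the node is gone, but keep its cache entry.
package sketch

import apierrors "k8s.io/apimachinery/pkg/api/errors"

// handleNodeStatusUpdateError decides what to do after a failed status update.
func handleNodeStatusUpdateError(needsUpdate map[string]bool, nodeName string, err error) error {
	if apierrors.IsNotFound(err) {
		// Node doesn't exist: do nothing now (no retry), but keep the cache
		// entry so updates resume once the node is re-added.
		return nil
	}
	// Any other error: keep the entry marked so the next sync retries.
	needsUpdate[nodeName] = true
	return err
}
```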
Will cherry pick to prior versions after this is merged.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50721
**Release Note**:
```release-note
On AttachDetachController node status update, do not retry when node doesn't exist but keep the node entry in cache.
```
/assign @jingxu97
/cc @saad-ali
/sig storage
/release-note
A pod status change of unready -> ready results in a move from
the endpoint's unready endpoint addresses to its ready addresses
so if a pod update contains an unready -> ready status change,
the endpoint needs to be updated.
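A small sketch of the readiness-transition check this implies; `isPodReady` and `readinessChanged` are illustrative helpers, not the endpoint controller's actual functions:
```go
// Sketch of the readiness-transition check.
package sketch

import v1 "k8s.io/api/core/v1"

func isPodReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// readinessChanged reports whether a pod update moves the pod between the
// endpoint's unready and ready address lists, i.e. the endpoint must be updated.
func readinessChanged(oldPod, newPod *v1.Pod) bool {
	return isPodReady(oldPod) != isPodReady(newPod)
}
```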
Automatic merge from submit-queue (batch tested with PRs 51102, 50712, 51037, 51044, 51059)
fix #51043
**What this PR does / why we need it**: The StatefulSet controller no longer attempts to mutate the "hostname" or "subdomain" fields of the "pod.spec" to enforce the network identity of Pods in a StatefulSet. Since these fields are set upon creation and are immutable thereafter, setting the annotations is no longer necessary.
fixes: #51043
Automatic merge from submit-queue (batch tested with PRs 46458, 50934, 50766, 50970, 47698)
Skip non-update endpoint updates
**What this PR does / why we need it**:
On large clusters, a large percentage of endpoint updates are actually non-updates that occur as a result of a change in an associated pod. This results in endpoint updates where the only field that has changed is the `TargetRef.ResourceVersion` in the endpoint address associated with the changed pod. Given enough of these non-updates, the endpoint controller's queue rate limit can be overwhelmed and legitimate updates can be delayed, resulting in (temporarily) broken services. We have clusters where we've seen endpoint updates take 9 minutes.
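A sketch of the comparison this suggests, with hypothetical helper names: an endpoint write is skipped when the endpoints are identical once the churn-only `TargetRef.ResourceVersion` field is ignored:
```go
// Sketch: compare endpoints while ignoring TargetRef.ResourceVersion churn.
package sketch

import (
	"reflect"

	v1 "k8s.io/api/core/v1"
)

func stripTargetRefResourceVersions(subsets []v1.EndpointSubset) []v1.EndpointSubset {
	out := make([]v1.EndpointSubset, len(subsets))
	for i := range subsets {
		out[i] = *subsets[i].DeepCopy()
		for j := range out[i].Addresses {
			if out[i].Addresses[j].TargetRef != nil {
				out[i].Addresses[j].TargetRef.ResourceVersion = ""
			}
		}
		for j := range out[i].NotReadyAddresses {
			if out[i].NotReadyAddresses[j].TargetRef != nil {
				out[i].NotReadyAddresses[j].TargetRef.ResourceVersion = ""
			}
		}
	}
	return out
}

// endpointsEqualIgnoringResourceVersion reports whether an update would be a
// no-op once the churn-only field is ignored.
func endpointsEqualIgnoringResourceVersion(current, desired []v1.EndpointSubset) bool {
	return reflect.DeepEqual(
		stripTargetRefResourceVersions(current),
		stripTargetRefResourceVersions(desired),
	)
}
```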
**Which issue this PR fixes**: fixes #50936
**Special notes for your reviewer**:
N/A
**Release note**:
```release-note
Prevent unneeded endpoint updates
```
Automatic merge from submit-queue (batch tested with PRs 46458, 50934, 50766, 50970, 47698)
Prepare VolumeHost for running mount tools in containers
This is the first part of implementation of https://github.com/kubernetes/features/issues/278 - running mount utilities in containers.
It updates `VolumeHost` interface:
* `GetMounter()` now requires the volume plugin name, as it is going to return different mounters to different volume plugins, because the mount utilities for these plugins can be in different places.
* New `GetExec()` method that volume plugins should use to execute any utilities. This new `Exec` interface will execute them in the proper place.
* `SafeFormatAndMount` is updated to the new `Exec` interface.
This is just a preparation; `GetExec` right now leads to a simple `os.Exec`, and mount utilities are executed in the same place as before. The volume plugins themselves will be updated in subsequent PRs (split into separate PRs, since some plugins required a lot of changes).
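Trimmed-down sketch of the updated `VolumeHost` surface; types and method sets are simplified for illustration and are not the real interfaces:
```go
// Trimmed-down sketch of the updated VolumeHost surface.
package sketch

// Exec runs mount utilities "in the proper place" -- today on the host,
// later possibly inside a dedicated container.
type Exec interface {
	Run(cmd string, args ...string) ([]byte, error)
}

// Mounter is a stand-in for the mount interface volume plugins use.
type Mounter interface {
	Mount(source, target, fstype string, options []string) error
}

// VolumeHost shows only the two methods touched by this PR.
type VolumeHost interface {
	// GetMounter now takes the plugin name, because different plugins may get
	// different mounters (their mount utilities can live in different places).
	GetMounter(pluginName string) Mounter

	// GetExec returns the Exec a plugin should use to run any utilities.
	GetExec(pluginName string) Exec
}
```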
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
@rootfs @gnufied