Automatic merge from submit-queue (batch tested with PRs 60073, 58519, 61860). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
flexvolume prober: trigger plugin init only for the relevant plugin
**What this PR does / why we need it**:
Automatic discovery now triggers init only for the specific plugin directory that was updated, not for every plugin under the Flexvolume plugin directory.
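A minimal Go sketch of the idea, assuming a hypothetical `pluginDirFromEvent` helper (this is not the actual prober code): a changed path is mapped back to the single plugin directory that owns it, so only that plugin is re-inited.

```go
// Sketch only: maps a filesystem event to one plugin directory.
package prober

import (
	"fmt"
	"path/filepath"
	"strings"
)

// pluginDirFromEvent maps a changed path under the Flexvolume root
// (e.g. /usr/libexec/kubernetes/kubelet-plugins/volume/exec) to the
// single plugin directory that owns it, so only that plugin is
// re-probed instead of all plugins under the root.
func pluginDirFromEvent(root, changedPath string) (string, error) {
	rel, err := filepath.Rel(root, changedPath)
	if err != nil {
		return "", err
	}
	parts := strings.Split(rel, string(filepath.Separator))
	if len(parts) == 0 || parts[0] == "." || parts[0] == ".." {
		return "", fmt.Errorf("path %q is outside plugin root %q", changedPath, root)
	}
	// The first path element (e.g. "k8s~nfs") identifies the plugin.
	return filepath.Join(root, parts[0]), nil
}
```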
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #58352
**Special notes for your reviewer**:
NONE
**Release note**:
```
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46973, 48556)
Improve error reporting when a flex driver fails to initialize
**What this PR does / why we need it**:
This PR improves error reporting for the case when a flex driver fails to initialize. There are two improvements:
1) Show only the plugin name instead of the full struct. This makes the message shorter and removes useless internal information.
Before:
>E0605 16:44:59.330215 26786 plugins.go:359] Failed to load volume plugin &{k8s/nfs /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs %!s(*kubelet.kubeletVolumeHost=&{0xc431ea5800 {{1 0} map[kubernetes.io/downward-api:0xc431ee3f20 kubernetes.io/aws-ebs:0xc431ee3eb0 kubernetes.io/git-repo:0xc431ee3ef0 kubernetes.io/host-path:0xc430e985f0 kubernetes.io/rbd:0xc42bfab840 kubernetes.io/quobyte:0xc431ee3f00 kubernetes.io/fc:0xc42bfab980 kubernetes.io/empty-dir:0xc431ee3ed0 kubernetes.io/nfs:0xc430e98640 kubernetes.io/iscsi:0xc42bfab720 kubernetes.io/glusterfs:0xc430faaba0 kubernetes.io/cinder:0xc42bfab8c0 kubernetes.io/gce-pd:0xc431ee3ee0 kubernetes.io/secret:0xc42bfab6a0 kubernetes.io/flocker:0xc431ee3f30 kubernetes.io/cephfs:0xc431ee3f10]} 0xc42698cf40}) %!s(*exec.executor=&{}) {%!s(int32=0) %!s(uint32=0)} []}, error: unexpected end of JSON input
After:
>E0605 16:59:45.520185 29041 plugins.go:359] Failed to load volume plugin k8s/nfs, error: unexpected end of JSON input
2) Quote the script output. When the output is empty, the messages read a bit better:
Before:
> E0605 16:44:59.330077 26786 driver-call.go:212] Failed to unmarshal output for command: init, **output: **, error: unexpected end of JSON input
> W0605 16:44:59.330170 26786 driver-call.go:140] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s\~nfs/nfs, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s\~nfs/nfs: permission denied, **output: **
After:
>E0605 16:59:45.519906 29041 driver-call.go:212] Failed to unmarshal output for command: init, **output: ""**, error: unexpected end of JSON input
>W0605 16:59:45.520109 29041 driver-call.go:140] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s\~nfs/nfs, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s\~nfs/nfs: permission denied, **output: ""**
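A minimal Go sketch of both changes, with hypothetical helper names (`loadError`, `callError`); the real Flexvolume code formats these messages inline:

```go
// Sketch only: illustrates the two message-formatting changes.
package driver

import "fmt"

// loadError reports only the plugin name, not the whole plugin struct.
func loadError(pluginName string, err error) error {
	return fmt.Errorf("failed to load volume plugin %s, error: %v", pluginName, err)
}

// callError quotes the driver output with %q, so an empty output shows
// up as "" instead of vanishing from the message.
func callError(executable string, args []string, output string, err error) error {
	return fmt.Errorf("driver call failed: executable: %s, args: %v, error: %v, output: %q",
		executable, args, err, output)
}
```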
Automatic merge from submit-queue (batch tested with PRs 44601, 44842, 44893, 44491, 44588)
Define const annotation variable once
We do not need to define the annotation constant twice, once in pkg/volume and again in pkg/volume/validation.
**Release note**:
```release-note
NONE
```
When the attach/detach controller crashes and a pod with an attached PV is
deleted afterwards, the controller will never detach the pod's attached
volumes. To prevent this, the controller should try to recover the state
from the nodes' status.
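A minimal sketch of the recovery step, using core/v1 types and a hypothetical `attachedVolumeCache` stand-in for the controller's actual state of the world:

```go
// Sketch only: seeds the actual state of the world from node status.
package reconciler

import (
	v1 "k8s.io/api/core/v1"
)

// attachedVolumeCache is a stand-in for the controller's actual state
// of the world cache.
type attachedVolumeCache interface {
	MarkVolumeAsAttached(volumeName v1.UniqueVolumeName, nodeName string, devicePath string)
}

// recoverFromNodeStatus repopulates the cache from
// node.Status.VolumesAttached after a controller restart, so volumes
// attached before the crash can still be detached once their pods are
// gone.
func recoverFromNodeStatus(nodes []v1.Node, actualState attachedVolumeCache) {
	for _, node := range nodes {
		for _, attached := range node.Status.VolumesAttached {
			actualState.MarkVolumeAsAttached(attached.Name, node.Name, attached.DevicePath)
		}
	}
}
```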
This implements bulk volume polling using ideas presented by Justin in
https://github.com/kubernetes/kubernetes/pull/39564, but changes the
implementation to use an interface so that other implementations are not
affected.
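A minimal sketch of the interface-based approach; the names here (`BulkPoller`, `VolumeID`, `NodeName`) are illustrative simplifications, not the real plugin API:

```go
// Sketch only: plugins that can verify many volumes in one call opt in
// by implementing an interface; others keep their existing path.
package bulkpoll

// VolumeID and NodeName are simplified stand-ins for the real types.
type VolumeID string
type NodeName string

// BulkPoller is implemented only by plugins (e.g. cloud providers)
// that can check attachment state for many volumes at once.
type BulkPoller interface {
	BulkVerifyVolumes(volumesByNode map[NodeName][]VolumeID) (map[NodeName]map[VolumeID]bool, error)
}

// verify uses the bulk path only when the plugin opts in, leaving the
// per-volume polling path of all other plugins unchanged.
func verify(plugin interface{}, volumesByNode map[NodeName][]VolumeID) (map[NodeName]map[VolumeID]bool, error) {
	if bp, ok := plugin.(BulkPoller); ok {
		return bp.BulkVerifyVolumes(volumesByNode)
	}
	return nil, nil // caller falls back to the regular one-by-one path
}
```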
This fix prevents the PV controller from forcefully overwriting the provisioned volume's name with the generated PV name. Instead, it allows dynamic provisioner implementers to set the name of the volume to a value that they choose.
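In sketch form (a hypothetical `setPVName` helper, simplified from the real controller flow), the change amounts to:

```go
// Sketch only: keep a provisioner-chosen name when one is set.
package persistentvolume

// setPVName keeps a name chosen by the dynamic provisioner and only
// falls back to the generated PV name when the provisioner left the
// name empty.
func setPVName(provisionedName, generatedName string) string {
	if provisionedName != "" {
		return provisionedName
	}
	return generatedName
}
```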
The kube-controller-manager has two command-line arguments
(--pv-recycler-pod-template-filepath-hostpath and
--pv-recycler-pod-template-filepath-nfs) that specify a recycler pod
template. Such a template might not contain the volume that is to be
recycled, so a check is added to make sure that the recycler pod template
contains at least one volume.
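A minimal sketch of the added check, using core/v1 types; the real validation lives in the controller's option handling and returns richer errors:

```go
// Sketch only: reject recycler pod templates with no volumes.
package validation

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// checkRecyclerPodTemplate rejects templates that carry no volume at
// all, since the volume to be recycled is injected into the template.
func checkRecyclerPodTemplate(pod *v1.Pod) error {
	if len(pod.Spec.Volumes) < 1 {
		return fmt.Errorf("recycler pod template %q must contain at least one volume", pod.Name)
	}
	return nil
}
```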
The Gluster provisioner is interested in pvc.Namespace, and I don't want
to add it as a new field in VolumeOptions - it would end up containing
almost the whole PVC. Let's pass a direct reference to the PVC instead and
let the provisioner pick the information it is interested in.
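A minimal sketch of the VolumeOptions change, with simplified types (the real struct carries many more fields):

```go
// Sketch only: pass the PVC by reference instead of copying fields.
package volume

import (
	v1 "k8s.io/api/core/v1"
)

// VolumeOptions hands the whole PVC to the provisioner by reference,
// so each provisioner can pick whatever it needs - e.g. the Gluster
// provisioner reads options.PVC.Namespace - without VolumeOptions
// growing a new field per use case.
type VolumeOptions struct {
	PVName string
	PVC    *v1.PersistentVolumeClaim
}
```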
This allows users to diagnose what's wrong with the recycler. Recycler
pods are started automatically with a cryptic name, and they are deleted
immediately when they finish.
`kubectl describe pods` will show:
```
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
59m 59m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(5421800e-347b-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
53m 53m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(3c9809e5-347c-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
46m 46m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(250dd2a2-347d-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
40m 40m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(0d84ea33-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
33m 33m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(f5fb63bf-347e-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
27m 27m 1 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Unable to mount volumes for pod "recycler-for-nfs_default(de7128fd-347f-11e6-a79b-3c970e965218)": timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
1h 3m 75 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Successfully assigned recycler-for-nfs to 127.0.0.1
1h 3m 76 {persistentvolume-controller } Normal RecyclerPod Recycler pod: Pod was active on the node longer than specified deadline
1h 1m 12 {persistentvolume-controller } Warning RecyclerPod Recycler pod: Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "recycler-for-nfs"/"default". list of unattached/unmounted volumes=[vol]
20m 1m 4 {persistentvolume-controller } Warning RecyclerPod (events with common reason combined)
```
These steps were necessary:
- added an event watcher to volume.RecycleVolumeByWatchingPodUntilCompletion
- pass all these events through the volume plugins to the volume controller
- rework the volume.RecycleVolumeByWatchingPodUntilCompletion unit tests
into a table-driven form (there was too much copy-paste); see the sketch
after this list
- fix all unit tests along the way
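A minimal sketch of the table-driven shape the tests were moved to; the case names and the `runFakeRecycler` helper are illustrative only, not the real test code:

```go
// Sketch only: table-driven test skeleton for the recycler watcher.
package recycle_test

import (
	"fmt"
	"testing"
)

// runFakeRecycler is a hypothetical stand-in for the real pod watcher.
func runFakeRecycler(events []string) error {
	for _, e := range events {
		if e == "FailedMount" {
			return fmt.Errorf("recycler pod failed: %s", e)
		}
	}
	return nil
}

func TestRecycleVolumeByWatchingPod(t *testing.T) {
	tests := []struct {
		name      string
		podEvents []string // events seen while the recycler pod runs
		wantErr   bool
	}{
		{name: "pod succeeds", podEvents: []string{"Scheduled", "Started"}, wantErr: false},
		{name: "mount timeout", podEvents: []string{"FailedMount"}, wantErr: true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := runFakeRecycler(tt.podEvents)
			if (err != nil) != tt.wantErr {
				t.Errorf("got err=%v, wantErr=%v", err, tt.wantErr)
			}
		})
	}
}
```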
Currently, kubelet volume management works on the concept of a desired
world and an actual world of states. The volume manager periodically
compares the two worlds and performs volume mount/unmount and/or
attach/detach operations. When kubelet restarts, the caches of those two
worlds are gone. Although the desired world can be recovered through the
apiserver, the actual world cannot, which means some volumes may never be
cleaned up if their information has already been deleted from the
apiserver. This change adds reconstruction of the actual world by reading
the pod directories from disk. The reconstructed volume information is
added to both the desired world and the actual world if it cannot be found
in either. The rest of the logic is the same as before: the desired world
populator may clean up a volume entry if it is no longer in the apiserver,
and the volume manager then invokes unmount to clean it up.
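A minimal sketch of the reconstruction scan, assuming the standard kubelet layout /var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume>; the real code also maps each directory back to a volume plugin and rebuilds full volume specs:

```go
// Sketch only: walk pod directories left on disk after a restart.
package reconstruct

import (
	"os"
	"path/filepath"
)

type reconstructedVolume struct {
	PodUID     string
	PluginName string
	VolumeName string
}

// scanPodVolumeDirs walks the pod directories on disk and returns the
// volumes kubelet was managing before it restarted, so they can be
// fed back into the desired and actual worlds.
func scanPodVolumeDirs(podsDir string) ([]reconstructedVolume, error) {
	var vols []reconstructedVolume
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		pluginsDir := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := os.ReadDir(pluginsDir)
		if err != nil {
			continue // this pod has no volumes directory
		}
		for _, plugin := range plugins {
			names, err := os.ReadDir(filepath.Join(pluginsDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, name := range names {
				vols = append(vols, reconstructedVolume{
					PodUID:     pod.Name(),
					PluginName: plugin.Name(),
					VolumeName: name.Name(),
				})
			}
		}
	}
	return vols, nil
}
```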