flexVolumeMounter.SetUpAt creates the directory where a volume has to be
mounted, and we have to remove this directory in error cases. Otherwise
we will see errors like this:
Orphaned pod 673b66d9-70f7-11e7-a5fa-525400307392 found, but volume paths are still present on disk
https://github.com/kubernetes/kubernetes/issues/49383
Signed-off-by: Andrei Vagin <avagin@virtuozzo.com>
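A minimal sketch of the cleanup pattern described above, with hypothetical helper names (`setUpDir`, `doMount`) rather than the actual flexvolume code:

```go
package flexutil

import (
	"fmt"
	"os"
)

// doMount stands in for the real mount call; a hypothetical stub so the
// sketch is self-contained.
func doMount(dir string) error { return fmt.Errorf("mount failed for %s", dir) }

// setUpDir sketches the fix: the directory is created here, so if the mount
// step fails we remove it again and no stale volume path is left on disk
// for the orphaned-pod check to complain about.
func setUpDir(dir string) error {
	if err := os.MkdirAll(dir, 0750); err != nil {
		return err
	}
	if err := doMount(dir); err != nil {
		os.Remove(dir) // remove the directory we just created
		return err
	}
	return nil
}
```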
For bind-mounted directories, the isNotMounted check, which calls
IsLikelyNotMountPoint, fails because the mounted location and its parent
directory are on the same filesystem.
Addressing:
unmounter.go:59] Warning: Path: /path/.../test-dir already unmounted
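For background, IsLikelyNotMountPoint is a heuristic that compares the device numbers of a path and its parent; a bind mount from the same filesystem shares the device number, so the check misreports the path as unmounted. A rough sketch of that heuristic (not the real k8s.io mount utility implementation; assumes Linux):

```go
package mountcheck

import (
	"os"
	"path/filepath"
	"syscall"
)

// isLikelyNotMountPoint sketches the device-number heuristic: a path is
// assumed to be a mount point only if its device differs from its parent's.
// A bind mount from the same filesystem shares the device number, so this
// reports "not a mount point" even though the path is mounted -- which is
// why the unmounter logs "already unmounted" for bind mounts.
func isLikelyNotMountPoint(path string) (bool, error) {
	stat, err := os.Stat(path)
	if err != nil {
		return true, err
	}
	parent, err := os.Stat(filepath.Dir(path))
	if err != nil {
		return true, err
	}
	// Linux-specific: Sys() is a *syscall.Stat_t on Linux.
	return stat.Sys().(*syscall.Stat_t).Dev == parent.Sys().(*syscall.Stat_t).Dev, nil
}
```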
Automatic merge from submit-queue (batch tested with PRs 46973, 48556)
Improve error reporting when flex driver has failed to initialize
**What this PR does / why we need it**:
This PR improves error reporting for the case when a flex driver fails to initialize. There are 2 improvements (sketched in code after the examples below):
1) Show only the plugin name instead of the full struct. This makes the message shorter and drops useless internal information.
Before:
>E0605 16:44:59.330215 26786 plugins.go:359] Failed to load volume plugin &{k8s/nfs /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs %!s(*kubelet.kubeletVolumeHost=&{0xc431ea5800 {{1 0} map[kubernetes.io/downward-api:0xc431ee3f20 kubernetes.io/aws-ebs:0xc431ee3eb0 kubernetes.io/git-repo:0xc431ee3ef0 kubernetes.io/host-path:0xc430e985f0 kubernetes.io/rbd:0xc42bfab840 kubernetes.io/quobyte:0xc431ee3f00 kubernetes.io/fc:0xc42bfab980 kubernetes.io/empty-dir:0xc431ee3ed0 kubernetes.io/nfs:0xc430e98640 kubernetes.io/iscsi:0xc42bfab720 kubernetes.io/glusterfs:0xc430faaba0 kubernetes.io/cinder:0xc42bfab8c0 kubernetes.io/gce-pd:0xc431ee3ee0 kubernetes.io/secret:0xc42bfab6a0 kubernetes.io/flocker:0xc431ee3f30 kubernetes.io/cephfs:0xc431ee3f10]} 0xc42698cf40}) %!s(*exec.executor=&{}) {%!s(int32=0) %!s(uint32=0)} []}, error: unexpected end of JSON input
After:
>E0605 16:59:45.520185 29041 plugins.go:359] Failed to load volume plugin k8s/nfs, error: unexpected end of JSON input
2) Quote the script output. When the output is empty, the messages look a bit better:
Before:
> E0605 16:44:59.330077 26786 driver-call.go:212] Failed to unmarshal output for command: init, **output: **, error: unexpected end of JSON input
> W0605 16:44:59.330170 26786 driver-call.go:140] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs/nfs, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs/nfs: permission denied, **output: **
After:
>E0605 16:59:45.519906 29041 driver-call.go:212] Failed to unmarshal output for command: init, **output: ""**, error: unexpected end of JSON input
>W0605 16:59:45.520109 29041 driver-call.go:140] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs/nfs, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/k8s~nfs/nfs: permission denied, **output: ""**
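Roughly, the two improvements boil down to formatting choices in the error paths; a sketch only, not the exact plugins.go / driver-call.go changes:

```go
package flexlog

import "fmt"

// formatLoadError sketches change (1): report only the plugin name rather
// than printing the whole plugin struct with %s.
func formatLoadError(pluginName string, err error) string {
	return fmt.Sprintf("Failed to load volume plugin %s, error: %s", pluginName, err)
}

// formatUnmarshalError sketches change (2): quote the driver output with %q
// so an empty output shows up as "" instead of disappearing from the message.
func formatUnmarshalError(cmd, output string, err error) string {
	return fmt.Sprintf("Failed to unmarshal output for command: %s, output: %q, error: %s", cmd, output, err)
}
```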
Automatic merge from submit-queue (batch tested with PRs 47878, 47503, 47857)
Remove controller node plugin driver dependency for non-attachable flex volume drivers (Ex: NFS).
**What this PR does / why we need it**:
Removes the requirement to install flex volume drivers on the master node for non-attachable drivers like NFS.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47109
```release-note
Fixes an issue with Flex volume, introduced in 1.6.0, where drivers without an attacher would fail (the node waiting indefinitely for attach). Drivers that don't implement attach should return `attach: false` on `init`.
```
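For reference, a driver's `init` reply can advertise that it is not attachable, as the release note above says. A minimal sketch of how the kubelet side might model that reply (struct and field names here are illustrative, not the exact FlexVolume API types):

```go
package flexinit

import "encoding/json"

// capabilities models the "capabilities" block a driver may return from init;
// Attach=false tells Kubernetes not to expect an attacher for this driver.
// Illustrative only -- not the upstream type definitions.
type capabilities struct {
	Attach bool `json:"attach"`
}

type initStatus struct {
	Status       string        `json:"status"`
	Capabilities *capabilities `json:"capabilities,omitempty"`
}

// isAttachable reports whether the init output declares an attachable driver.
// In this sketch, missing capabilities default to attachable, matching the
// pre-existing behavior the PR description assumes.
func isAttachable(initOutput []byte) (bool, error) {
	var s initStatus
	if err := json.Unmarshal(initOutput, &s); err != nil {
		return true, err
	}
	if s.Capabilities == nil {
		return true, nil
	}
	return s.Capabilities.Attach, nil
}
```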
Automatic merge from submit-queue (batch tested with PRs 47851, 47824, 47858, 46099)
Revert 44714 manually
#44714 broke backward compatibility for the old swagger spec that kubectl still uses. The decision on #47448 was to revert this change, but the change was not automatically revertible. Here I semi-manually remove all references to UnixUserID and UnixGroupID and update the generated files accordingly.
Please wait for tests to pass before reviewing, as there may still be failing tests.
Fixes #47448
Adding a release note only because the original PR has one. If possible, we should remove both release notes, as they cancel each other out.
**Release note**: (removed by caesarxuchao)
UnixUserID and UnixGroupID are reverted back to int64 to keep backward compatibility.
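For illustration, the reverted shape of the affected fields: user and group IDs are plain int64 pointers again (a sketch mirroring the core API's SecurityContext fields, not the generated code itself):

```go
package api

// securityContextSketch shows the reverted field types; names mirror the
// core API's SecurityContext and are shown here only for illustration.
type securityContextSketch struct {
	RunAsUser *int64 `json:"runAsUser,omitempty"`
	FSGroup   *int64 `json:"fsGroup,omitempty"`
}
```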
Automatic merge from submit-queue (batch tested with PRs 34515, 47236, 46694, 47819, 47792)
remove unused constant
**What this PR does / why we need it**:
In the flexvolume constant definitions, fix the StatusFailure string to be "Failure", not "Failed", at
b359034817/pkg/volume/flexvolume/flexvolume_util.go (L45)
**Which issue this PR fixes** _(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)_: fixes #34510
**Special notes for your reviewer**:
Simple string literal change, but hopefully it will prevent future confusion for developers.
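For context, if the fix lands as described above, the status string constants would read roughly like this (a sketch of the constants, not the full flexvolume_util.go):

```go
package flexvolume

// Status strings returned by flex drivers; the fix makes StatusFailure
// carry the literal the drivers actually emit ("Failure", not "Failed").
// Sketch only.
const (
	StatusSuccess = "Success"
	StatusFailure = "Failure"
)
```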
Automatic merge from submit-queue (batch tested with PRs 38751, 44282, 46382, 47603, 47606)
Adding 'flexvolume' prefix to FlexVolume plugin names.
**What this PR does / why we need it**: Adds a prefix to FlexVolume plugin names in order to more easily identify plugins as FlexVolume. Improves debugging.
**Special notes for your reviewer**: Unfortunately the delimiter after 'flexvolume' is restricted to either '-' or '.'. This makes the prefix seem like it's part of the vendor name. Not sure if this could cause issues later on.
**Release note**:
```release-note
NONE
```
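Concretely, the resulting names look like `flexvolume-<vendor>/<driver>`; a sketch of building such a prefixed name (the real constant and any escaping live in the flexvolume plugin code and may differ):

```go
package flexname

// flexVolumePluginNamePrefix is illustrative; the actual prefix and its
// delimiter ('-' or '.') are constrained as discussed above.
const flexVolumePluginNamePrefix = "flexvolume-"

// pluginNameFor builds a prefixed plugin name from a driver name such as
// "k8s/nfs", yielding "flexvolume-k8s/nfs" in this sketch.
func pluginNameFor(driverName string) string {
	return flexVolumePluginNamePrefix + driverName
}
```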
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564,
but it changes the implementation to use an interface
and doesn't affect other implementations.
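A minimal sketch of the interface-based approach (interface and method names are illustrative; the actual bulk-polling interface may differ):

```go
package bulkpoll

// NodeName and VolumeSpec are placeholder types for this sketch.
type NodeName string
type VolumeSpec struct{ Name string }

// BulkVolumeVerifier is an optional interface a volume plugin can implement
// to verify attachment of many volumes across many nodes in a single call,
// instead of polling each volume individually. Plugins that do not implement
// it keep the existing per-volume behavior, so other implementations are
// unaffected. Sketch only; not the exact upstream signature.
type BulkVolumeVerifier interface {
	BulkVerifyVolumes(volumesByNode map[NodeName][]*VolumeSpec) (map[NodeName]map[*VolumeSpec]bool, error)
}
```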
Automatic merge from submit-queue
Curating Owners: pkg/volume
cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people who have made mechanical code changes (e.g. changing the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned it based on all-time commit data, recent
commit data, and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull-request is made editable; please edit the `OWNERS` file to
remove the names of people that shouldn't be reviewing code in the
future in the **reviewers** section. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Currently kubelet volume management works on the concept of desired
and actual worlds of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When kubelet restarts, the cache of those two worlds is
gone. Although the desired world can be recovered through the apiserver,
the actual world cannot, which means some volumes may never be cleaned
up if their information has been deleted from the apiserver. This change
adds reconstruction of the actual world by reading the pod directories
from disk. The reconstructed volume information is added to both the
desired world and the actual world if it cannot be found in either. The
rest of the logic is the same as before: the desired world populator may
clean up a volume entry if it is no longer in the apiserver, and then the
volume manager should invoke unmount to clean it up.
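A rough sketch of the reconstruction step: walk the per-pod volume directories under the kubelet root and re-register whatever is found there (the `pods/<uid>/volumes/<plugin>/<volume>` layout is the conventional kubelet layout; helper and type names are hypothetical):

```go
package reconstruct

import (
	"os"
	"path/filepath"
)

// reconstructedVolume is a hypothetical record of a volume found on disk.
type reconstructedVolume struct {
	PodUID     string
	PluginName string
	VolumeName string
	Path       string
}

// scanPodVolumeDirs walks <kubeletRoot>/pods/<uid>/volumes/<plugin>/<volume>
// and returns one record per volume directory found. The volume manager can
// then add any volume missing from both worlds to the desired and actual
// state of world, and let normal reconciliation unmount it once the
// apiserver no longer knows about the pod. Sketch only.
func scanPodVolumeDirs(kubeletRoot string) ([]reconstructedVolume, error) {
	var found []reconstructedVolume
	podsDir := filepath.Join(kubeletRoot, "pods")
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		volumesDir := filepath.Join(podsDir, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volumesDir)
		if err != nil {
			continue // this pod has no volumes directory
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(volumesDir, plugin.Name()))
			if err != nil {
				continue
			}
			for _, vol := range vols {
				found = append(found, reconstructedVolume{
					PodUID:     pod.Name(),
					PluginName: plugin.Name(),
					VolumeName: vol.Name(),
					Path:       filepath.Join(volumesDir, plugin.Name(), vol.Name()),
				})
			}
		}
	}
	return found, nil
}
```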
This commit adds a new volume manager in kubelet that synchronizes
volume mount/unmount (and attach/detach, if the attach/detach controller
is not enabled).
This eliminates the race conditions between the pod creation loop
and the orphaned volumes loops. It also removes unmount/detach
from the `syncPod()` path so volume cleanup never blocks the
`syncPod` loop.
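The synchronization described here can be pictured as a single reconciliation pass over the two worlds (a simplified sketch with hypothetical types; the real reconciler also drives attach/detach when the controller is disabled):

```go
package volumemanager

// Volume is a placeholder type for this sketch.
type Volume struct{ Name string }

// world exposes the volumes a state-of-world cache currently tracks.
type world interface {
	Volumes() map[string]Volume
}

// reconcile compares the desired and actual worlds once: volumes that should
// be mounted but are not get mounted; volumes that are mounted but no longer
// desired get unmounted. Running this in its own loop keeps mount/unmount
// out of the syncPod path. Sketch only.
func reconcile(desired, actual world, mount, unmount func(Volume) error) {
	desiredVols := desired.Volumes()
	actualVols := actual.Volumes()
	for name, v := range desiredVols {
		if _, ok := actualVols[name]; !ok {
			_ = mount(v)
		}
	}
	for name, v := range actualVols {
		if _, ok := desiredVols[name]; !ok {
			_ = unmount(v)
		}
	}
}
```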