Automatic merge from submit-queue.
Add volume spec to mountedPod in actual state of world
Add volume spec into mountedPod data struct in the actual state of the
world.
Fixes issue #61248
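As a rough illustration of the data-structure change (everything below is simplified; the real struct in actual_state_of_world.go has more fields and different types):

```go
package cache

// VolumeSpec is a stand-in for volume.Spec.
type VolumeSpec struct{ Name string }

// mountedPod now also remembers the volume spec it was mounted with, so later
// cleanup can use the stored spec instead of re-deriving it.
type mountedPod struct {
	podName    string
	podUID     string
	volumeSpec *VolumeSpec // newly stored alongside the existing fields
}
```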
Users must not be allowed to step outside the volume with subPath.
Therefore the final subPath directory must be "locked" somehow
and checked to make sure it is inside the volume.
On Windows, we lock the directories. On Linux, we bind-mount the final
subPath into /var/lib/kubelet/pods/<uid>/volume-subpaths/<container name>/<subPathName>;
it can't be changed to a symlink by the user once it's bind-mounted.
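A rough sketch of the Linux-side idea follows; the helper name and the exec-based mount are illustrative only, while the real kubelet uses mount syscalls plus additional file-descriptor based checks:

```go
package subpath

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// bindMountSubPath sketches the idea described above: resolve the user's
// subPath once, then bind-mount the resolved directory to a kubelet-owned
// location so a later symlink swap inside the volume cannot redirect it.
func bindMountSubPath(podUID, containerName, subPathName, resolvedSubPath string) (string, error) {
	target := filepath.Join("/var/lib/kubelet/pods", podUID,
		"volume-subpaths", containerName, subPathName)
	// Calling the mount binary keeps this sketch short; the real code does not
	// shell out and also verifies the resolved path stays inside the volume.
	if out, err := exec.Command("mount", "--bind", resolvedSubPath, target).CombinedOutput(); err != nil {
		return "", fmt.Errorf("bind mount of %q failed: %v: %s", resolvedSubPath, err, out)
	}
	return target, nil
}
```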
If a plugin is non-attachable, the global unmap path isn't stored in the actual
state of the world (asw), so the plugin fails to unmap the volume. To store the path,
this PR moves the MarkDeviceAsMounted operation out of the `if volumeAttacher != nil` block.
Fixes #60025
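A minimal sketch of the restructure, assuming a trimmed-down actual-state interface (this is not the real operation_generator code):

```go
package operationsketch

// ActualStateOfWorld is a trimmed stand-in for the real interface.
type ActualStateOfWorld interface {
	MarkDeviceAsMounted(volumeName string) error
}

// markDeviceMountedForAllPlugins sketches the change: before, MarkDeviceAsMounted
// was reached only inside the attachable-plugin branch; after, it runs for
// non-attachable plugins too, so the global path is recorded either way.
func markDeviceMountedForAllPlugins(attachable bool, asw ActualStateOfWorld, volumeName string) error {
	if attachable {
		// attach-specific steps (WaitForAttach, MountDevice, ...) stay here
	}
	// Moved out of the attachable-only branch:
	return asw.MarkDeviceAsMounted(volumeName)
}
```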
Automatic merge from submit-queue.
Redesign and implement volume reconstruction work
This PR is the first part of the redesign of the volume reconstruction work. The detailed design is at https://github.com/kubernetes/community/pull/1601
The changes include:
1. Remove the dependency on the volume spec stored in the actual state for the volume
cleanup process (UnmountVolume and UnmountDevice).
Modify the AttachedVolume struct to add DeviceMountPath so that the volume
unmount operation can use this information instead of constructing it from the
volume spec (a sketch of this struct change follows at the end of this description).
2. Modify the reconciler's volume reconstruction process (syncState). The current
workflow is that when kubelet restarts, syncState() is only called once before the
reconciler starts its loop.
a. If volume plugin supports reconstruction, it will use the
reconstructed volume spec information to update actual state as before.
b. If volume plugin cannot support reconstruction, it will use the
scanned mount path information to clean up the mounts.
In this PR, all the plugins still support reconstruction (except
glusterfs), so reconstruction of some plugins will still have issues.
The next PR will modify those plugins that cannot support reconstruction
well.
This PR addresses issues #52683 and #54108 (it includes the changes to update devicePath after the local attach finishes).
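The struct change from item 1, as a trimmed sketch (field types are simplified and the real struct carries more fields):

```go
package cache

// AttachedVolume, sketch of the change: the actual state of the world now
// remembers where the device was globally mounted, so UnmountDevice no longer
// needs the volume spec to reconstruct the path.
type AttachedVolume struct {
	VolumeName      string // unique volume name (simplified type)
	DevicePath      string // path the volume is attached at on the node
	DeviceMountPath string // newly stored: global mount path used by UnmountDevice
}
```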
Automatic merge from submit-queue (batch tested with PRs 57824, 58806, 59410, 59280).
2nd try at using a vanity GCR name
The 2nd commit here contains the changes relative to the reverted PR. Please focus review attention on that.
This is the 2nd attempt. The previous try (#57573) was reverted while we
figured out the regional mirrors (oops).
New plan: k8s.gcr.io is a read-only facade that auto-detects your source
region (us, eu, or asia for now) and pulls from the closest one. To publish
an image, push to staging-k8s.gcr.io and it will be synced to the regionals
automatically (similar to today). For now the staging repo is an alias to
gcr.io/google_containers (the legacy URL).
When we move off of Google-owned projects (working on it), we just
do a one-time sync and change the Google-internal config, and nobody
outside should notice.
We can, in parallel, change the auto-sync into a manual sync - send a PR
to "promote" something from staging, and a bot activates it. Nice and
visible, easy to keep track of.
xref https://github.com/kubernetes/release/issues/281
TL;DR:
* The new `staging-k8s.gcr.io` is where we push images. It is literally an alias to `gcr.io/google_containers` (the existing repo) and is hosted in the US.
* The contents of `staging-k8s.gcr.io` are automatically synced to `{asia,eu,us}-k8s.gcr.io`.
* The new `k8s.gcr.io` will be a read-only alias to whichever regional repo is closest to you.
* In the future, images will be promoted from `staging` to regional "prod" more explicitly and auditably.
```release-note
Use "k8s.gcr.io" for pulling container images rather than "gcr.io/google_containers". Images are already synced, so this should not impact anyone materially.
Documentation and tools should all convert to the new name. Users should take note of this in case they see this new name in the system.
```
Automatic merge from submit-queue (batch tested with PRs 52942, 58415).
Improve messaging on volume expansion
- we now provide a clear message to the user about what to do when cloud provider resizing is finished
and file system resizing is needed.
- add an event when resizing is successful
- Use PATCH both in controller-manager and kubelet for updating PVC status (see the sketch after this list)
- Remove code duplication between controller-manager and kubelet for updating PVC status
- Only remove conditions that are managed by resize controller
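A minimal sketch of the PATCH-based status update, using current client-go; the payload shape and helper name are illustrative, not the exact util this PR adds:

```go
package resizesketch

import (
	"context"
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchPVCStatus sends only the changed status fields instead of a full
// Update, so kubelet and controller-manager can share one code path.
func patchPVCStatus(ctx context.Context, c kubernetes.Interface, pvc *v1.PersistentVolumeClaim, newConditions []v1.PersistentVolumeClaimCondition) error {
	patch := map[string]interface{}{
		"status": map[string]interface{}{
			"conditions": newConditions,
		},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return fmt.Errorf("marshaling pvc status patch: %v", err)
	}
	_, err = c.CoreV1().PersistentVolumeClaims(pvc.Namespace).
		Patch(ctx, pvc.Name, types.MergePatchType, data, metav1.PatchOptions{}, "status")
	return err
}
```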
```release-note
Improve the messages users get during and after volume resizing.
```
- we now provide a clear message to the user about what to do when cloud provider resizing is finished
and file system resizing is needed.
- add an event when resizing is successful.
- Use Patch for updating PVCs in both kubelet and controller-manager
- Extract the PVC-updating util function into one place.
- Only update resize conditions on progress
Automatic merge from submit-queue (batch tested with PRs 57702, 57128).
format error message and remove duplicated event for resize volume failure
**What this PR does / why we need it**:
1. The `operationGenerator.resizeFileSystem` method returns errors generated by `volumeToMount.GenerateErrorDetailed`, and the calling code (`operationGenerator.GenerateMountVolumeFunc`) uses `volumeToMount.GenerateError` to generate a new error again, which makes the event message redundant and confusing. We should use `volumeToMount.GenerateError` only inside `operationGenerator.resizeFileSystem`; doing it again in the calling code is not necessary.
2. The `eventRecorderFunc` will record an event if `resizeFileSystem` returns an error, so we don't need to record an event inside `resizeFileSystem` itself.
**Release note**:
```release-note
NONE
```
/sig storage
/kind enhancement
Automatic merge from submit-queue (batch tested with PRs 55475, 57155, 57260, 57222).
Improved mount/attach error logging and added attach event.
Fixed the kubelet error message to be more descriptive. Added an Attach success event to help in debugging.
The attach event is helpful when the node may not have the correct information about attachment status; it allows the user to see whether Attach was run at all. If there is no success/failure attach message, we can infer that no attach was started at all.
Fixes #57217
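A minimal sketch of the added success event, assuming an EventRecorder is available; the reason string and message are illustrative:

```go
package attachsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// recordAttachSuccess emits a Normal event on the pod after Attach returns
// without error, so `kubectl describe pod` shows that the attach actually ran.
func recordAttachSuccess(recorder record.EventRecorder, pod *v1.Pod, volumeName, devicePath string) {
	recorder.Eventf(pod, v1.EventTypeNormal, "SuccessfulAttachVolume",
		"AttachVolume.Attach succeeded for volume %q (device path: %q)", volumeName, devicePath)
}
```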
Automatic merge from submit-queue
Add volume operation metrics to operation executor and PV controller
This PR implements the proposal for high level volume metrics https://github.com/kubernetes/community/pull/809
**Special notes for your reviewer**:
~Differences from proposal:~ all resolved
~"verify_volume" is now "verify_volumes_are_attached" + "verify_volumes_are_attached_per_node" + "verify_controller_attached_volume." Which of them do we want?~
~There is no "mount_device" metric because the MountVolume operation combines MountDevice and mount (plugin.Setup). Do we want to extract the mount_device metric or is it okay to keep mountvolume as one? For attachable volumes, MountDevice is the actual mount and Setup is a bindmount + setvolumeownership. For unattachable, mountDevice does not occur and Setup is an actual mount + setvolumeownership.~
~PV controller metrics I did not implement following the proposal at all. I did not change goroutinemap nor scheduleOperation. Because provisionClaimOperation does not return an error, so it's impossible for the caller to know if there is actually a failure worth reporting. So I manually create a new metric inside the function according to some conditions.~
@gnufied
I have tested the operationexecutor metrics but not provision & delete. Sample:
![screen shot 2017-08-02 at 15 01 08](https://user-images.githubusercontent.com/13111288/28889980-a7093526-7793-11e7-9aa9-ad7158be76fa.png)
**Release note**:
```release-note
Add error count and time-taken metrics for storage operations such as mount and attach, per-volume-plugin.
```
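A rough sketch of the kind of instrumentation this adds, using Prometheus directly; the metric and label names below are illustrative and may not match what finally merged:

```go
package volumemetrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	operationDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "storage_operation_duration_seconds",
			Help: "Time taken by volume operations, per plugin and operation.",
		},
		[]string{"volume_plugin", "operation_name"},
	)
	operationErrors = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "storage_operation_errors_total",
			Help: "Number of failed volume operations, per plugin and operation.",
		},
		[]string{"volume_plugin", "operation_name"},
	)
)

func init() {
	prometheus.MustRegister(operationDuration, operationErrors)
}

// recordOperation wraps a volume operation, recording its duration and, on
// failure, incrementing the error counter.
func recordOperation(plugin, operation string, op func() error) error {
	start := time.Now()
	err := op()
	operationDuration.WithLabelValues(plugin, operation).Observe(time.Since(start).Seconds())
	if err != nil {
		operationErrors.WithLabelValues(plugin, operation).Inc()
	}
	return err
}
```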
This change is a prerequisite for implementing the iSCSI attacher
and detacher.
In order to use CHAP authentication in the iSCSI plugin after
implementing the attacher and detacher, a secret is needed in
AttachDisk(), which is called from WaitForAttach().
To obtain the secret, pod information is required, but
WaitForAttach() doesn't pass pod information in.
This patch adds 'pod' as an argument of WaitForAttach()
and updates the drivers that implement WaitForAttach().
Fixes #48953
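A trimmed sketch of the interface change; only WaitForAttach is shown, and volume.Spec is reduced to a stub:

```go
package volumesketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// Spec is a stand-in for volume.Spec.
type Spec struct{ Name string }

// Attacher sketches the change described above: WaitForAttach now receives
// the pod, so a plugin (e.g. iSCSI with CHAP) can look up per-pod secrets
// from inside AttachDisk. Other methods of the real interface are omitted.
type Attacher interface {
	// Previously (sketch): WaitForAttach(spec *Spec, devicePath string, timeout time.Duration) (string, error)
	WaitForAttach(spec *Spec, devicePath string, pod *v1.Pod, timeout time.Duration) (string, error)
}
```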
Automatic merge from submit-queue (batch tested with PRs 46076, 43879, 44897, 46556, 46654)
Local storage plugin
**What this PR does / why we need it**:
Volume plugin implementation for local persistent volumes. A scheduler predicate will direct already-bound PVCs to the node that the local PV is on. PVC binding still happens independently.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Part of #43640
**Release note**:
```
Alpha feature: Local volume plugin allows local directories to be created and consumed as a Persistent Volume. These volumes have node affinity, and pods will only be scheduled to the node that the volume is on.
```
Automatic merge from submit-queue (batch tested with PRs 46450, 46272, 46453, 46019, 46367)
Move MountVolume.SetUp succeeded to debug level
This message is verbose and repeated over and over again in log files
creating a lot of noise. Leave the message in, but require a -v in
order to actually log it.
**What this PR does / why we need it**: Moves a verbose log message to actually be verbose.
**Which issue this PR fixes**: fixes #46364, fixes #29059
Automatic merge from submit-queue (batch tested with PRs 46383, 45645, 45923, 44884, 46294)
Node status updater now deletes the node entry in attach updates when the node is missing in the NodeInformer cache.
- Added RemoveNodeFromAttachUpdates as part of node status updater operations.
**What this PR does / why we need it**: Fixes the issue of unnecessary node status updates when a node is deleted.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #42438
**Special notes for your reviewer**: A unit test was added, but a more comprehensive test involving the attach/detach controller requires certain testing functionality that is currently absent and will require a larger effort. It will be added at a later time.
There is an edge case caused by the following steps:
1) A node is deleted and restarted. The node exists, but is not yet recognized by Kubernetes.
2) A pod requiring a volume attach is created with nodeName specifically set to this node.
This would make the pod stuck in ContainerCreating state. This is low-pri since it's a specific edge case that can be avoided.
**Release note**:
```release-note
NONE
```
When a volume's status is 'attaching', its attachments will be None, so the
controller-manager can't get the device path and emits spurious failure events.
But this is normal; let's fix it.
Automatic merge from submit-queue (batch tested with PRs 44590, 44969, 45325, 45208, 44714)
Use dedicated UnixUserID and UnixGroupID types
**What this PR does / why we need it**:
DRYs up type definitions by using the dedicated types in apimachinery
**Which issue this PR fixes**
#38120
**Release note**:
```release-note
UIDs and GIDs now use apimachinery types
```
When the attach/detach controller crashes and a pod with an attached PV is deleted
afterwards, the controller will never detach the pod's attached volumes. To
prevent this, the controller should try to recover the state from the node's
status.
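A minimal sketch of the recovery idea, reading the attached-volume list the previous controller instance left in the node status (the helper name is illustrative):

```go
package recoverysketch

import (
	v1 "k8s.io/api/core/v1"
)

// attachedVolumesFromNode seeds the actual state of the world after a
// controller restart: node.Status.VolumesAttached still lists what the
// previous controller instance reported, so the reconciler can detach
// volumes whose pods are already gone.
func attachedVolumesFromNode(node *v1.Node) map[v1.UniqueVolumeName]string {
	attached := make(map[v1.UniqueVolumeName]string, len(node.Status.VolumesAttached))
	for _, av := range node.Status.VolumesAttached {
		attached[av.Name] = av.DevicePath
	}
	return attached
}
```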
Automatic merge from submit-queue
Remove unused method from operation_generator
This is only a removal of the GerifyVolumeIsSafeToDetach [sic] method from operation_executor. The method is not called from anywhere; moreover, there is a private method named verifyVolumeIsSafeToDetach (which is being used). This looks like a copy-and-paste mistake that deserves to be cleaned up.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42522, 42545, 42556, 42006, 42631)
Fixes MountVolume.NewMounter errors not displayed to users via describe events
Fixes#42004
This fixes the problem of mount errors being eaten and not displayed to users again. Specifically, errors caught in MountVolume.NewMounter (like missing endpoints, etc.).
Current behavior for any mount failure:
```
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12m 12m 1 default-scheduler Normal Scheduled Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
10m 1m 5 kubelet, 127.0.0.1 Warning FailedMount Unable to mount volumes for pod "glusterfs-bb-pod1_default(67c9dfa7-f9f5-11e6-aee2-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
10m 1m 5 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". list of unattached/unmounted volumes=[glusterfsvol]
```
New Behavior:
For example, on glusterfs, endpoints were deliberately not created; now the correct message is displayed:
```
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
2m 2m 1 default-scheduler Normal Scheduled Successfully assigned glusterfs-bb-pod1 to 127.0.0.1
54s 54s 1 kubelet, 127.0.0.1 Warning FailedMount Unable to mount volumes for pod "glusterfs-bb-pod1_default(8edd2c25-fa09-11e6-92ae-5254003a59cf)": timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
54s 54s 1 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "default"/"glusterfs-bb-pod1". With error timed out waiting for the condition. list of unattached/unmounted volumes=[glusterfsvol]
2m 6s 814 kubelet, 127.0.0.1 Warning FailedMount MountVolume.NewMounter failed for volume "kubernetes.io/glusterfs/8edd2c25-fa09-11e6-92ae-5254003a59cf-glusterfsvol" (spec.Name: "glusterfsvol") pod "8edd2c25-fa09-11e6-92ae-5254003a59cf" (UID: "8edd2c25-fa09-11e6-92ae-5254003a59cf") with: endpoints "glusterfs-cluster" not found
```
This implements bulk volume polling using ideas presented by
Justin in https://github.com/kubernetes/kubernetes/pull/39564,
but it changes the implementation to use an interface
and doesn't affect other implementations.
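The shape of the idea, as a simplified sketch; the interface name and signature below are illustrative, not the exact one that was merged:

```go
package bulksketch

// NodeName and VolumeSpec are stand-ins for the real types.
type NodeName string

type VolumeSpec struct{ Name string }

// BulkVolumeVerifier: plugins that can answer "which of these volumes are
// attached to which nodes?" in one call implement an optional bulk interface,
// and the reconciler batches its polling through it. Plugins without it keep
// the old per-volume checks.
type BulkVolumeVerifier interface {
	// BulkVerifyVolumes takes every node's volumes at once and reports,
	// per node and per volume, whether the volume is still attached.
	BulkVerifyVolumes(volumesByNode map[NodeName][]*VolumeSpec) (map[NodeName]map[*VolumeSpec]bool, error)
}
```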
To safely mark a volume detached when the volume controller manager is used.
An example of one such problem:
1. pod is created, volume is added to desired state of the world
2. reconciler process starts
3. reconciler starts MountVolume, which is kicked off asynchronously via
operation_executor.go
4. MountVolume mounts the volume, but hasn't yet marked it as mounted
5. pod is deleted, volume is removed from desired state of the world
6. reconciler detects volume is no longer in desired state of world,
removes it from volumes in use
7. MountVolume tries to mark volume in use, throws an error because
volume is no longer in actual state of world list.
8. controller-manager tries to detach the volume, this fails because it
is still mounted to the OS.
9. EBS gets stuck indefinitely in busy state trying to detach.
Automatic merge from submit-queue
Curating Owners: pkg/volume
cc @jsafrane @spothanis @agonzalezro @justinsb @johscheuer @simonswine @nelcy @pmorie @quofelix @sdminonne @thockin @saad-ali @rootfs
In an effort to expand the existing pool of reviewers and establish a
two-tiered review process (first someone lgtms and then someone
experienced in the project approves), we are adding new reviewers to
existing owners files.
If You Care About the Process:
------------------------------
We did this by algorithmically figuring out who’s contributed code to
the project and in what directories. Unfortunately, that doesn’t work
well: people that have made mechanical code changes (e.g change the
copyright header across all directories) end up as reviewers in lots of
places.
Instead of using pure commit data, we generated an excessively large
list of reviewers and pruned based on all time commit data, recent
commit data and review data (number of PRs commented on).
At this point we have a decent list of reviewers, but it needs one last
pass for fine tuning.
Also, see https://github.com/kubernetes/contrib/issues/1389.
TLDR:
-----
As an owner of a sig/directory and a leader of the project, here’s what
we need from you:
1. Use PR https://github.com/kubernetes/kubernetes/pull/35715 as an example.
2. The pull request is editable; please edit the `OWNERS` file to
remove, from the **reviewers** section, the names of people that shouldn't be
reviewing code in the future. You probably do NOT need to modify
the **approvers** section. Names are sorted by relevance, using some
secret statistics.
3. Notify me if you want some OWNERS file to be removed. Being an
approver or reviewer of a parent directory makes you a reviewer/approver
of the subdirectories too, so not all OWNERS files may be necessary.
4. Please use ALIAS if you want to use the same list of people over and
over again (don't hesitate to ask me for help, or use the pull-request
above as an example)
Automatic merge from submit-queue
Add unit tests for operation_executor
Add unit test for `Unmount operations should start in parallel for all volume plugins`
cc: @saad-ali
This is a fix on top of #38124. In this fix, we move the logic that filters
out shared mount references into operation_executor's UnmountDevice
function, to avoid this logic being used by other volume types such as
rbd, azure, etc. This filter function should only be needed during
unmount device for the GCI image.
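A rough sketch of the kind of filtering described, under the assumption that shared references can be recognized by a path prefix; the helper name and the exact rule in the real code may differ:

```go
package unmountsketch

import "strings"

// filterSharedMountRefs drops mount references that live under the kubelet
// plugins directory, treating them as the "shared" bind mounts created by the
// image (e.g. on GCI) that should not block unmounting the device. Paths and
// names here are illustrative.
func filterSharedMountRefs(refs []string, kubeletPluginsDir string) []string {
	remaining := make([]string, 0, len(refs))
	for _, ref := range refs {
		if strings.HasPrefix(ref, kubeletPluginsDir) {
			continue // shared reference under the plugins dir; ignore it
		}
		remaining = append(remaining, ref)
	}
	return remaining
}
```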
Automatic merge from submit-queue
Reduce verbosity of volume reconciler
**What this PR does / why we need it**:
It reduces the log verbosity for attaching of volumes
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Reduce verbosity of volume reconciler when attaching volumes
```
Set the logging level for information about attaching of volumes from 1 to 4.
Otherwise the log is spammed with one line per 100ms while attaching is
in progress, and afterwards as long as the volume is attached.
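A minimal sketch of the change, using glog's verbosity gate (the message text is illustrative):

```go
package logsketch

import "github.com/golang/glog"

// logAttachProgress emits the per-volume progress line only at verbosity 4
// and above, so it no longer appears in default logs.
func logAttachProgress(volumeName, nodeName string) {
	glog.V(4).Infof("Volume %q is attached (or attaching) to node %q", volumeName, nodeName)
}
```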
Automatic merge from submit-queue
refactor DeviceOpened() so it won't return error if device doesn't exist
**What this PR does / why we need it**:
DeviceOpened() is called after the device is unmounted but before it is detached. Some volumes, such as rbd, don't support third-party detach; they have to be detached during unmount. Once detached, the device path vanishes. This causes a false alarm when DeviceOpened() is called.
The fix is to ignore the IsNotExist error.
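A minimal sketch of the fix, assuming a stat-based existence check (the real DeviceOpened implementations differ per platform):

```go
package deviceopensketch

import (
	"fmt"
	"os"
)

// deviceOpened treats an already-vanished device path (e.g. rbd detached
// itself during unmount) as "not opened" instead of returning an error.
func deviceOpened(devicePath string) (bool, error) {
	if _, err := os.Stat(devicePath); err != nil {
		if os.IsNotExist(err) {
			return false, nil // device already gone: not a failure
		}
		return false, fmt.Errorf("checking device %q: %v", devicePath, err)
	}
	// ... a real implementation would now check whether the device is held open ...
	return false, nil
}
```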
**Which issue this PR fixes** _(optional, in `fixes #<issue number>(, #<issue_number>, ...)` format, will close that issue when PR gets merged)_: fixes #
**Special notes for your reviewer**:
@kubernetes/sig-storage
**Release note**:
``` release-note
```
Signed-off-by: Huamin Chen <hchen@redhat.com>
In the master's volume reconciler, the information about which volumes are
attached to nodes is cached in the actual state of the world. However, this
information might be out of date if a node is terminated (the volume
is detached automatically). In this situation, the reconciler assumes the volume
is still attached and will not issue an attach operation when the node comes
back. Pods created on those nodes will fail to mount.
This PR adds the logic to periodically sync up the truth for attached volumes kept in the actual state cache. If a volume is no longer attached to the node, the actual state will be updated to reflect the truth. In turn, the reconciler will take actions if needed.
To avoid issuing many concurrent operations to the cloud provider, this PR
adds a batch operation to check whether a list of volumes is
attached to the node, instead of one request per volume.
More details are explained in PR #33760
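A simplified sketch of the batched check, with a stubbed attacher interface (the real signature uses volume.Spec and node types):

```go
package syncsketch

// VolumeSpec is a stand-in for volume.Spec.
type VolumeSpec struct{ Name string }

// Attacher is a trimmed stand-in for the plugin attacher interface.
type Attacher interface {
	// VolumesAreAttached reports, for each spec, whether it is still
	// attached to the given node.
	VolumesAreAttached(specs []*VolumeSpec, nodeName string) (map[*VolumeSpec]bool, error)
}

// syncNode issues one call per node with all volumes expected to be attached
// there, instead of one cloud-provider request per volume, and marks volumes
// detached in the actual state when the cloud says they are gone.
func syncNode(att Attacher, nodeName string, expected []*VolumeSpec, markDetached func(*VolumeSpec)) error {
	attached, err := att.VolumesAreAttached(expected, nodeName)
	if err != nil {
		return err
	}
	for spec, ok := range attached {
		if !ok {
			markDetached(spec) // update actual state; reconciler re-attaches if needed
		}
	}
	return nil
}
```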