mirror of https://github.com/k3s-io/k3s
Automatic merge from submit-queue

Allow attach of volumes to multiple nodes for vSphere

This is a fix for issue #50944, in which a volume cannot be attached to a new node after the node it was previously attached to is powered off.

Current behaviour: One of the cluster worker nodes was powered off in vCenter. Pods running on this node were rescheduled onto different nodes but got stuck in ContainerCreating. Attaching the volume on the new node failed with "Multi-Attach error for volume pvc-xxx, Volume is already exclusively attached to one node and can't be attached to another", so the application running in the pod had no data available because the volume was not attached to the new node. Since the volume was still attached to the powered-off node, every attempt to attach it on the new node failed with the same Multi-Attach error. This lasted for 6 minutes, until the attach/detach controller forcefully detached the volume from the powered-off node. Only after those 6 minutes, once the volume was detached from the powered-off node, was it successfully attached on the new node and the application's data available again.

What is expected to happen: The attach/detach controller should go ahead with attaching the volume on the new node where the pod was provisioned, instead of waiting for the volume to be detached from the powered-off node. It is OK to eventually detach the volume from the powered-off node after 6 minutes. This keeps application downtime low and brings pods up as soon as possible.

The current fix exempts vSphere volumes/persistent volumes from the multi-attach check in the attach/detach controller.

@jingxu97 @saad-ali: Can you please take a look at it?
@tusharnt @divyenpatel @rohitjogvmw @luomiao

```release-note
Allow attach of volumes to multiple nodes for vSphere
```
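The exemption described above can be sketched as a small decision function. This is a minimal illustration, not the actual controller code: the type `VolumeSpec`, its fields, and the function `allowMultiAttach` are all hypothetical names chosen for this example; the real attach/detach controller inspects the volume plugin of the persistent volume spec rather than a boolean flag.

```go
package main

import "fmt"

// VolumeSpec is a simplified, hypothetical stand-in for the controller's
// volume representation; the field names here are illustrative only.
type VolumeSpec struct {
	Name        string
	IsVSphere   bool     // whether the volume is backed by the vSphere plugin
	AttachedTo  []string // nodes the volume is currently attached to
}

// allowMultiAttach sketches the exemption described in the fix: the
// attach/detach controller normally refuses to attach a volume that is
// already exclusively attached to another node, but vSphere volumes skip
// that check so a rescheduled pod can start without waiting the ~6 minutes
// for the stale attachment on the powered-off node to be force-detached.
func allowMultiAttach(v VolumeSpec, newNode string) bool {
	if len(v.AttachedTo) == 0 {
		return true // no existing attachment, nothing blocks the attach
	}
	if v.IsVSphere {
		return true // exemption: attach immediately; detach the old node later
	}
	return false // default: exclusive attach, must detach first
}

func main() {
	v := VolumeSpec{Name: "pvc-xxx", IsVSphere: true, AttachedTo: []string{"node-1"}}
	fmt.Println(allowMultiAttach(v, "node-2")) // vSphere: attach proceeds

	v.IsVSphere = false
	fmt.Println(allowMultiAttach(v, "node-2")) // other plugins: blocked
}
```

Under this sketch, a vSphere-backed volume attaches to the new node immediately while the old attachment is cleaned up asynchronously, which is the behaviour the PR asks for.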