mirror of https://github.com/k3s-io/k3s
Merge pull request #48457 from cofyc/rbd_error
Automatic merge from submit-queue (batch tested with PRs 48425, 41680, 48457, 48619, 48635)

"rbd: image xxx is locked by other nodes" is misleading

**What this PR does / why we need it**:

For an RWO PV, the RBD plugin tries to fence it first, but many situations can cause the lock attempt to fail, e.g.

- userSecret is incorrect
- monitor addresses are incorrect, or the node temporarily cannot reach the Ceph cluster
- the image is locked by other nodes
- maybe more...

So the original "image xxx is locked by other nodes" message is incorrect in some cases and misleading during diagnosis. This PR changes the error message to be correct and not misleading; more detailed error descriptions may be added later.

**Special notes for your reviewer**:

New FailedMount event example when the RBD plugin cannot lock the image:

```
... FailedMount MountVolume.SetUp failed for volume "pvc-ee37a9c8-608e-11e7-b3a7-000c291fbe71" : rbd: failed to lock image kubernetes-dynamic-pvc-ee3b9911-608e-11e7-97b6-000c291fbe71 (maybe locked by other nodes), error exit status 22
```

**Release note**:

```release-note
NONE
```
commit
494ffa4650
```diff
@@ -269,7 +269,7 @@ func (util *RBDUtil) AttachDisk(b rbdMounter) error {
 	// fence off other mappers
 	if err = util.fencing(b); err != nil {
-		return fmt.Errorf("rbd: image %s is locked by other nodes", b.Image)
+		return fmt.Errorf("rbd: failed to lock image %s (maybe locked by other nodes), error %v", b.Image, err)
 	}
 	// rbd lock remove needs ceph and image config
 	// but kubelet doesn't get them from apiserver during teardown
```