mirror of https://github.com/k3s-io/k3s
Add a mutex to guard SetUpAt() and TearDownAt() calls - they should not run in parallel.

There is a race in these calls when two pods use the same volume, one of them dying and the other one starting. TearDownAt() checks that a volume is not needed by any pods and detaches the volume. It does so by counting how many times the volume is mounted (the GetMountRefs() call below). When SetUpAt() of the starting pod has already attached the volume but not yet mounted it, TearDownAt() of the dying pod will detach it - GetMountRefs() does not count that volume. These two threads run in parallel:

    dying pod.TearDownAt("myVolume")      starting pod.SetUpAt("myVolume")
      |                                     |
      |                                     AttachDisk("myVolume")
      refs, err := mount.GetMountRefs()     Unmount("myDir")
      if refs == 1 {                        |
      |                                     Mount("myVolume", "myDir")
      |                                     |
      |                                     |
      DetachDisk("myVolume")                |
      |                                     start containers - OOPS! The volume is detached!
      |
      finish the pod cleanup

Also, add some logs to the cinder plugin for easier debugging in the future, add a test, and update the fake mounter to know about bind mounts.
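The fix described above can be sketched as a single mutex that serializes the attach/mount and unmount/detach sequences. The following is a minimal, self-contained illustration, not the actual plugin code: `fakeVolume`, `simulate`, and the counter fields are hypothetical stand-ins for the real attach/detach and mount bookkeeping.

```go
package main

import (
	"fmt"
	"sync"
)

// fakeVolume is a hypothetical stand-in for a cinder-style volume.
// One mutex guards both SetUpAt and TearDownAt, so the
// attach+mount and unmount+detach sequences can never interleave.
type fakeVolume struct {
	mu        sync.Mutex // serializes SetUpAt and TearDownAt
	attached  bool
	mountRefs int
}

func (v *fakeVolume) SetUpAt(dir string) {
	v.mu.Lock()
	defer v.mu.Unlock()
	if !v.attached {
		v.attached = true // stands in for AttachDisk("myVolume")
	}
	v.mountRefs++ // stands in for Mount("myVolume", dir)
}

func (v *fakeVolume) TearDownAt(dir string) {
	v.mu.Lock()
	defer v.mu.Unlock()
	v.mountRefs-- // stands in for Unmount(dir)
	if v.mountRefs == 0 {
		v.attached = false // stands in for DetachDisk("myVolume")
	}
}

// simulate runs the dying pod's TearDownAt and the starting pod's
// SetUpAt concurrently, as in the race described in the commit message.
func simulate() (bool, int) {
	v := &fakeVolume{}
	v.SetUpAt("dyingPodDir") // dying pod still holds the volume

	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); v.TearDownAt("dyingPodDir") }()
	go func() { defer wg.Done(); v.SetUpAt("startingPodDir") }()
	wg.Wait()
	return v.attached, v.mountRefs
}

func main() {
	attached, refs := simulate()
	// With the mutex, either order of the two calls leaves the volume
	// attached with exactly one mount reference for the starting pod.
	fmt.Println("attached:", attached, "refs:", refs)
}
```

Whichever goroutine wins the lock, the other sees a consistent reference count, so the starting pod never ends up with a detached volume.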
Contents of this directory:

- aws_ebs
- cephfs
- cinder
- downwardapi
- empty_dir
- fc
- flexvolume
- flocker
- gce_pd
- git_repo
- glusterfs
- host_path
- iscsi
- nfs
- persistent_claim
- rbd
- secret
- util
- doc.go
- metrics_du.go
- metrics_du_test.go
- metrics_nil.go
- metrics_nil_test.go
- plugins.go
- plugins_test.go
- testing.go
- util.go
- util_test.go
- volume.go
- volume_linux.go
- volume_unsupported.go