* If a pod with a GCE PD is created and deleted in rapid succession, it may fail to attach/mount correctly leaving PD data inaccessible (or corrupted in the worst case). (http://issue.k8s.io/11231#issuecomment-122049113)
  * Suggested temporary workaround: introduce a 1-2 minute delay between deleting and recreating a pod with a PD on the same node (see the sketch below).
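
    A minimal sketch of this workaround, assuming the pod is named `test-pd` and is defined in a hypothetical `test-pd.yaml` manifest:

    ```sh
    # Delete the pod that mounts the GCE PD.
    kubectl delete pod test-pd

    # Wait 1-2 minutes so the PD has time to unmount and detach cleanly from the node.
    sleep 120

    # Recreate the pod from its manifest.
    kubectl create -f test-pd.yaml
    ```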
* Explicit errors while detaching a GCE PD could prevent the PD from ever being detached (#11321)
* GCE PDs may sometimes fail to attach (#11302)
* If multiple Pods use the same RBD volume in read-write mode, data on the RBD volume can become corrupted. This problem has been seen in environments where both the apiserver and etcd rebooted and Pods were rescheduled.
  * A workaround is to ensure that no other Ceph client is using the RBD volume before mapping the RBD image in read-write mode. For example, `rados -p poolname listwatchers image_name.rbd` lists the RBD clients that are mapping the image (see the sketch below).
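
    A minimal sketch of this check, reusing the pool and image names from the command above and a hypothetical pod manifest `rbd-rw-pod.yaml`:

    ```sh
    # List clients currently watching (i.e. mapping) the RBD image; substitute
    # your actual pool and image names.
    rados -p poolname listwatchers image_name.rbd

    # Only once the command above reports no watchers, create the pod that maps
    # the image in read-write mode.
    kubectl create -f rbd-rw-pod.yaml
    ```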