Closed: hwasp closed this issue 5 years ago
@hwasp
On the 1.11 branch, this issue is fixed by https://github.com/kubernetes/kubernetes/pull/70291. On newer Kubernetes releases, this is a known issue, and the behavior is common across other providers; please refer to https://github.com/kubernetes/kubernetes/pull/70291#issuecomment-461268669
CC: @SandeepPissay
What happened:
We gracefully powered off a cluster node that had 10 vSphere volumes attached, to test whether the volumes would be properly moved to a different node. The volumes were attached to the new node that received the pods and appear to work well, but the old node still has the volumes attached and cannot be powered back on, because the volumes are now locked by the new node running the pods.
What you expected to happen:
We expected the volumes to be detached from the old node so that no locks remain. The driver does not seem to be able to detach the volumes from the powered-off node.
How to reproduce it (as minimally and precisely as possible):
1. Power off a cluster node that has volumes attached; the volumes get attached to another node.
2. Try to power the first node back on.
3. Power-on fails, because some of its disks are in use (locked) by the new node (see the client-go sketch below for one way to confirm the stale attachment).
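Not part of the original report: one way to confirm the stale attachment from the Kubernetes side is to list what each node still reports in `node.status.volumesAttached`. A minimal client-go sketch is below; the kubeconfig path is a placeholder, and it assumes a recent client-go whose `List` call takes a context.

```go
// Sketch: print the volumes each node still reports as attached,
// so a stale attachment on a powered-off node can be spotted.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List all nodes and print what the API server records in
	// node.status.volumesAttached for each of them.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		fmt.Printf("node %s:\n", node.Name)
		for _, vol := range node.Status.VolumesAttached {
			fmt.Printf("  attached: %s (device %s)\n", vol.Name, vol.DevicePath)
		}
	}
}
```

In the scenario described above, the powered-off node would be expected to keep listing the vSphere volumes until they are actually detached.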
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): v1.11.2
- Kernel (`uname -a`):

/kind bug