embik opened this issue 1 year ago
I will work on this issue upstream (https://github.com/kubevirt/csi-driver/issues/83); it should not block the KKP 2.22 release.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with `/remove-lifecycle stale`.
If this issue is safe to close now please do so with `/close`.
/lifecycle stale
/remove-lifecycle stale
/remove-priority high
/milestone clear
What happened?
While testing #11736, I created a PVC to make sure that evicting a `virt-launcher` pod would allow me to reschedule workloads with storage within the KubeVirt user cluster. However, I noticed that a Pod trying to mount a volume that was attached to a node evicted on the KubeVirt infra side (the node-eviction-controller drains and deletes the VM and the `Node` object) is stuck with:

I looked for `volumeattachment` resources and found this one:

This references a node that no longer exists. Looking at the volume attachment in detail, it has a deletion timestamp, and this is its status:
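For reference, a stale attachment like this can be located with standard kubectl queries along these lines (the resource and node names below are placeholders, not the actual objects from my cluster):

```sh
# List all VolumeAttachments with the node each one claims to be attached to.
kubectl get volumeattachments \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,ATTACHED:.status.attached

# Inspect a single attachment in full: deletionTimestamp, finalizers,
# and any detach errors reported in .status.
kubectl get volumeattachment <attachment-name> -o yaml

# Cross-check whether the referenced node still exists in the user cluster.
kubectl get node <node-name>
```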
Expected behavior
The volume can be re-mounted on another node, since both the initial Pod and the Node it ran on have been terminated.
How to reproduce the issue?
Evict the `virt-launcher` Pod that is hosting the Node that our `app` Pod got scheduled to.
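A rough sketch of that eviction step on the KubeVirt infra cluster follows; the namespace, label, and Pod names are placeholders, and the exact mechanism (drain vs. plain Pod deletion) may differ from what I used:

```sh
# On the KubeVirt infra cluster: find the virt-launcher Pod that backs the
# user-cluster Node our app Pod (with the PVC) is running on. virt-launcher
# Pods are typically labelled kubevirt.io=virt-launcher.
kubectl -n <user-cluster-namespace> get pods -l kubevirt.io=virt-launcher -o wide

# Delete it to trigger the eviction / node-replacement path (assumption: this
# reproduces the same flow as the controller-driven eviction).
kubectl -n <user-cluster-namespace> delete pod <virt-launcher-pod-name>
```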
How is your environment configured?
Provide your KKP manifest here (if applicable)
What cloud provider are you running on?
KubeVirt
What operating system are you running in your user cluster?
Ubuntu 22.04
Additional information