GeertvanHorrik opened this issue 5 months ago
A cluster reboot (stop + start) solves the issue.
Logs:
AttachVolume.Attach failed for volume "xyz" : CSINode aks-default-12345678-vmss000008 does not contain driver disk.csi.azure.com
Then, after a while (once the CSI pods are up):
AttachVolume.Attach succeeded for volume "xyz"
But restarting an AKS cluster is tricky (it takes a long time, and there is no guarantee that resources will be available to start it again). What would be a better way to solve this without a full cluster restart?
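As a less disruptive first step than a full stop/start, something like this might work (a sketch, assuming the managed Azure Disk CSI driver runs as the csi-azuredisk-node DaemonSet in kube-system; the node name is the one from the logs above):

```
# Does the node's CSINode object advertise the Azure Disk driver?
kubectl get csinode aks-default-12345678-vmss000008 -o yaml

# Are the CSI node pods running on the affected node?
kubectl -n kube-system get pods -l app=csi-azuredisk-node -o wide

# Restart just the CSI node pods instead of the whole cluster
kubectl -n kube-system rollout restart daemonset csi-azuredisk-node
```

If the CSINode object lists disk.csi.azure.com again once the pods come back, attaches should start succeeding without rebooting the cluster.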
aks-default-12345678-vmss000002 is not up after the node pool auto upgrade; that's the reason why the disk attach failed in the beginning. Does this issue always happen, or is it mitigated automatically after a while? @GeertvanHorrik
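A quick way to check whether the node is actually up after the upgrade (a sketch; the resource group and scale set names are placeholders):

```
# Is the node registered and Ready in Kubernetes?
kubectl get node aks-default-12345678-vmss000002 -o wide

# Does the underlying VMSS instance exist and report a running state?
az vmss get-instance-view \
  --resource-group MC_myResourceGroup_myCluster_myRegion \
  --name aks-default-12345678-vmss \
  --instance-id 2 \
  --query statuses
```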
> aks-default-12345678-vmss000002 is not up after the node pool auto upgrade; that's the reason why the disk attach failed in the beginning
We waited for more than 8 hours, but our cluster was down the whole time (so that's not really an option). We tried scaling the node pool (adding nodes, removing nodes, manually draining, etc.). Only when we stopped and started the cluster was everything good again.
It is happening again. 15 minutes ago the agent pool was updated with the latest image, and now the driver is not finding the disks.
Pods are being scheduled on aks-default-xxxxxxx-vmss000008.
Then getting this error from the driver:
AttachVolume.Attach failed for volume "xxx-pv" : rpc error: code = NotFound desc = failed to get azure instance id for node "aks-default-xxxxxxx-vmss000008" (instance not found)
Cluster reboot immediately fixes the issue (but there is no guarantee we can start the cluster again after stopping it, so it's not an ideal solution).
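The "instance not found" part suggests the node object may still point at a VM the cloud provider can no longer see. A sketch for comparing the two views (resource group and scale set names are placeholders):

```
# What Kubernetes thinks exists
kubectl get nodes -o wide

# What the scale set actually has
az vmss list-instances \
  --resource-group MC_myResourceGroup_myCluster_myRegion \
  --name aks-default-xxxxxxx-vmss \
  --query "[].{name:name, state:provisioningState}" -o table

# If a node object is left over from before the upgrade, deleting it stops
# the attach controller from targeting the missing instance; the kubelet
# re-registers the node when it is actually up
kubectl delete node aks-default-xxxxxxx-vmss000008
```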
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This is still an issue.
/remove-lifecycle stale
What happened:
Azure AKS applied an auto-update to the node pool. Afterwards, the pods do not come up, failing with the AttachVolume.Attach errors shown above.
I tried redeploying the app: the PV and PVC are successfully bound again, but the pods still cannot attach to the volume.
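To pinpoint where the attach fails, something like this helps (the pod name is a placeholder):

```
# Attach/mount errors show up as events on the pod
kubectl describe pod <pod-name>

# All recent attach failures in the namespace
kubectl get events --field-selector reason=FailedAttachVolume
```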
What you expected to happen:
After an update, the disks should still be found and attachable.
How to reproduce it:
This has happened a few times now. Sometimes we can delete the PVC and PV and redeploy our app, but it's a burden to do this whenever Azure automatically updates the node pool images.
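A lighter workaround than deleting the PVC and PV may be to clear stale VolumeAttachment objects, which can keep a volume pinned to the old node (a sketch; the attachment name is a placeholder):

```
# Find attachments still referencing a pre-upgrade node
kubectl get volumeattachments

# Remove the stale attachment so the controller can re-attach on the new node
kubectl delete volumeattachment <attachment-name>
```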
Anything else we need to know?:
Using AKS with managed disks.
Environment:
Kubernetes version (use kubectl version): 1.28.9