DipanshuSehjal opened this issue 2 years ago
Taints mostly occur when nodes are rebooting, for example during a node upgrade and reboot.
That is expected, as the taints (at least the drbd.linbit.com/lost-quorum ones) are added when one node looks unreachable from another. During a reboot this is obviously the case. The question is why the taint is not removed after the node is back online and the satellite + DRBD are running again.
In the above case, it's probably related to the outdated pvc-646fa87b-aeb2-4c51-924d-7019d1a5f0b1 resource. Could you please collect the kernel logs on all 3 nodes for that resource: journalctl -t kernel --grep pvc-646fa87b-aeb2-4c51-924d-7019d1a5f0b1.
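For example, one way to grab those logs from all three nodes in one go, assuming plain SSH access to the nodes (the node names below are placeholders):

for node in worker-1 worker-2 worker-3; do
  # same journalctl query as above, written to one log file per node
  ssh "$node" "journalctl -t kernel --grep pvc-646fa87b-aeb2-4c51-924d-7019d1a5f0b1" > "kernel-$node.log"
done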
Lastly, there is one drbd.linbit.com/force-io-error taint, which would indicate that one of the nodes has the DRBD device open but is currently trying to become Secondary. Could you check which node has that taint and see what's up with that resource? The output of drbdsetup status -v on that node should also show force-io-error:yes.
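A quick sketch for finding the tainted node and inspecting the resource there (the jsonpath query is just one way to list taint keys per node):

# list every node together with its taint keys, then filter for the force-io-error taint
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}' | grep drbd.linbit.com/force-io-error

# on the node found above; force-io-error:yes should show up for the affected resource
drbdsetup status -v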
Same problem here with v1.1.1, but no other taints except drbd.linbit.com/lost-quorum on 2 nodes. What's strange is that these nodes both hold replicas of an UpToDate volume (the same volume). 🤷
Which DRBD version are you using? And can you check with drbdadm status, perhaps they are reporting quorum:no. I think there is a bug in DRBD with the latest releases that could cause this issue.
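For example, a quick check could look like this (the grep context is only there to show which resource the quorum:no line belongs to):

drbdadm --version                      # DRBD kernel module and utils versions
drbdadm status | grep -B3 'quorum:no'  # resources currently reporting lost quorum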
You're right, drbdadm status gives a quorum:no 😭
DRBDADM_BUILDTAG=GIT-hash:\ 409097fe02187f83790b88ac3e0d94f3c167adab\ build\ by\ @buildsystem\,\ 2022-09-19\ 12:15:08
DRBDADM_API_VERSION=2
DRBD_KERNEL_VERSION_CODE=0x090201
DRBD_KERNEL_VERSION=9.2.1
DRBDADM_VERSION_CODE=0x091600
DRBDADM_VERSION=9.22.0
Could be related to https://github.com/LINBIT/drbd/issues/52
Ok, so messing around with more or fewer arbiters (kubectl linstor resource create worker-XYZ pvc-XYZ --drbd-diskless) allows me to toggle the quorum value on (when there are fewer than 2 diskful + 3 diskless) and off (when there are at least 2 diskful + 3 diskless).
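For reference, a sketch of that back-and-forth (node and PVC names are placeholders):

kubectl linstor resource create worker-XYZ pvc-XYZ --drbd-diskless   # add a diskless arbiter replica
kubectl linstor resource delete worker-XYZ pvc-XYZ                   # remove it again
drbdadm status pvc-XYZ                                               # on a diskful node, watch whether quorum:no appears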
The trick isn't stable, as the operator will at some point delete the extra arbiters. A better solution, while not ideal, is to force the quorum on the volume to the number of data nodes.
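A minimal sketch of that workaround, assuming the volume is LINSTOR-managed and that your LINSTOR version exposes the DRBD quorum option through drbd-options (check kubectl linstor resource-definition drbd-options --help first; the resource name and value are placeholders):

# pin quorum to the number of diskful replicas (here 2) instead of the default majority
kubectl linstor resource-definition drbd-options pvc-XYZ --quorum 2

# confirm the option reached DRBD on one of the diskful nodes
drbdsetup show pvc-XYZ | grep quorum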
Version - HA controller 1.1.0
We have often seen that taints are not removed from nodes, so pods are not scheduled. Moreover, the taints come back on the nodes as soon as you remove them manually. Taints mostly occur when nodes are rebooting, for example during a node upgrade and reboot. Additionally, both replicas of 2 resources also went into the Outdated state.
For instance,
Settings defined in the storage class as per the HA-controller requirements -