Ulrar opened 10 months ago
Have you checked with drbdsetup status on the remaining nodes that they indeed have quorum? If they do have it, it seems like a bug in the HA controller.
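For example, something along these lines on one of the remaining nodes (the resource name and fields in the comments are illustrative):

  # Show the state of all DRBD resources on this node, including quorum
  drbdsetup status --verbose
  # A healthy resource should report quorum:yes in its local section, e.g.:
  #   pvc-xxxxxxxx role:Secondary
  #     disk:UpToDate quorum:yes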
Yes, they do lose quorum. For example just now:
pvc-e57930e5-6772-41e4-8c98-99105b77970a role:Secondary suspended:quorum
  disk:UpToDate quorum:no blocked:upper
  talos-00r-fu9 role:Secondary
    peer-disk:Diskless
  talos-813-fn2 connection:Connecting
It has an UpToDate node and a Diskless node, and yet it thinks it lost quorum. That's the only volume that lost quorum; the others look the same but with quorum, and the local node became Primary. Maybe it's something to do with that specific volume somehow.
Very weird. Probably something for the DRBD folks to look at.
If you just want to disable the taints, you can disable the HA Controller since 2.3.0: https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/reference/linstorcluster.md#spechighavailabilitycontroller
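Roughly, that would be a patch along these lines (this assumes the enabled field from the linked reference, and that your LinstorCluster resource is named linstorcluster):

  # Stop the operator from deploying the HA Controller (operator >= 2.3.0)
  kubectl patch linstorcluster linstorcluster --type merge \
    -p '{"spec":{"highAvailabilityController":{"enabled":false}}}'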
It looks like the TieBreaker / Diskless node doesn't count towards the quorum when changing Primary, so if the Primary for a volume goes down (even cleanly, it appears) the other one can't become Primary anymore and goes into a lost-quorum state.
That is probably a DRBD issue, but when the Primary goes down cleanly I wonder if the operator could make sure the Secondary switches first, while it still has quorum? Or maybe I should just go to a placement count of 3 to avoid this.
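If I do go with 3 replicas, I assume that's just a StorageClass with the placement count bumped, something like this (names here are placeholders):

  kubectl apply -f - <<EOF
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: linstor-replica-3        # placeholder name
  provisioner: linstor.csi.linbit.com
  parameters:
    placementCount: "3"            # three diskful replicas, so losing one node keeps quorum
  EOF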
Nothing fancy, I'm using 3 Talos nodes, with scheduling on control plane nodes (since there are only 3 nodes) and a replica count of 3.
But this actually seems to have fixed itself; I suspect DRBD 9.2.9 is what did it. Or at least, I used to run into this all the time, and since that upgrade I haven't seen it once, so I think this was it:
- Fix a kernel crash that is sometimes triggered when downing drbd resources in a specific, unusual order (was triggered by the Kubernetes CSI driver)
Check if the right DRBD version is in use: cat /proc/drbd, should report > 9.0.0.
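For example (the version line is just illustrative):

  # The loaded DRBD kernel module reports its version here
  cat /proc/drbd
  # expected to start with something like: version: 9.2.9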
I'm not sure what exact steps you run when you cordon; could you please elaborate a bit on that?
Hi,
I have 3 nodes, and a placementCount of 2. After quite a bit of fiddling, the third node got 'TieBreaker' volumes (or Diskless, for some) set up on it, so I'd assume I'm okay to lose one node.
But sadly, as soon as any of the nodes goes down, I lose quorum and the remaining two nodes get tainted with drbd.linbit.com/lost-quorum:NoSchedule. I have no idea why the above leads to losing quorum; there are clearly two connected nodes (even if one is the TieBreaker).
I'm not sure what I'm doing wrong, but tainting the nodes like that makes recovering pretty difficult, as most pods won't get re-scheduled. Depending on what went down, I sometimes have to manually untaint a node to let pods come back up, then slowly recover by hand, using drbdadm to decide which replica to keep for every volume.
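For the record, clearing the taint by hand is roughly this (the node name being whichever one got stuck):

  # Remove the lost-quorum taint so pods can be scheduled on the node again
  kubectl taint nodes <node-name> drbd.linbit.com/lost-quorum:NoSchedule-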
Thanks