glycerine closed this issue 7 years ago
As I understand it, this is normal Raft behavior: the cluster will keep trying to contact the node until the node is explicitly removed. This behavior comes from the hashicorp/raft library.
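To illustrate why the log keeps complaining, here is a toy sketch of that membership rule. This is not the hashicorp/raft API; the `cluster`, `heartbeatTargets`, and `removePeer` names are hypothetical, invented purely to show that a dead node stays in the configuration (and keeps being contacted) until it is explicitly removed.

```go
package main

import "fmt"

// Toy model of Raft cluster membership (illustration only; hypothetical
// names, not the hashicorp/raft API). A leader keeps every peer in its
// configuration and retries contact indefinitely; a dead node only stops
// being contacted once it is explicitly removed from the configuration.
type cluster struct {
	peers map[string]bool // address -> present in configuration
}

// heartbeatTargets returns every configured peer; unreachable peers are
// still included, which is why the log keeps complaining about them.
func (c *cluster) heartbeatTargets() []string {
	var out []string
	for addr := range c.peers {
		out = append(out, addr)
	}
	return out
}

// removePeer models an explicit membership change (the effect of a
// command like RAFTREMOVEPEER).
func (c *cluster) removePeer(addr string) {
	delete(c.peers, addr)
}

func main() {
	c := &cluster{peers: map[string]bool{":7480": true, ":7483": true}}
	fmt.Println(len(c.heartbeatTargets())) // dead node still contacted: 2 targets
	c.removePeer(":7483")
	fmt.Println(len(c.heartbeatTargets())) // after explicit removal: 1 target
}
```

The point of the sketch: nothing in the retry loop ever gives up on a peer; only the explicit membership change makes the complaints stop.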
Summit has the RAFTREMOVEPEER command, which will forcibly remove the node:
RAFTREMOVEPEER :7483
The log should quiet down shortly after.
Thanks, Josh. That clarifies the situation.
I was checking the Raft fault-tolerance functionality at 56ec0609e35bc528c2789854038cbdb675e62e97.
It seems fine to complain a couple of times, but once the new leader sees the same server count again, it should certainly stop complaining about losing an old node.