In a test cluster where `kv.raft_log.synchronization.unsafe.disabled` was enabled (i.e. process crashes will lose Raft data), we saw a range become unavailable, stalling the workload. CPU was persistently spinning at 100% on the Raft leader (n2) and one follower (n6), with the following CPU profiles:
n2 (leader): [CPU profile image]
n6 (follower): [CPU profile image]
Notice the call to `raftLog.findConflictByTerm()` on the leader. This call only happens when the follower has rejected a `MsgApp`, via a `MsgAppResp` with `Reject: true`:

https://github.com/etcd-io/raft/blob/ee0fe9da492888b55fe183cf1a42931ad551ec6b/raft.go#L1339-L1459

This happens when the follower fails to append a set of log entries, e.g. because the follower is lacking a prefix of the log:

https://github.com/etcd-io/raft/blob/ee0fe9da492888b55fe183cf1a42931ad551ec6b/raft.go#L1738-L1770

The logic here also involves silently swallowing an error when attempting to read the term from storage (although we can't have hit this on the follower, because that would return a 0 term hint to the leader, who would then not call `findConflictByTerm()`):

https://github.com/etcd-io/raft/blob/1df762940b8c309a27cfafb086d767c0c7e3f58f/log.go#L180-L187
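For concreteness, here is a minimal Go sketch of the shape of that conflict scan, paraphrased from the linked code rather than copied verbatim (the toy `termAt` storage helper and the example log are invented for illustration): starting from the follower's rejection hint, the leader walks its log backwards to the latest entry whose term does not exceed the hinted term, and any storage error along the way is swallowed into a 0 term hint.

```go
package main

import (
	"errors"
	"fmt"
)

// errCompacted stands in for etcd-io/raft's ErrCompacted: the entry at the
// requested index is no longer available in storage.
var errCompacted = errors.New("entry compacted")

// termAt reads the term of the entry at index from a toy log in which
// terms[i] holds the term of entry firstIndex+i; anything below firstIndex
// has been compacted away.
func termAt(terms []uint64, firstIndex, index uint64) (uint64, error) {
	if index < firstIndex || index >= firstIndex+uint64(len(terms)) {
		return 0, errCompacted
	}
	return terms[index-firstIndex], nil
}

// findConflictByTerm paraphrases the linked etcd-io/raft logic: starting at
// the follower's rejection hint (index, term), walk the leader's log
// backwards to the latest entry whose term is <= term.
func findConflictByTerm(terms []uint64, firstIndex, index, term uint64) (uint64, uint64) {
	for ; index > 0; index-- {
		ourTerm, err := termAt(terms, firstIndex, index)
		if err != nil {
			// The storage error is swallowed here: return the current
			// index with a 0 term hint, i.e. "term unknown".
			return index, 0
		}
		if ourTerm <= term {
			return index, ourTerm
		}
	}
	return 0, 0
}

func main() {
	// Leader log: entries 5..9 with terms 2,2,3,5,5; entries below 5 are compacted.
	terms := []uint64{2, 2, 3, 5, 5}
	// Follower rejected an append at index 9, claiming term 3 at that index:
	// the scan lands on index 7, the last leader entry with term <= 3.
	fmt.Println(findConflictByTerm(terms, 5, 9, 3)) // 7 3
	// A hint inside the compacted prefix takes the error path: 0 term hint.
	fmt.Println(findConflictByTerm(terms, 5, 3, 1)) // 3 0
}
```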
It seems plausible that the data loss induced by `kv.raft_log.synchronization.unsafe.disabled` somehow resulted in either an append loop or a slow path being hit (e.g. there is a fallback here to probing indexes one by one, although it does not seem like we hit it here), where the leader continually sends `MsgApp`s to the follower, who in turn rejects them.
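To make that suspected failure mode concrete, here is a toy model of the probe loop (all names here are hypothetical, not the real etcd-io/raft API): the follower rejects any append whose previous index it does not have and hints at its own last index, and the leader backs off using that hint. Healthy hint handling converges in one round trip; if the hint failed to move the leader's next index backwards, or the follower's state kept regressing, the same `MsgApp`/`MsgAppResp` exchange would repeat indefinitely, matching the observed CPU spin.

```go
package main

import "fmt"

// follower is a toy model of a follower that lost a suffix of its log
// (e.g. after an unsynced crash); illustrative only, not the etcd-io/raft API.
type follower struct {
	lastIndex uint64
}

// handleAppend rejects any append whose previous index the follower does not
// have, hinting at its own last index (the MsgAppResp{Reject: true} shape).
func (f *follower) handleAppend(prevIndex uint64) (rejected bool, hint uint64) {
	if prevIndex > f.lastIndex {
		return true, f.lastIndex
	}
	return false, 0
}

func main() {
	f := &follower{lastIndex: 90} // follower's log now ends at 90
	next := uint64(101)           // leader still believes the follower has up to 100

	// Leader probe loop: send a MsgApp with prev = next-1, and on rejection
	// back off using the follower's hint. A bug that failed to lower next
	// would repeat this exchange forever, spinning CPU on both ends.
	for i := 0; i < 5; i++ {
		rejected, hint := f.handleAppend(next - 1)
		fmt.Printf("probe prev=%d rejected=%v hint=%d\n", next-1, rejected, hint)
		if !rejected {
			break
		}
		next = hint + 1
	}
}
```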
We should make sure the behavior here is sound, and improve observability when this happens.
Jira issue: CRDB-32732
Epic: CRDB-39898