basho / riak_repl

Riak DC Replication
Apache License 2.0

Better handling of fscoordinator communication on notification and clearing of rt_dirty #304

Open · jcapricebasho opened this issue 11 years ago

jcapricebasho commented 11 years ago

(All links and line references are to the 1.3.1 tag.)

There are two areas in the riak_repl code that could result in the fscoordinator not knowing about a node with an rt_dirty count > 0.

The first is in the rt_dirty function of riak_repl_stats [0]. Once a node is marked as dirty, the notification of that event to the fscoordinator can fail. Currently, the failure is only logged at the debug level:

lager:debug("Failed to notify coordinator of rt_dirty status")

However, logging this event at warning would make the failure visible at lager's info level, which would make it easier to instruct clients to manually clear the rt_dirty flag using riak_repl_stats:clear_rt_dirty().
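
A minimal sketch of that change, assuming the notification is made via riak_repl2_fscoordinator:node_dirty/1 inside riak_repl_stats:rt_dirty/0 (the surrounding 1.3.1 code is paraphrased here, not copied), with the log level raised from debug to warning:

    %% Sketch only: riak_repl2_fscoordinator:node_dirty/1 is assumed to be the
    %% notification call, and the surrounding rt_dirty/0 body is paraphrased.
    case catch riak_repl2_fscoordinator:node_dirty(node()) of
        ok ->
            ok;
        Error ->
            %% warning instead of debug: visible at lager's info level, so an
            %% operator knows to run riak_repl_stats:clear_rt_dirty() by hand
            lager:warning("Failed to notify coordinator of rt_dirty status: ~p",
                          [Error])
    end,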


The second is in the notify_rt_dirty_nodes function of riak_repl2_fscoordinator [1]. After a successful fullsync, the multicall on line 723:

rpc:multicall(NodesToNotify, riak_repl_stats, clear_rt_dirty, []),

does not track the results of the multicall. Immediately after the multicall, the list of dirty nodes is cleared:

State#state{dirty_nodes=ordsets:new()};

If there is an error during the multicall, or a node is unreachable for some reason, the rt_dirty flag is not cleared on the remote node, yet the coordinator no longer considers it dirty.

Tracking these errors and keeping the affected nodes in the list of dirty nodes would allow the coordinator to retry clearing their rt_dirty counters on the next fullsync.
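
A sketch of what that tracking could look like, assuming clear_rt_dirty/0 returns ok on success; the state handling and log messages here are illustrative, not the actual riak_repl2_fscoordinator code:

    %% Sketch: keep the multicall results instead of discarding them.
    %% Assumes riak_repl_stats:clear_rt_dirty/0 returns ok on success.
    {Results, BadNodes} =
        rpc:multicall(NodesToNotify, riak_repl_stats, clear_rt_dirty, []),
    Errors = [R || R <- Results, R =/= ok],
    case {BadNodes, Errors} of
        {[], []} ->
            %% every node acknowledged, so the dirty set can be emptied
            State#state{dirty_nodes=ordsets:new()};
        _ ->
            lager:warning("Failed to clear rt_dirty on ~p (errors: ~p); "
                          "keeping dirty_nodes for the next fullsync",
                          [BadNodes, Errors]),
            %% keep the dirty set so a later fullsync retries the clear
            State
    end;

A finer-grained variant would drop only the nodes that actually acknowledged from dirty_nodes, but keeping the whole set until every node succeeds is simpler and still converges on a later fullsync.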


Both of these scenarios leave nodes with an rt_dirty count that persists across fullsyncs (until the count is incremented again, at which point rt_dirty() will again attempt to notify the fscoordinator).

jcapricebasho commented 11 years ago

https://github.com/basho/riak_ee-issues/issues/7

bookshelfdave commented 11 years ago

Suggested by @joecaswell: re-sync dirty status with every fullsync

Before starting a fullsync:

    Owners = riak_core_ring:all_owners(Ring),
    %% all_owners/1 returns {Index, Node} pairs; reduce to unique node names
    Nodes = lists:usort([Node || {_Index, Node} <- Owners]),
    [case rpc:call(Node, riak_repl_stats, is_rt_dirty, []) of
         false -> ok;
         _ -> riak_repl2_fscoordinator:node_dirty(Node)
     end || Node <- Nodes],

cmeiklejohn commented 10 years ago

Moving to 2.1.

engelsanchez commented 9 years ago

While reviewing the fullsync coordinator code as a group, we noticed this problem with the node dirty notifications. It does seem we can do better than our current approach in the case of temporary partitions, and the failed notification should definitely be at least a warning in the logs.

engelsanchez commented 9 years ago

#389 has relevant changes to warn when propagation of rt_dirty node information fails.