darach / dq

Distributed Fault Tolerant Queue library
MIT License

all nodes are required to run locks #2

Open: Licenser opened this issue 9 years ago

Licenser commented 9 years ago

When a node is connected but does not have locks running, it is not possible to add elements to a queue:

1) start two nodes, bill and jill
2) leave jill be
3) on bill, run:

[root@dq1 ~/dq]# ERL_LIBS=_build/lib/ erl -pa ebin -name bill@192.168.221.222 -setcookie ted
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:16:16] [async-threads:10] [kernel-poll:false] [dtrace]

Eshell V5.10.3  (abort with ^G)
(bill@192.168.221.222)1> net_adm:ping('jill@192.168.221.227').
pong
(bill@192.168.221.222)2> application:ensure_all_started(locks).
{ok,[locks]}
(bill@192.168.221.222)3> {ok,Q} = dq:new(myq).
{ok,<0.52.0>}
(bill@192.168.221.222)4> [ dq:in(X,Q) || X <- lists:seq(1,10) ].
** exception error: {{timeout,
                         {gen_server,call,
                             [<0.52.0>,
                              {'$locks_leader_call',{write,#Fun<dq.3.64015509>}},
                              2000]}},
                     {locks_leader,leader_call,
                         [<0.52.0>,{write,#Fun<dq.3.64015509>}]}}
     in function  locks_leader:leader_call/3 (/root/dq/_build/lib/locks/src/locks_leader.erl, line 376)
(bill@192.168.221.222)5> [ dq:in(X,Q) || X <- lists:seq(1,10) ].
** exception error: {{noproc,
                         {gen_server,call,
                             [<0.52.0>,
                              {'$locks_leader_call',{write,#Fun<dq.3.64015509>}},
                              2000]}},
                     {locks_leader,leader_call,
                         [<0.52.0>,{write,#Fun<dq.3.64015509>}]}}
     in function  locks_leader:leader_call/3 (/root/dq/_build/lib/locks/src/locks_leader.erl, line 376)
(bill@192.168.221.222)6>
darach commented 9 years ago

This is a feature insofar as it is consistent with the default behaviour of locks_server:

ERL_LIBS=deps erl -pa ebin -sname jill@darach -setcookie ted
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false] [dtrace]

Eshell V5.10.3  (abort with ^G)
(jill@darach)1> application:ensure_all_started(locks).
{ok,[locks]}
(jill@darach)2> {ok,Q} = dq:new(myq).
{ok,<0.46.0>}
(jill@darach)3> locks_leader:info(Q).
[{leader,<0.46.0>},
 {leader_node,jill@darach},
 {candidates,[]},
 {new_candidates,[]},
 {workers,[]},
 {module,dq_callback},
 {mod_state,{state,true,{[],[]}}}]
(jill@darach)4> nodes().
[]
(jill@darach)5> net_adm:ping(bill@darach).
pong
(jill@darach)6> locks_leader:info(Q).
** exception exit: {timeout,{gen_server,call,[<0.46.0>,'$locks_leader_info']}}
     in function  gen_server:call/2 (gen_server.erl, line 180)
     in call from locks_leader:info/1 (src/locks_leader.erl, line 397)
(jill@darach)7>

The first call to locks_leader:info/1 succeeds because we have a cluster of 1. The second call, made after the bill node has been started and connected, fails. Once locks is started on that node too, the call succeeds as expected, even if there is no Q agent on the other node(s).
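A minimal sketch of that missing step, assuming the node names from the transcripts above; the call itself is the same application:ensure_all_started/1 shown in both sessions:

%% On the node that is connected but not yet running locks
%% (bill@darach in the second transcript), start the application.
%% Once it returns {ok, _}, leader election can complete and the
%% locks_leader:info/1 and leader_call timeouts go away.
application:ensure_all_started(locks).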

So locks needs to be started on all connected nodes, but not all nodes need to run the Q.
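A minimal sketch of how a caller could enforce that before creating a queue. Nothing here is part of dq's API; the module and function names (dq_bootstrap, ensure_locks_everywhere/0) are purely illustrative:

-module(dq_bootstrap).
-export([ensure_locks_everywhere/0]).

%% Start locks locally, then on every currently connected node, before
%% calling dq:new/1. rpc:multicall/4 returns the per-node results plus
%% the list of nodes that could not be reached.
ensure_locks_everywhere() ->
    {ok, _} = application:ensure_all_started(locks),
    {Replies, BadNodes} =
        rpc:multicall(nodes(), application, ensure_all_started, [locks]),
    case BadNodes of
        []  -> {ok, Replies};
        Bad -> {error, {locks_not_started_on, Bad}}
    end.

Calling this on bill right after net_adm:ping('jill@192.168.221.227') would also start locks on jill, so the dq:in/2 calls in the first transcript would no longer time out.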