In rt_chord, symmetric replication spreads keys in quarters around the ring.
For a (logical) node to be responsible for multiple replicas of the same K/V
pair, it has to be responsible for at least 1/4 of the ring, i.e. except for
the first node, all other nodes must have keys in less than 3/4 of the ring.
Assuming a uniform distribution of the keys (see randoms:getRandomId/0) and 10
additional nodes, the probability of this happening is already less than
(3/4)^10 =~ 0.056. The more nodes, the smaller the probability. Of course,
this is different if you run multiple logical nodes on a single physical node...
Original comment by nico.kru...@googlemail.com
on 4 Aug 2010 at 1:45
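The bound above is easy to check numerically. A minimal Python sketch (for illustration only, not part of Scalaris; the (3/4)^n bound is taken from the comment above):

```python
# Upper bound from the discussion: with n additional nodes placed uniformly
# on the ring, the probability that some node still spans >= 1/4 of the ring
# (and thus holds two replicas of the same key) is at most (3/4)^n.
def collision_bound(n: int) -> float:
    return (3 / 4) ** n

print(round(collision_bound(10), 4))  # matches the ~0.056 in the comment
```

As the sketch shows, the bound shrinks geometrically with every additional node.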
> and 10 additional nodes there is already a probability of less than (3/4)^10
=~ 0.056
5.6% is too much for me. You are talking about probability, but I am talking
about assurance. The probability of a physical node failing in the short term
is much less than 5%, yet we still need to build a fault-tolerant cluster,
which gives us _assurance_.
>Of course, this is different if you use multiple logical nodes on a single
physical node...
The administration API of Scalaris gives us the ability to run several
DHT nodes within a single Erlang VM (the admin:add_nodes function). I think
this was done to increase the performance of the database, wasn't it? But, as
it turns out, this capability cannot be used in production.
Original comment by serge.po...@gmail.com
on 4 Aug 2010 at 2:35
[deleted comment]
Running multiple logical nodes on a single physical node is not recommended
for production systems. We have this feature only to be able to set up
large systems more easily for testing.
To steer the replica distribution, one can define different key prefixes for
the nodes in individual configuration files (see {key_creator,
random_with_bit_mask} in scalaris.cfg, which you can override in
scalaris.local.cfg).
% key_creation algorithm
{key_creator, random}.
%{key_creator, random_with_bit_mask}.
% (randomkey band mask2) bor mask1
%{key_creator_bitmask, {16#00000000000000000000000000000000, 16#3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF}}.
%{key_creator_bitmask, {16#40000000000000000000000000000000, 16#3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF}}.
%{key_creator_bitmask, {16#80000000000000000000000000000000, 16#3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF}}.
%{key_creator_bitmask, {16#C0000000000000000000000000000000, 16#3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF}}.
One could thereby place four nodes explicitly and use the quarter bit-masks
shown in the example for any additional nodes.
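For illustration, the bit-mask computation (randomkey band mask2) bor mask1 can be sketched as follows (a Python sketch only; Scalaris itself is Erlang, and the mask values are the quarter masks from the configuration example above):

```python
import random

# mask2 clears the top two bits of a 128-bit key; mask1 then sets them to
# the prefix of the desired quarter (0x0.., 0x4.., 0x8.., 0xC..).
MASK2 = 0x3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

def make_key(quarter: int) -> int:
    mask1 = quarter << 126          # 0, 1, 2, 3 -> quarter prefix
    return (random.getrandbits(128) & MASK2) | mask1

key = make_key(2)
assert key >> 126 == 2              # key falls into the third quarter
```

With a fixed mask1 per node, every generated key lands in that node's quarter of the ring, which is what keeps the four replicas on four distinct nodes.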
Original comment by schin...@gmail.com
on 4 Aug 2010 at 2:52
Ok. So that is a solution.
But the conditions for it are:
1. We need "base" nodes whose number must equal the number of replicas.
2. The positions of these nodes on the keyring must be determined explicitly.
3. After a crash and repair, these nodes must take one of the free "base" positions.
Am I right?
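The conditions above can be sketched numerically (a sketch under the thread's assumptions of 4 replicas on a 128-bit ring; the positions correspond to the mask1 prefixes in the configuration example above):

```python
# Four "base" positions, one per replica group: the quarter boundaries of
# the 128-bit key ring.
RING = 1 << 128
REPLICAS = 4

base_positions = [i * RING // REPLICAS for i in range(REPLICAS)]
assert base_positions[1] == 0x40000000000000000000000000000000
assert base_positions[3] == 0xC0000000000000000000000000000000
```

A recovering node would rejoin at whichever of these quarter boundaries is currently unoccupied.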
Original comment by serge.po...@gmail.com
on 4 Aug 2010 at 4:34
Yes, you are right.
Original comment by schin...@gmail.com
on 4 Aug 2010 at 5:22
Ok, thanks!
May I ask you to add these conditions to the FAQ and/or the User Manual?
Original comment by serge.po...@gmail.com
on 4 Aug 2010 at 5:27
With our new passive load balancing (as of r1313), nodes are evenly distributed
across the ring, so explicit placement is no longer necessary.
Original comment by schin...@gmail.com
on 12 Jan 2011 at 6:51
Original issue reported on code.google.com by
serge.po...@gmail.com
on 4 Aug 2010 at 1:21