Closed: 52tt closed this issue 5 years ago.
Already found answers to my questions. This ticket can be closed. Thanks.
Hints:
```
req_forward_local_dc  |
                      |--> dnode_peer_pool_server --> dnode_peer_for_key_on_rack --> dnode_peer_idx_for_key_on_rack
req_forward_remote_dc |
```
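The call chain above bottoms out in a token lookup on the rack. As a minimal sketch of how such a lookup can work, assuming a "first token >= the key's hash, with wraparound" rule (the function name and signature here are illustrative, not the actual C code):

```python
import bisect

def peer_idx_for_key(sorted_tokens, key_hash):
    """Pick the first peer whose token is >= key_hash; wrap around to
    peer 0 when key_hash is larger than every token on the ring."""
    idx = bisect.bisect_left(sorted_tokens, key_hash)
    return idx % len(sorted_tokens)

# Three peers owning tokens 1000, 2000, 3000:
print(peer_idx_for_key([1000, 2000, 3000], 1500))  # -> 1
print(peer_idx_for_key([1000, 2000, 3000], 3500))  # -> 0 (wraps)
```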
I think Dynomite's documentation needs a more detailed description of its hash algorithm.
Some Dynomite peer finders, such as dagota (https://github.com/Smile-SA/dagota), implemented the wrong method for creating Dynomite tokens.
Anyway, thanks for your research. I was confused about the same thing as you, and I found the answer in this issue thread :)
Good point above. I have added more information in https://github.com/Netflix/dynomite/wiki/Replication
[ The title should be updated to "what is the hash algorithm for the key", however I didn't find a way to update it. Please directly go to the "Update" section. ]
Hi,
I have a cluster with 1 dc, 2 racks, 4 nodes, running on 2 physical boxes:
- box0 (172.16.105.213): one node in rack-2000 (port 2000) and one in rack-2001 (port 2001)
- box1 (172.16.105.212): one node in rack-2000 (port 2000) and one in rack-2001 (port 2001)
Here are the yml configuration files:
The problem with my setup is that sharding does not appear to be working. Here is what I did:
Step 1) On 172.16.105.213:2000, set keys key-0001 to key-0019, a-0001 to a-0019, b-0001 to b-0019, c-0001 to c-0019, and d-0001 to d-0019 (95 keys in total).
Step 2) Getting the keys from each of 172.16.105.213:2000, 172.16.105.213:2001, 172.16.105.212:2000, and 172.16.105.212:2001 shows all keys are correctly set.
Step 3) However, when I scanned the keys on each Redis instance, I found: in rack-2000, all 95 keys are stored in 172.16.105.213:2300, not sharded across the 2 Redis instances; in rack-2001, all 95 keys are stored in 172.16.105.212:2301, not sharded across the 2 Redis instances.
So it looks like sharding is not working. Or maybe all the keys I used happen to fall into the same slot? If so, what keys should I use to verify? I also tried other tokens. For example, rack-2000:
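On the "same slot" theory: sequentially named keys should not all land on one side of a 32-bit hash space. A quick sanity check is to hash the key names yourself. The sketch below uses MD5 truncated to 32 bits as a stand-in hash (Dynomite's configured hash will place keys differently, but the spread is similarly uniform), so 95 keys all landing on one shard points at configuration, not hash collisions:

```python
import hashlib

def hash32(key: str) -> int:
    # Stand-in 32-bit hash: first 4 bytes of MD5. Illustrative only.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

# The same 95 key names used in Step 1.
keys = [f"{p}-{i:04d}" for p in ["key", "a", "b", "c", "d"] for i in range(1, 20)]
midpoint = 2**31
low = sum(1 for k in keys if hash32(k) < midpoint)
print(low, len(keys) - low)  # roughly balanced halves, not all on one side
```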
BTW, please help clarify whether my understanding of tokens is correct. I read generate_yamls.py to try to understand token assignment. My understanding is that a token is used to divide the hash index range. But does the actual value matter? For example, in a rack with 2 nodes: case a) assign token 10 to node0 and 20 to node1; case b) assign 100 to node0 and 200 to node1; case c) assign 1000 to node0 and 2000 to node1. My guess is that there is no difference among a), b), and c), and that in each case every node holds half of the entire hash index space. Is that correct? If not, what is the maximal value for a token?
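A sketch of why the token value can matter, assuming lookup picks the first peer whose token is >= the key's 32-bit hash and wraps around (this rule, `MAX_HASH`, and both function names are assumptions for illustration, not Dynomite's actual code). Under that assumption, small tokens like 10 and 20 do not split the space in half, because almost every 32-bit hash exceeds both tokens and wraps to node0; evenly spaced tokens over the full range do split it evenly:

```python
import bisect
import random

MAX_HASH = 2**32 - 1  # assuming a 32-bit hash space

def peer_idx_for_key(sorted_tokens, key_hash):
    """First peer whose token is >= key_hash, wrapping to peer 0."""
    idx = bisect.bisect_left(sorted_tokens, key_hash)
    return idx % len(sorted_tokens)

def distribution(tokens, trials=100_000, seed=42):
    """Count how many random hashes each peer would own."""
    rng = random.Random(seed)
    counts = [0] * len(tokens)
    for _ in range(trials):
        counts[peer_idx_for_key(tokens, rng.randint(0, MAX_HASH))] += 1
    return counts

# Case a) tokens 10 and 20: nearly every hash exceeds 20 and wraps,
# so node0 owns almost the entire space.
print(distribution([10, 20]))
# Tokens evenly spaced over the 32-bit range split it roughly in half.
print(distribution([2**31 - 1, MAX_HASH]))
```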
Thanks, Yun
Update on 1/25/2019: