Open JumboJa opened 4 weeks ago
ALSO, on the other side, I always get:
```
2024-11-01 20:09:59,344 - kademlia.protocol - INFO - got successful response from 192.168.1.157:50003
2024-11-01 20:09:59,344 - kademlia.protocol - INFO - never seen 192.168.1.157:50003 before, adding to router
2024-11-01 20:09:59,345 - kademlia.protocol - INFO - got successful response from 192.168.1.157:50005
2024-11-01 20:09:59,345 - kademlia.protocol - INFO - never seen 192.168.1.157:50005 before, adding to router
2024-11-01 20:09:59,346 - kademlia.protocol - INFO - got successful response from 192.168.1.157:50003
2024-11-01 20:09:59,347 - kademlia.protocol - INFO - got successful response from 192.168.1.157:50005
```
But those IPs already bootstrapped and are connected to the network, so they MUST already be present in the routing table, right? Why are they "never seen"? Do nodes maintain their routing tables correctly?
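One possible explanation (a toy sketch, not kademlia's actual code): routing-table membership is keyed by node ID, not by IP:port. If a process restarts with a freshly generated random ID, or its old entry was evicted, the same IP:port is greeted as "never seen" again. A minimal illustration:

```python
import hashlib

class TinyRoutingTable:
    """Toy routing table keyed by node ID (a hedged sketch,
    NOT the kademlia library's implementation)."""
    def __init__(self):
        self.contacts = {}  # node_id -> (ip, port)

    def is_new(self, node_id):
        return node_id not in self.contacts

    def add(self, node_id, ip, port):
        if self.is_new(node_id):
            print(f"never seen {ip}:{port} before, adding to router")
        self.contacts[node_id] = (ip, port)

def node_id(seed):
    # Node IDs are typically random 160-bit values, regenerated
    # each time a process starts ("seed" here is just for the demo).
    return hashlib.sha1(seed.encode()).digest()

rt = TinyRoutingTable()
rt.add(node_id("first-start"), "192.168.1.157", 50003)   # logged as "never seen"
rt.add(node_id("first-start"), "192.168.1.157", 50003)   # same ID: silent
# Same IP:port after a restart carries a fresh ID, so it is "new" again:
rt.add(node_id("after-restart"), "192.168.1.157", 50003)
```

So "never seen" may be correct behavior from the router's point of view even for an address you have talked to before, if the ID changed or the entry was dropped.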
Hello! It would be great if someone could help with this...
I started 7 nodes (daemonized Python script processes) on two machines, so each process's IP+port is unique (and so are the node IDs). Saving/getting keys works and is done by separate set/get Python scripts that open their own listen port, so those set/get scripts also look like nodes to the others. Consequently, they are added to the routing tables.
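To make the failure mode concrete, here is a self-contained simulation (plain Python, deliberately not using the kademlia package; all names are made up) of what I think happens: the short-lived set/get scripts announce themselves, land in everyone's routing tables, then exit, and since this is UDP there is no disconnect notification, so their entries go stale:

```python
class Peer:
    """Toy stand-in for a DHT node; tracks known peers by address."""
    def __init__(self, addr):
        self.addr = addr
        self.alive = True
        self.routing_table = set()  # addresses of known peers

    def bootstrap(self, others):
        for other in others:
            other.routing_table.add(self.addr)  # they learn about us
            self.routing_table.add(other.addr)  # we learn about them

# Seven long-lived daemon nodes.
daemons = [Peer(("192.168.1.157", 50000 + i)) for i in range(7)]

# A short-lived set/get script also listens on a port, so it joins as a node...
script = Peer(("192.168.1.157", 50099))
script.bootstrap(daemons)

# ...and then simply exits. Peers get no notification over UDP.
script.alive = False

stale = [p for p in daemons if script.addr in p.routing_table]
print(f"{len(stale)} daemons still list the dead script node")  # prints 7
```

Every daemon keeps the dead script's entry until some probe of it fails and triggers eviction.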
When the set/get scripts finish, the routing tables of the other nodes still hold records of those switched-off script nodes. And although the log shows removal entries (say, when I SET a key), so they seem to be removed (are they?), when I run GET (or any other script) I always see
```
removing from router
```
again and again. So each time, and especially when I SET a key, nodes re-check those dead script nodes for 4-5 seconds each, which takes more than a minute in total to set a value. And this is just for 7 nodes! I don't even want to imagine what is going to happen if there are, say, 500 dead nodes in the routing table...
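A back-of-the-envelope estimate of why this scales so badly (the ~5 s per dead node is my observation from the logs, not a documented constant, and `parallelism` is a hypothetical knob; real Kademlia lookups probe a few contacts concurrently):

```python
RPC_TIMEOUT_S = 5  # observed stall per dead contact; an assumption, not a library constant

def worst_case_extra_delay(dead_contacts, timeout=RPC_TIMEOUT_S, parallelism=1):
    """Rough upper bound on extra lookup time if every dead contact
    is probed once and each probe waits out a full timeout."""
    rounds = -(-dead_contacts // parallelism)  # ceiling division
    return rounds * timeout

print(worst_case_extra_delay(7))    # 35 (seconds), roughly the delays I am seeing
print(worst_case_extra_delay(500))  # 2500 (seconds) with no eviction: unworkable
```

Even with some probe parallelism the delay still grows linearly in the number of stale entries, which is why I am asking whether eviction actually happens.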
(It should be mentioned that the nodes are on the same 1G subnet, with no lag, firewalls, etc. between them.)