Closed joshuakarp closed 2 years ago
@tegefaulkes can you have a review of this issue in relation to #310 as well given you're touching that work, please write a comment as to how this may be implemented, whether it should go into #310 or a later PR.
Apparently there's a method in `NodeGraph` to remove nodes, but it just isn't used yet.
We have to develop a policy for when the `NodeGraph`'s bad nodes get garbage collected.
@joshuakarp can you check how other usages of Kademlia deal with this as well? For example, in https://hypercore-protocol.org/protocol/#hyperswarm: just search through the src or issues. Sourcegraph can help, like how I did it with `fsck-objects.c` when working out the GC algorithm for dangling commit objects for `VaultInternal`. Once you've done this, flesh out the spec above.
This relates to the Kademlia spec about expiring old nodes in a bucket. In Kademlia, when we hit the bucket limit we actually need to ping the old node instead of just dropping it. We should do this by connecting `NodeGraph` with `NodeManager`. To do this, we need to move some logic out of `NodeGraph` into `NodeManager`, such as checking the bucket limit and performing the ping on the old node.
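As a rough illustration, the eviction policy described above could look something like the following sketch. All names here (`Bucket`, `addNode`, `pingNode`) are hypothetical stand-ins, not Polykey's actual API; the point is the Kademlia rule that a full bucket pings its oldest entry before deciding what to drop.

```typescript
type NodeId = string;

interface BucketEntry {
  nodeId: NodeId;
  lastUpdated: number;
}

class Bucket {
  constructor(
    public readonly limit: number,
    public entries: BucketEntry[] = [],
  ) {}

  get oldest(): BucketEntry | undefined {
    // Oldest = least recently updated entry
    return [...this.entries].sort((a, b) => a.lastUpdated - b.lastUpdated)[0];
  }
}

/**
 * Attempt to add a node to a bucket. If the bucket is full, ping the
 * oldest node first: keep it if it responds (dropping the new node),
 * or evict it and add the new node if it doesn't.
 * Returns true if the new node was added.
 */
async function addNode(
  bucket: Bucket,
  nodeId: NodeId,
  pingNode: (nodeId: NodeId) => Promise<boolean>,
): Promise<boolean> {
  if (bucket.entries.length < bucket.limit) {
    bucket.entries.push({ nodeId, lastUpdated: Date.now() });
    return true;
  }
  const oldest = bucket.oldest!;
  if (await pingNode(oldest.nodeId)) {
    // Oldest node is still alive: per Kademlia, keep it and drop the new node
    oldest.lastUpdated = Date.now();
    return false;
  }
  // Oldest node is unresponsive: evict it and add the new node
  bucket.entries = bucket.entries.filter((e) => e !== oldest);
  bucket.entries.push({ nodeId, lastUpdated: Date.now() });
  return true;
}
```

This is also why the logic has to move into `NodeManager`: the ping requires making a live connection, which `NodeGraph` (a pure data structure over the DB) shouldn't be doing itself.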
Additionally, when we try to connect to a node ID and this fails for a variety of reasons, we should also just delete the record from the `NodeGraph`. It's not clear whether any kind of retry is needed here.
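A minimal sketch of that delete-on-failure behaviour might look like this. The names (`NodeGraphLike`, `unsetNode`, `connectAndMaybePrune`) are illustrative assumptions rather than Polykey's real interfaces:

```typescript
type NodeId = string;

interface NodeGraphLike {
  // Assumed method for removing a node record from the graph
  unsetNode(nodeId: NodeId): Promise<void>;
}

/**
 * Try to connect to a node; if the connection fails for any reason,
 * remove its record from the node graph so the stale entry is not
 * handed out to other nodes. Returns true on successful connection.
 */
async function connectAndMaybePrune(
  nodeGraph: NodeGraphLike,
  nodeId: NodeId,
  connect: (nodeId: NodeId) => Promise<void>,
): Promise<boolean> {
  try {
    await connect(nodeId);
    return true;
  } catch (e) {
    // Timeout, connection refused, certificate no longer verifies, etc.
    await nodeGraph.unsetNode(nodeId);
    return false;
  }
}
```

Whether to retry before pruning (and how many times) is the open question noted above; this sketch prunes on the first failure.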
Since this has some deep interplay with DB transactions, given that the `NodeGraph` is potentially being mutated by both `NodeConnectionManager` and `NodeManager`, I'll be working on this.
We can think of this issue as figuring out the `NodeId` removal policies, whereas #344 is about figuring out the `NodeId` adding policies.
Invalid node IDs could happen due to reconnection when the certificate no longer verifies. This should integrate with node ID changes and any natural forgetting/revoking process. We may want to be aggressive or passive with pruning the `NodeGraph` to reduce the amount of Kademlia poisoning.
As with https://github.com/MatrixAI/js-polykey/issues/150#issuecomment-1154655325, we are leaving this to the natural invalidation process of the node graph. Further P2P simulations are needed before we can see whether an explicit garbage collection process is needed.
Specification
We need a means of handling any nodes that are found to be invalid.
I make the initial assumption that an "invalid" keynode can only be introduced by a manual insertion into the buckets database. Therefore, an "invalid" node can only be inserted from the `nodes add` CLI call, or from the specified seed nodes when we instantiate a keynode. I'm of the opinion that nodes added from discovery (through Kademlia) should be expected to be correct. However, do note that any nodes added by `nodes add` can be discovered through Kademlia. Thus, we're currently at a state where "invalid" nodes could be discovered.

From my perspective, there are a couple of ways a node can be deemed "invalid":
As such, we don't currently have an easy way of determining an "invalid" node.
I believe the best approach to this is making appropriate restrictions on how/what we add to the buckets database. See additional commentary in the below section regarding this.
Additional context
This was originally brought up in review of the nodes CLI refactoring.
When discussing the `nodes add` command, I made the initial assumption that we would need to create a connection to the other node before we could successfully add it to the database. This would ensure that the node details must be correct. However, this is not the case: https://gitlab.com/MatrixAI/Engineering/Polykey/js-polykey/-/merge_requests/198#note_641488589
Tasks