I am going to suggest that we "sleep" this instead of merging it, and wait until we can actually measure how often this happens, so that we can determine the overall improvement this optimization would give. It is possible that in the common case on the state network this only happens a very small fraction of the time, so the added efficiency isn't justified, or is at best a very small savings.
> ... the node that has the optimization implemented is likely to have more work versus the node that does not implement it.
I'm not sure I agree on this part. You have to pass over all trie nodes while you verify the proof. It should be possible to generate all content keys / ids while doing so, so it shouldn't be much overhead.
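To make that concrete, here is a minimal Python sketch (not from this PR) of a single pass over the proof's trie nodes that produces a content key/id per node. The `trie_node_content_key` layout and the sha256-based `content_id` are placeholder assumptions rather than what the state network spec defines, and `keccak256` is faked with sha3 just to keep the sketch dependency-free; the point is only that the per-node cost on top of the hashing already needed for verification is small.

```python
import hashlib

def keccak256(data: bytes) -> bytes:
    # Placeholder: sha3_256 is NOT keccak-256; a real client would use its
    # existing keccak implementation. Used here only to avoid dependencies.
    return hashlib.sha3_256(data).digest()

def trie_node_content_key(path: bytes, node_hash: bytes) -> bytes:
    # Hypothetical content-key layout (selector byte + path length + path + node hash);
    # the real layout is whatever the state network spec defines.
    return b"\x20" + bytes([len(path)]) + path + node_hash

def content_id(content_key: bytes) -> bytes:
    # Assumes a content_id = sha256(content_key) mapping.
    return hashlib.sha256(content_key).digest()

def derive_all_content(proof: list[tuple[bytes, bytes]]) -> list[tuple[bytes, bytes, bytes]]:
    """proof: ordered (path, rlp_encoded_node) pairs from root to leaf.

    The hash of each node is needed anyway to verify the proof, so deriving the
    content key/id for every node in the same pass adds only the key encoding
    and one extra sha256 per node."""
    derived = []
    for path, encoded_node in proof:
        node_hash = keccak256(encoded_node)           # already computed for verification
        key = trie_node_content_key(path, node_hash)  # cheap byte concatenation
        derived.append((key, content_id(key), encoded_node))
    return derived
```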
But I agree that most nodes will likely be relatively small and would very rarely benefit from this, and that in most cases it's not worth doing. The only nodes that would benefit from this are:
> I'm not sure I agree on this part. You have to pass over all trie nodes while you verify the proof. It should be possible to generate all content keys / ids while doing so, so it shouldn't be much overhead.
The hashing is probably the most expensive part, not so much the passing over the nodes, but sure, it is not going to be a big overhead.
I think the second overhead I mentioned is more important. It's more theoretical, but it is one that might occur on a node that implements this, unless a lot of nodes implement it and also have a big enough radius. Because such a node is faster in providing all the trie nodes with their proof, it could have most or all of its gossip requests accepted, whereas in the regular recursive gossip flow it would be more of a "fair race" with the other nodes, and the uTP transfers would thus be shared over a bunch of nodes. I think that to avoid this possible effect you would have to implement a delayed offering of all those trie nodes. Certainly doable.
In any case, I think this optimization is still useful (especially for larger nodes), but it probably needs to be experimented with a bit to see its effects, and possibly get some tuning, e.g. we could set the number of nodes it gossips to to a lower value for this case.
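As a rough illustration of the delayed offering and the lower gossip count mentioned above, a small sketch follows; the delay range, the peer count, and the `gossip.offer` call are all hypothetical knobs for this sketch, not anything defined in the spec.

```python
import asyncio
import random

# Hypothetical tuning knobs, not spec values.
DERIVED_NODES_OFFER_DELAY = (1.0, 5.0)  # seconds to hold back the extra trie nodes
DERIVED_NODES_GOSSIP_COUNT = 2          # fewer peers than the regular gossip fanout

async def delayed_offer(gossip, content_keys: list[bytes], interested_peers: list) -> None:
    """Offer trie nodes derived from a proof only after a random delay and to a
    reduced number of peers, so other nodes still get a fair shot at the same content."""
    await asyncio.sleep(random.uniform(*DERIVED_NODES_OFFER_DELAY))
    count = min(DERIVED_NODES_GOSSIP_COUNT, len(interested_peers))
    for peer in random.sample(interested_peers, count):
        # `gossip.offer` stands in for the client's existing OFFER/ACCEPT machinery.
        await gossip.offer(peer, content_keys)
```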
Closed as we are no longer using recursive gossip.
Adding a spec for the optimization of storing all relevant nodes from the proof.
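As an illustration of the idea (not part of the spec text itself), here is a minimal sketch of keeping only the proof nodes whose content id falls within the node's radius, using the XOR distance metric; `store.put` and the shape of the `derived` tuples are placeholder assumptions.

```python
def xor_distance(a: bytes, b: bytes) -> int:
    # Standard Portal/Kademlia-style XOR distance between two 32-byte ids.
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def store_relevant_proof_nodes(store, node_id: bytes, radius: int, derived) -> list[bytes]:
    """derived: (content_key, content_id, encoded_node) tuples for every node in a
    received proof (e.g. collected while verifying it). Store only the ones whose
    content id falls within this node's radius."""
    stored = []
    for content_key, cid, encoded_node in derived:
        if xor_distance(node_id, cid) <= radius:
            store.put(content_key, encoded_node)  # stand-in for the client's content DB
            stored.append(content_key)
    return stored
```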