vncoelho opened this issue 5 years ago
What kind of cache?
Two arrays, one for preparations and another one for commits.
A cache is useful when you need to query the same thing many times, but every block needs different preparations and commits, so how can a cache help us in consensus?
Not a cache in the sense of caching forever, @shargon. It is just a temporary variable that stores payloads that would otherwise be discarded.
You are a CN at height 10 and a payload for height 11 arrives. Meanwhile you receive block 10, and nowadays you would lose the payloads you have already received. The idea is to cache the payloads for the next height.
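To illustrate the idea, here is a minimal sketch in Go (since the referenced reference implementation at nspcc-dev/dbft is in Go). All type and function names here (`Payload`, `FuturePayloadCache`, `OnPayload`, `OnBlockPersisted`) are hypothetical and are not the actual neo-project or nspcc-dev/dbft API; this is only a sketch of keeping two arrays of future-height payloads instead of dropping them.

```go
package consensus

// Payload is a simplified stand-in for a consensus message (hypothetical type).
type Payload struct {
	Height uint32
	Type   string // e.g. "PrepareRequest", "PrepareResponse", "Commit"
	Data   []byte
}

// FuturePayloadCache keeps payloads that arrive for the next height
// instead of discarding them while we are still working on the current one.
type FuturePayloadCache struct {
	height       uint32    // height currently being processed
	preparations []Payload // cached preparations for height+1
	commits      []Payload // cached commits for height+1
}

// OnPayload processes a payload for the current height, caches one for the
// next height, and drops anything else (as today).
func (c *FuturePayloadCache) OnPayload(p Payload, process func(Payload)) {
	switch {
	case p.Height == c.height:
		process(p)
	case p.Height == c.height+1:
		if p.Type == "Commit" {
			c.commits = append(c.commits, p)
		} else {
			c.preparations = append(c.preparations, p)
		}
	}
}

// OnBlockPersisted is called once the block for the current height arrives;
// the cached payloads for the new height are replayed instead of being lost.
func (c *FuturePayloadCache) OnBlockPersisted(process func(Payload)) {
	c.height++
	for _, p := range append(c.preparations, c.commits...) {
		process(p)
	}
	c.preparations, c.commits = nil, nil
}
```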
Now that dBFT has been successfully deployed on MainNet, it is time to think about the future again! :dagger: aheuyahueahehauea Time to play again :dancer:
Congratulations to everybody engaged in the dBFT improvements. It was surely a great achievement for all of us!
@vncoelho do you think this is a duplicate of #909? Do you think they "solve the same problem"?
What a coincidence, I just saw your comment now, @lock9... aehuahuea Right after the draft PR. huaheuahea
This is a different idea, not related.
I think this is a good idea. Currently, Recovery is used in the case of a lagged CN, but this can be time consuming. In the worst case the normal CNs hang there waiting for one lagged CN to produce a new block, while that lagged CN is itself waiting for recovery messages... Consensus message caching could save much time in such a case.
@vncoelho should we keep going with the discussion, or is it good to close it?
This can be implemented right away, like it's done in https://github.com/nspcc-dev/dbft.
This can be merged
@roman-khimov I really wish I could steal your documents and examples for the C# core; they are so complete and detailed.
I will claim this then after the repo merge.
In a conversation with developers from the SPCC community (@fabwa, Anatoly Bogatyrev and Evgeniy Stratonikov), they suggested a caching mechanism for the payloads that arrive from CNs. In fact, this mechanism has already been implemented by them in their ongoing Go implementation.
I also believe that it may speed things up in some particular cases.
In this sense, even if the CN is lagged behind, it would keep the payloads for the next height instead of discarding them and process them as soon as it catches up, as sketched below.
This would speed things up and increase the chance that a lagged CN assists in the creation of the latest block even when it is not yet on top of the block being created.
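To make the lagged-CN case concrete, here is a small usage sketch building on the hypothetical `FuturePayloadCache` type from the earlier comment (same assumed package; again not actual Neo code, just an illustration of the intended behavior):

```go
package consensus

import "fmt"

// demoLaggedCN shows the scenario above: we are still finishing block 10,
// payloads for height 11 arrive early, and they are replayed once block 10
// is persisted instead of being dropped.
func demoLaggedCN() {
	cache := &FuturePayloadCache{height: 10}
	process := func(p Payload) {
		fmt.Printf("processing %s for height %d\n", p.Type, p.Height)
	}

	// Payloads for height 11 arrive early; today they would simply be lost.
	cache.OnPayload(Payload{Height: 11, Type: "PrepareResponse"}, process)
	cache.OnPayload(Payload{Height: 11, Type: "Commit"}, process)

	// Block 10 is persisted; the cached payloads are replayed immediately,
	// so the lagged CN can join round 11 without waiting for recovery messages.
	cache.OnBlockPersisted(process)
}
```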
What do you think, @neo-project/core?