nazar-pc opened this issue 1 year ago (Open)
Currently in Substrate there are two types of pruning:

- `--state-pruning`: prunes the states of old enough blocks,
- `--blocks-pruning`: prunes the block bodies of old enough blocks.

AFAIK in neither of those modes are the block headers pruned, so it will always be possible to query hash and header, but with `--state-pruning` you will not be able to query a key/value in the state trie, and with `--blocks-pruning` you will not be able to query the block body (i.e. the list of extrinsics).
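For concreteness, a minimal sketch of what those two flags roughly map to on the client-database side, assuming a recent `sc-client-db`; the exact constructor and type names vary between Substrate versions (e.g. `PruningMode::keep_blocks` in older releases), so treat this as illustrative rather than authoritative:

```rust
// Illustrative only: how the CLI flags roughly translate into client-db
// settings. Type/constructor names differ between Substrate versions.
use sc_client_db::{BlocksPruning, PruningMode};

fn pruning_settings() -> (PruningMode, BlocksPruning) {
    // --state-pruning 256: keep state for roughly the last 256 blocks
    // (note: not bound by finality, as discussed below).
    let state_pruning = PruningMode::blocks_pruning(256);

    // --blocks-pruning 256: keep bodies of the last 256 finalized blocks.
    // Headers are not covered by either setting.
    let blocks_pruning = BlocksPruning::Some(256);

    (state_pruning, blocks_pruning)
}
```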
Yeah, I know there are two pruning modes; it seems like state pruning is implicit when block pruning is enabled. I was also bitten by the fact that, for some reason, state pruning is not bound by finality, in contrast to blocks pruning.
So would it be consistent to have header pruning, or would something else be preferred upstream?
> So would it be consistent to have header pruning, or would something else be preferred upstream?
Probably yes, but I don't know why we currently don't have it. @arkpar can probably answer this?
In general, most consensus engines that we support require at least some header history to be able to sync with fast/warp sync. E.g. BABE/grandpa needs to be able to validate headers where validator set change took place. Header sizes don't take much space compared to bodies, so we always keep them to provide better overall security for the network.
This implementation is intended for running a full node. It does not have a goal to maintain "bounded disk usage". There's no bound on the state size in the first place. If that's required for your case, you'd probably be better off with a light client.
I see that it makes sense for consensus engines supported upstream, but with Substrate being a framework, it is helpful when it is flexible enough to do those things. In our case with Subspace, consensus nodes do very little computation: they're just producing blocks and securing the network, while a separate class of nodes executes user transactions. As such, the state consensus nodes have to maintain is very small; I think we can even make it bounded if we have to.
As to the light client: it is kind of close, but we do want consensus nodes to do complete consensus verification, and with the removal of the light client from Substrate the only way to run the whole protocol is to run a fully-featured node.
Using a significant amount of space is especially problematic for Subspace, since block production power is proportional to the amount of space pledged to the network. So every gigabyte used by the node contributes towards centralization; we call it the "farmer's dilemma".
What I'd like to see in terms of API would ideally be a way to programmatically prune blocks, state and headers at arbitrary depth without finalizing blocks first (there is no PoS notion of finality in our protocol). If finality is not possible to bypass, then constrain it to finalized blocks. Right now we use finalization as a tool to do pruning at a dynamic depth and expect the state to disappear alongside the blocks, but I don't think there is a usable API to do those things explicitly, at least I have not seen one exposed. And as mentioned, headers (maybe something else?) are stored forever, which we don't want/need.
I guess we could add an option to delete headers as well. I would not object to such a PR. It should not be the default when `--blocks-pruning` is specified though.
> I expect that at this point the node doesn't have anything stored for blocks below 9199.
Both `--state-pruning` and `--blocks-pruning` specify the number of finalized blocks to keep. So in your example @nazar-pc blocks 8943-9199 should still be fully available.
> Both `--state-pruning` and `--blocks-pruning` specify the number of finalized blocks to keep. So in your example @nazar-pc blocks 8943-9199 should still be fully available.
Right, that actually brings another related topic. I'd like to decouple finality from pruning completely (at least as an option).
Specifically, in our protocol we prune blocks, but they are still retrievable from archival storage using a distributed storage network (DSN). However, archiving depends on the volume of data, so when we finalize some block we do not yet know when it is safe to prune it; a constant in terms of block numbers doesn't cut it, we need to make it size-based.
Right now what we do is delay finalization, which is an awkward workaround to delay pruning (a sketch of it is below). It would be great if we could run Substrate essentially in archival mode and then prune the blocks/headers/state we want with an API when it is safe to do so (i.e. when blocks are guaranteed to be available via the DSN).
What would such an API look like, something similar to the finalization call perhaps?
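For reference, a minimal sketch of that workaround: pruning depth is steered indirectly by deciding when to call `finalize_block`. This assumes `sc-client-api`'s `Finalizer` trait in a recent Substrate (where `finalize_block` takes a block hash); `is_archived_in_dsn` is a hypothetical application-side check, not an existing API:

```rust
// Sketch of the current workaround: delay finalization until the block is
// known to be retrievable from the DSN, so that the fixed-depth pruning that
// follows finalization never drops data that is not archived yet.
use sc_client_api::{Backend, Finalizer};
use sp_runtime::traits::Block as BlockT;

fn maybe_finalize<Block, B, C>(client: &C, candidate: Block::Hash) -> sp_blockchain::Result<()>
where
    Block: BlockT,
    B: Backend<Block>,
    C: Finalizer<Block, B>,
{
    if is_archived_in_dsn::<Block>(&candidate) {
        // Finalizing lets --state-pruning / --blocks-pruning remove the data
        // N blocks later; no justification, notify finality listeners.
        client.finalize_block(candidate, None, true)?;
    }
    Ok(())
}

// Hypothetical placeholder for the application-specific availability check.
fn is_archived_in_dsn<Block: BlockT>(_hash: &Block::Hash) -> bool {
    true
}
```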
What kind of an API do you have in mind here? RPC or an offchain worker? I don't think we can expose that kind of stuff to the on-chain runtime logic.
I was thinking of client-side logic. There are `Finalizer::apply_finality()` and `Finalizer::finalize_block()` already. So maybe `Pruner::prune_block()`, `Pruner::prune_state()` and `Pruner::prune_header()`, or something similar?
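To make that concrete, a purely hypothetical sketch of such a trait, mirroring `Finalizer`; none of these items exist in Substrate, the names and signatures are just the ones floated above:

```rust
// Hypothetical API sketch only; no such trait exists in Substrate today.
use sp_runtime::traits::{Block as BlockT, NumberFor};

pub trait Pruner<Block: BlockT> {
    /// Prune the body (extrinsics) of the given block and its ancestors.
    fn prune_block(&self, hash: Block::Hash) -> sp_blockchain::Result<()>;

    /// Prune the state trie at and below the given block.
    fn prune_state(&self, hash: Block::Hash) -> sp_blockchain::Result<()>;

    /// Prune headers up to (and including) the given block number.
    fn prune_header(&self, up_to: NumberFor<Block>) -> sp_blockchain::Result<()>;
}
```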
> I'd like to decouple finality from pruning completely (at least as an option).
On the database level we should actually not speak about finality, because the database should not need to care what is finalized or what the best block is. The database should only expose an interface to call "prune" that prunes up to the given block. However, this will require some bigger refactoring. Can you not just continue using `apply_finality` and we add an option to the database to also prune headers (this would be passed at initialization of the db)?
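To illustrate, a purely hypothetical sketch of that database-level option and "prune" entry point; `prune_headers` and `prune_to` do not exist in `sc-client-db` today, only `PruningMode` and `BlocksPruning` do, and even their exact shape varies by version:

```rust
// Hypothetical only: a header-pruning option passed at database initialization,
// plus a backend-level "prune up to this block" entry point.
use sc_client_db::{BlocksPruning, PruningMode};

pub struct PruningSettings {
    /// Existing knobs, normally filled from --state-pruning / --blocks-pruning.
    pub state_pruning: PruningMode,
    pub blocks_pruning: BlocksPruning,
    /// Proposed: also drop the headers of pruned blocks.
    pub prune_headers: bool,
}

/// Proposed backend entry point: prune bodies/state/headers up to `number`,
/// without the database caring about finality or the best block.
pub trait PruneTo<Number> {
    fn prune_to(&self, number: Number) -> sp_blockchain::Result<()>;
}
```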
> Can you not just continue using `apply_finality` and we add an option to the database to also prune headers (this would be passed at initialization of the db)?
There are many tools, like chain indexing, that will only work with finalized blocks. In this case the workaround would mean that blocks could take a VERY long time to finalize, which severely damages user experience. Right now we have to patch those tools to accept non-finalized blocks and accept that there might be issues when reorgs happen, which are all things we'd like to avoid.
If you had custom pruning, wouldn't it also create issues with reorgs?
I do not think so, the assumption is still similar to the current one: we finalize block M and then prune something that is M+N deep. The difference is that right now N is a static value provided via a CLI parameter when the node starts, and I need it to be dynamic based on application logic.
### Description of bug
It might be the way it was designed to work, but it doesn't match my expectations for sure.
I have a node with pruning set to 256 and in this state:
I expect that at this point the node doesn't have anything stored for blocks below 9199. In practice, however:
0x150e9d30d092aa25db6d11ebf24086ba67bc9ff83b2f1991577a10e2592b5b58
By hash I can also query the header:
Block body (through RPC) returns `null`, which also causes confusion in the Rust world where it is `None`, so the block request kind of succeeds but returns no extrinsics, which disrupts developers' expectations (we had some confusion in the past because of this).

The expectation here was to have bounded disk usage by limiting how many blocks and related data we store on disk; however, it seems like storage will grow indefinitely instead.

In case this is the desired behavior I'd like to know why, and please consider this a feature request to add header pruning.
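For reference, a rough sketch of the RPC queries described above, using `jsonrpsee` against the node's legacy HTTP RPC endpoint (port 9933 and the block number are assumptions/placeholders for "old enough to be pruned"):

```rust
// Sketch: query a supposedly pruned block. Hash and header are still returned,
// while the block body comes back empty/null as described above.
use jsonrpsee::core::client::ClientT;
use jsonrpsee::http_client::HttpClientBuilder;
use jsonrpsee::rpc_params;
use serde_json::Value;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = HttpClientBuilder::default().build("http://127.0.0.1:9933")?;

    // Placeholder block number, assumed to be below the pruning horizon.
    let old_block = 5_000u32;

    // Hash of the old block is still available.
    let hash: Value = client.request("chain_getBlockHash", rpc_params![old_block]).await?;
    // Header is still available as well.
    let header: Value = client.request("chain_getHeader", rpc_params![hash.clone()]).await?;
    // The body has been pruned, so the block query yields `null`/no extrinsics.
    let block: Value = client.request("chain_getBlock", rpc_params![hash]).await?;

    println!("hash = {hash}\nheader = {header}\nblock = {block}");
    Ok(())
}
```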
### Steps to reproduce
No response