brenzi opened 1 year ago
weight v2 is really just about storage on parachains. The root problem is that the parachain PoV needs to include all state accessed in a block. The bigger the PoV, the more effective work a relay chain validator has to do to process it. In that sense, storage translates into computation time, and that computation may take longer than the weight suggests, which can make parachains stall. The current weight model does not reflect this, hence the change to weight v2.
There should be a talk by Shawn Tabrizi somewhere that explains all this
maybe this helps: https://forum.polkadot.network/t/weight-v2-discussion-and-updates/227
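For future reference, here is my summary of the core change (not from the talk): weight v2 makes the PoV dimension explicit by turning `Weight` into a two-dimensional quantity. A simplified sketch of the type as it exists in `frame_support`/`sp_weights` (real field names; constructors and arithmetic omitted):

```rust
// Two-dimensional weight as introduced by weight v2 (simplified sketch).
pub struct Weight {
    /// Computation time on reference hardware, measured in picoseconds.
    ref_time: u64,
    /// Data this operation contributes to the proof of validity (PoV), in bytes.
    proof_size: u64,
}
```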
on the StorageMap topic: I couldn't find docs, but the idea is that you limit the depth of the hashmap to a number of levels that will suffice for its purpose
ok, my current understanding is: as long as we limit the key and value types of storage maps to bounded types, the size of the entire map is bounded and thus also the size of a PoV is bounded.
now my question remains: what is the practical use of this? because even with those constraints, a storage proof can still get infeasibly large.
also, what is the unit of `proof_size` in the new `Weight` type?
@brenzi i do not necessarily expect answers to those questions, i think it is good to have those thoughts for future reference. (of course, if you have the answers, they are appreciated :) )
I disagree: just because the key and value types of a map are bounded is not sufficient. You also need to limit how many entries the map can have. For a map, this translates to how many levels the map's underlying tree has. So, for each map, we will need to define an upper bound on the number of entries we will EVER need. That means we estimate the storage needed if every human uses Encointer (the world's population might grow further, too), check whether that weight exceeds the current weight limit, and tune down from there to a pragmatic level
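To make that concrete, here is a minimal sketch of one way to enforce such an upper bound (the names `Members`, `MemberCount`, `MaxMembers`, and `TooManyMembers` are hypothetical, not from our codebase): since a `StorageMap` itself has no hard limit, every insert path has to go through a guarded counter.

```rust
// Hypothetical pallet storage: an unbounded map plus an explicit entry counter.
#[pallet::storage]
pub type Members<T: Config> =
    StorageMap<_, Blake2_128Concat, T::AccountId, u32, OptionQuery>;

#[pallet::storage]
pub type MemberCount<T> = StorageValue<_, u32, ValueQuery>;

impl<T: Config> Pallet<T> {
    /// Insert a new entry only if the chosen upper bound is not yet reached.
    /// (Removal paths would decrement the counter accordingly.)
    fn try_add_member(who: T::AccountId, value: u32) -> DispatchResult {
        MemberCount::<T>::try_mutate(|count| -> DispatchResult {
            ensure!(*count < T::MaxMembers::get(), Error::<T>::TooManyMembers);
            *count = count.saturating_add(1);
            Members::<T>::insert(&who, value);
            Ok(())
        })
    }
}
```

Note that `frame_support` also provides a `CountedStorageMap` that maintains the counter automatically; the bound check itself would still have to live in the call path.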
ok that somehow also answers my question.
have you seen an example of an implementation of such a limitation of the size of a storage map in the substrate repo? i have so far only seen examples where they migrate `Vec` storage values to `BoundedVec` and similar.
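for reference, the pattern in those examples looks roughly like this (a sketch assuming a hypothetical `Voters` storage value and `MaxVoters` bound, loosely following the democracy migration linked in the references below):

```rust
// After the migration, the storage item is declared with a bounded type.
#[pallet::storage]
pub type Voters<T: Config> =
    StorageValue<_, BoundedVec<T::AccountId, T::MaxVoters>, ValueQuery>;

/// One-off migration: decode the old unbounded Vec, re-encode it bounded.
pub fn migrate_to_bounded<T: Config>() -> Weight {
    let res = Voters::<T>::translate::<Vec<T::AccountId>, _>(|maybe_old| {
        // If the old value exceeds the new bound, it is dropped here; a real
        // migration would decide deliberately whether to truncate or fail.
        maybe_old.and_then(|old| BoundedVec::try_from(old).ok())
    });
    debug_assert!(res.is_ok(), "old Voters value should decode as Vec");
    T::DbWeight::get().reads_writes(1, 1)
}
```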
also, what do you mean by weight limit? where is it defined?
I suggest you discuss both questions with Shawn Tabrizi (or better: ask on StackExchange). I don't know how far they are with examples and with migrating their own stuff
I guess we can close this issue, or is something missing? @pifragile
well, one thing is still open, namely limiting the depth of storage maps in general.
general questions:

- do we just need `BoundedVec` and `BoundedSlice`, or is there anything else to do?

overview of where we might need to take action:

- balances: -
- balances-tx-payment: -
- bazaar:
- ceremonies:
- communities (see the sketch after this list):
  - `Vec<Location>`
  - `Vec<CommunityIdentifier>`
- scheduler: -
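to illustrate what the communities items above could look like after bounding (a sketch only; `MaxLocationsPerCommunity` and `MaxCommunities` are hypothetical bounds that would be added to the pallet `Config`):

```rust
// Per-community geolocations: the value is bounded per entry, and the number
// of map entries is bounded via the community registry below.
#[pallet::storage]
pub type Locations<T: Config> = StorageMap<
    _,
    Blake2_128Concat,
    CommunityIdentifier,
    BoundedVec<Location, T::MaxLocationsPerCommunity>,
    ValueQuery,
>;

// Registry of all communities, replacing Vec<CommunityIdentifier>.
#[pallet::storage]
pub type CommunityIdentifiers<T: Config> =
    StorageValue<_, BoundedVec<CommunityIdentifier, T::MaxCommunities>, ValueQuery>;
```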
references:

- PR "Bound uses of Call": https://github.com/paritytech/substrate/pull/11649
- BoundedVec documentation: https://paritytech.github.io/substrate/master/frame_support/storage/bounded_vec/struct.BoundedVec.html
- Example of a storage migration to BoundedVec: https://github.com/paritytech/substrate/blob/9b43d8f891b1e53be2aa55687632ac6b17800cf8/frame/democracy/src/migrations.rs#L56
- Storage migration documentation: https://substrate-developer-hub.github.io/substrate-how-to-guides/docs/storage-migrations/nicks-migration/