encointer / pallets

all application-specific pallets for encointer
GNU General Public License v3.0

strictly use bounded types (for weight V2) #302

Open brenzi opened 1 year ago

pifragile commented 1 year ago

general questions:

overview of where we might need to take action:

balances: -

balances-tx-payment: -

bazaar:

ceremonies:

communities:

scheduler: -

references:

PR "Bound uses of Call": https://github.com/paritytech/substrate/pull/11649

BoundedVec documentation: https://paritytech.github.io/substrate/master/frame_support/storage/bounded_vec/struct.BoundedVec.html

Example of a storage migration to BoundedVec: https://github.com/paritytech/substrate/blob/9b43d8f891b1e53be2aa55687632ac6b17800cf8/frame/democracy/src/migrations.rs#L56

Storage migration documentation: https://substrate-developer-hub.github.io/substrate-how-to-guides/docs/storage-migrations/nicks-migration/
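To make the pattern from the references above concrete, here is a minimal sketch of moving a storage item from `Vec` to `BoundedVec`. The pallet, storage item, and bound names are hypothetical (not taken from the encointer pallets), and the exact pallet-macro syntax varies between Substrate versions:

```rust
use frame_support::pallet_prelude::*;

#[frame_support::pallet]
pub mod pallet {
    use super::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// Hypothetical upper bound on how many items we will ever store.
        #[pallet::constant]
        type MaxItems: Get<u32>;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    // Before (unbounded, no MaxEncodedLen, unbounded PoV contribution):
    // pub type Items<T: Config> = StorageValue<_, Vec<u8>, ValueQuery>;

    // After: the encoded length is capped by MaxItems, so the value's maximal
    // size (and its share of the storage proof) is known statically.
    #[pallet::storage]
    pub type Items<T: Config> =
        StorageValue<_, BoundedVec<u8, T::MaxItems>, ValueQuery>;
}
```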

brenzi commented 1 year ago

Weight v2 is really just about storage on parachains. The root problem is that the parachain PoV needs to include all state changes for a block. The bigger the PoV, the higher the effective weight for the relay chain validator to process it. In this way, storage translates into computation time, and that computation may take longer than the weight suggests, which can make parachains stall. This is not reflected in the current weight model, hence the change to weight v2.
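For reference, the Weight v2 type is two-dimensional. A small sketch of how it is constructed, using the current `sp_weights` API re-exported by `frame_support` (the numbers are made up for illustration):

```rust
use frame_support::weights::Weight;

// Weight v2 tracks two dimensions: `ref_time` (computation, measured in
// picoseconds on reference hardware) and `proof_size` (bytes the call may
// add to the PoV / storage proof).
fn example_weight() -> Weight {
    // e.g. 1_000_000 ps of execution and up to 4 KiB of storage proof
    Weight::from_parts(1_000_000, 4 * 1024)
}

fn total(a: Weight, b: Weight) -> Weight {
    // Weights add component-wise; a block is full as soon as either
    // dimension reaches its limit.
    a.saturating_add(b)
}
```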

There should be a talk by Shawn Tabrizi somewhere that explains all this

brenzi commented 1 year ago

maybe this helps: https://forum.polkadot.network/t/weight-v2-discussion-and-updates/227

brenzi commented 1 year ago

On the StorageMap topic: I couldn't find docs, but the idea is that you limit the depth of the hashmap to a number of levels that will suffice for its purpose.

pifragile commented 1 year ago

ok, my current understanding is: as long as we limit the key and value types of storage maps to bounded types, the size of the entire map is bounded and thus also the size of a PoV is bounded.

now my question remains: what is the practical use of this? because even with those constraints, a storage proof can still get infeasibly large.

also, what is the unit of proof_size in the new Weight type?

@brenzi i do not necessarily expect answers to those questions, i think it is good to have those thoughts for future reference. (of course, if you have the answers, they are appreciated :) )
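The per-entry half of that understanding corresponds to `MaxEncodedLen`: if both the key and value types of a map derive it, each map entry has a statically known maximal encoded size. A minimal sketch with a hypothetical value type:

```rust
use frame_support::pallet_prelude::*;
use frame_support::traits::ConstU32;

// Hypothetical value type for a StorageMap. Deriving MaxEncodedLen gives it
// a statically known maximal SCALE-encoded size, so any single entry
// contributes a bounded amount to the storage proof. How many entries the
// map can hold is a separate question.
#[derive(Encode, Decode, MaxEncodedLen, TypeInfo, Clone, PartialEq, Eq, Debug)]
pub struct LocationInfo {
    pub lat: i64,
    pub lon: i64,
    pub name: BoundedVec<u8, ConstU32<64>>,
}
```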

brenzi commented 1 year ago

I disagree: bounding the key and value types of a map alone is not sufficient. You also need to limit how many entries the map can have. For a map, this translates to how many levels the map's binary tree has. So, for each map, we will need to define an upper bound on the number of entries we will EVER need. That means estimating the storage needed if every human uses encointer (the world's population might grow further too), checking whether the weight then exceeds the current weight limit, and tuning down from there to a pragmatic level.
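There is no built-in knob for such an entry limit, but one way to enforce it (a sketch with hypothetical names, not code from these pallets) is a `CountedStorageMap` whose counter is checked before every insert:

```rust
use frame_support::pallet_prelude::*;

#[frame_support::pallet]
pub mod pallet {
    use super::*;

    #[pallet::config]
    pub trait Config: frame_system::Config {
        /// Hypothetical upper bound on how many entries the map may ever hold.
        #[pallet::constant]
        type MaxEntries: Get<u32>;
    }

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    // CountedStorageMap maintains an entry counter alongside the map,
    // so the current size can be checked cheaply before inserting.
    #[pallet::storage]
    pub type Entries<T: Config> =
        CountedStorageMap<_, Blake2_128Concat, u64, u32, OptionQuery>;

    #[pallet::error]
    pub enum Error<T> {
        TooManyEntries,
    }

    impl<T: Config> Pallet<T> {
        /// Insert only while the configured maximum has not been reached.
        pub fn try_insert(key: u64, value: u32) -> Result<(), Error<T>> {
            ensure!(
                Entries::<T>::count() < T::MaxEntries::get(),
                Error::<T>::TooManyEntries
            );
            Entries::<T>::insert(key, value);
            Ok(())
        }
    }
}
```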

pifragile commented 1 year ago

ok that somehow also answers my question.

have you seen an example of an implementation of such a limitation of the size of a storage map in the substrate repo? i have so far only seen examples where they migrate Vec storage values to BoundedVec and similar.

also, what do you mean by weight limit? where is it defined?
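For future reference: the limit in question is the block weight limit, which each runtime configures via `frame_system::limits::BlockWeights`. A sketch with illustrative numbers (not encointer's actual configuration):

```rust
use frame_support::weights::{constants::WEIGHT_REF_TIME_PER_SECOND, Weight};
use frame_system::limits::BlockWeights;
use sp_runtime::Perbill;

// Illustrative limits: 0.5 s of ref_time and 5 MiB of proof size per block,
// with 75% of a block available to normal (non-operational) extrinsics.
pub const MAXIMUM_BLOCK_WEIGHT: Weight =
    Weight::from_parts(WEIGHT_REF_TIME_PER_SECOND.saturating_div(2), 5 * 1024 * 1024);
pub const NORMAL_DISPATCH_RATIO: Perbill = Perbill::from_percent(75);

// The resulting BlockWeights value is plugged into the runtime's
// `frame_system::Config`.
pub fn block_weights() -> BlockWeights {
    BlockWeights::with_sensible_defaults(MAXIMUM_BLOCK_WEIGHT, NORMAL_DISPATCH_RATIO)
}
```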

brenzi commented 1 year ago

I suggest you discuss both questions with Shawn Tabrizi (or better: ask on StackExchange). I don't know how far they are with examples and migrating their own stuff.

pifragile commented 1 year ago

posted here

clangenb commented 1 year ago

I guess we can close this issue, or is something missing? @pifragile

pifragile commented 1 year ago

well, one thing is still open, namely limiting the depth of storage maps in general.