polkadot-fellows / RFCs

Proposals for change to standards administered by the Fellowship.
https://polkadot-fellows.github.io/RFCs/
Creative Commons Zero v1.0 Universal

Lowering Deposit Requirements on Polkadot and Kusama Asset Hub #45

Closed poppyseedDev closed 6 months ago

poppyseedDev commented 11 months ago

RFC-00xx-assethub proposes lowering the deposit requirements for NFT collection creation on Polkadot and Kusama Asset Hub to make it more accessible for artists. It draws upon the discussion in the Polkadot Forum. While there are concerns about state bloat, the RFC also considers future governance models that could dynamically adjust deposit requirements.

burdges commented 11 months ago

In theory, all prices should adjust downwards somewhat as more cores become available, due to ongoing optimizations. We'll need Kusama to waste half its cores on gluttons, of course. We'll spend some cores on system parachains on Polkadot. Other factors like async backing shift the price structure in the opposite direction, etc. All this has some floor defined by the number and specs of collators, which again depends upon how system parachains wind up being structured. Elastic scaling relieves that slightly. And all values increase like the log of state size too.

All this says: It's tricky to automate this too much, but it's still nice not to have the conversation too often.

It'd also be nice if there were a for-profit parachain which did specific things like NFTs more cheaply than AssetHub.

cc @jonasW3F

vikiival commented 11 months ago

Hello @ggwpez, as I/we minted the majority of NFTs on AssetHub, I can provide some statistics:

  1. Amount of collections created
select count(metadata) from collection_entity;
-- 40
  2. Amount of NFTs created
select count(metadata) from nft_entity;
-- 2313
  3. Average and max length of metadata bytes stored on chain
select avg(length(metadata)), max(length(metadata)) from nft_entity where metadata is not null;
-- 54.2
  4. Median of metadata bytes stored on chain
SELECT PERCENTILE_DISC(0.5) WITHIN GROUP(ORDER BY length(metadata)) FROM nft_entity;
-- 46
  5. Distribution of metadata lengths
select length(metadata), count(*) from nft_entity where metadata is not null group by length(metadata) order by length(metadata) desc;
-- length, count
-- 74,1
-- 73,3
-- 71,651
-- 66,86
-- 59,33
-- 58,1
-- 53,66
-- 46,1472

Regarding on-chain attributes, the current utilisation is 0.

It would be nice to see these values in relation to what is actually being stored on-chain. Then we can compare them to proven spam-resistant storage prices.

Let's do a checkup on utilization:

  1. What is the median of NFTs minted in one collection?
SELECT PERCENTILE_DISC(0.5) WITHIN GROUP(ORDER BY supply) FROM collection_entity where supply > 0 and metadata is not null;
-- 4
  2. What is the average and max of NFTs minted in a collection?
select avg(supply), max(supply) from collection_entity where metadata is not null;
-- 58.65, 1468

⚠️ Note that all numbers include both the uniques and nfts pallets.

What can we do to prevent spam on the AssetHub?

There are a few ways to achieve better utilization for AssetHub NFTs. First, we can migrate, deprecate, and remove the uniques pallet from the AssetHub chain. This will dramatically decrease the potential use of on-chain space.

Second, and most important, is to decrease the effective number of items from u32 to u16.

In Rust, u32 maps to assert_eq!(u32::MAX, 4_294_967_295); it is highly unlikely that anyone will mint 4 billion NFTs in a single collection.

Third, a collection could potentially be force-destroyed (not implemented) by Open-Gov.

TL;DR: There are at least three effective ways to prevent waste of storage, prevent spam, and make effective use of NFTs on AssetHub.

ggwpez commented 11 months ago

Thanks for the data @vikiival, I normally have no introspection into how it is used. This was very helpful!

I checked how many bytes it can take at most to create a collection, and it comes in at around 130 bytes. So the 10 DOT seems definitely excessive. When I plug the 130 bytes into the deposit formula, it comes out at around 0.2-0.4 DOT. The ItemDeposit seems to just be a constant as well, without justification.
Maybe it would be possible to reduce some of these constants to lower the normal storage requirement.

The other constants like ItemDeposit etc. are already calculated with the 1/100 storage price of Polkadot...

What do you think about the values @joepetrowski, @jsidorenko and @rphmeier?

First we can migrate, deprecate and remove uniques pallet from the AssetHub chain

Yea we should do this. @jsidorenko do you have this planned?

jsidorenko commented 11 months ago

Re migration, I'm waiting for the new runtime release to stabilise and will then release the web NFTs migrator.

As for the deposits, those enormous deposits don't make any sense and prevent us from having users. Last year we had the same issue on Kusama, and I decreased the NFT-related deposits by 10x. As a result, we now have 200+ collections created in just 6 months after the release.

joepetrowski commented 11 months ago

Second most important is to decrease effective amount of items from u32 to u16.

This seems like an arbitrary imposition. The economics should provide reasonable bounds on usage, not data overflows.

I checked how many bytes it can take at most to create a collection, and it comes in at around 130 byte. So the 10 DOT seems definitely excessive. When i plug in the 130 byte into the deposit formula then it comes out around 0.2-0.4 DOT. For the ItemDeposit it seems to just be a constant as well, without justification. Maybe it would be possible to reduce some of these constants to lower the normal storage requirement.

The other constants like itemDeposit etc are already calculated with the 1/100 storage price of Polkadot...

It looks like those values are just manually set. I'm in favor of generally using the deposit() function and calculating the storage footprint of items. This then treats pallets "equally". Maybe there are cases where you want to increase or decrease that, to increase or decrease the barrier to using some component, but those should be the exception. This function already accounts for its being in a parachain with a deposit lower by a factor of 100.
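For context, the `deposit()` convention discussed here can be sketched as follows. This is a minimal illustration assuming the usual Polkadot runtime constants (1 DOT = 10^10 planck, 20 DOT per storage item, 100 millicents per byte, and the 1/100 system-parachain discount); the exact constant names and values in the deployed runtime may differ.

```rust
// 1 DOT expressed in planck (assumption: standard Polkadot denomination).
const UNITS: u128 = 10_000_000_000;
const DOLLARS: u128 = UNITS;
const CENTS: u128 = DOLLARS / 100;
const MILLICENTS: u128 = CENTS / 1_000;

// Relay-chain storage price: 20 DOT per item plus 100 millicents per byte
// (assumed values for illustration).
const fn relay_deposit(items: u128, bytes: u128) -> u128 {
    items * 20 * DOLLARS + bytes * 100 * MILLICENTS
}

// System parachains such as Asset Hub charge 1/100 of the relay price.
const fn asset_hub_deposit(items: u128, bytes: u128) -> u128 {
    relay_deposit(items, bytes) / 100
}

fn main() {
    // A ~130-byte collection entry, stored as one item: this lands near the
    // 0.2 DOT figure mentioned above, far below the flat 10 DOT constant.
    let planck = asset_hub_deposit(1, 130);
    let dot = planck as f64 / UNITS as f64;
    println!("collection deposit ≈ {dot:.4} DOT");
}
```

Under these assumptions, `asset_hub_deposit(1, 130)` works out to about 0.2 DOT, which is consistent with the 0.2-0.4 DOT estimate quoted earlier in the thread.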

poppyseedDev commented 9 months ago

We changed the PR quite a bit according to the requests.

@joepetrowski what are your thoughts?

vikiival commented 8 months ago

@joepetrowski as per commit 3ad68dd73658c8649c3f8d635f987346297b0078

can we keep the deposit as 0.01 DOT per item deposit?

joepetrowski commented 8 months ago

can we keep the deposit as 0.01 DOT per item deposit?

It was 0.005 before, you want it back to 0.01?

vikiival commented 8 months ago

It was 0.005 before, you want it back to 0.01?

Ah, I must have misread the git diff

poppyseedDev commented 8 months ago

Thank you so much @joepetrowski for your changes!

What are the next steps for this RFC?

vikiival commented 7 months ago

Proposal looks amazing 🚀 big kudos to @joepetrowski, @poppyseedDev and others ❤️ ^-^

poppyseedDev commented 7 months ago

Hey @ggwpez could you tell us what are the next steps with this PR?

ggwpez commented 7 months ago

Hey ggwpez could you tell us what are the next steps with this PR?

We will soon propose this for on-chain voting 🙏

joepetrowski commented 7 months ago

/rfc propose

paritytech-rfc-bot[bot] commented 7 months ago

Hey @joepetrowski, here is a link you can use to create the referendum aiming to approve this RFC number 0045.

Instructions:

1. Open the [link](https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot-collectives-rpc.polkadot.io#/extrinsics/decode/0x3d003e02015901000049015246435f415050524f564528303034352c31623666653638383234643139653033323935386361333432333434366564353866613932303036303563353630393432343230343030656534303862356134290100000000).
2. Switch to the `Submission` tab.
3. Adjust the transaction if needed (for example, the proposal Origin).
4. Submit the transaction.

It is based on commit hash c7edfc33ccb57e881557a5cbbeed478e0d4efc1a.

The proposed remark text is: RFC_APPROVE(0045,1b6fe68824d19e032958ca3423446ed58fa9200605c560942420400ee408b5a4).

github-actions[bot] commented 7 months ago

Voting for this referendum is ongoing.

Vote for it here

joepetrowski commented 6 months ago

/rfc process

paritytech-rfc-bot[bot] commented 6 months ago

Please provide a block hash where the referendum confirmation event is to be found. For example:

/rfc process 0x39fbc57d047c71f553aa42824599a7686aea5c9aab4111f6b836d35d3d058162
Instructions to find the block hash: here is one way to find the corresponding block hash.

1. Open the referendum on Subsquare.
2. Switch to the `Timeline` tab.
3. Go to the details of the `Confirmed` event.
4. Go to the details of the block containing that event.
5. Here you can find the block hash.

joepetrowski commented 6 months ago

/rfc process 0x3ac549f24ecf451eb4d45f16f4fff694b0a103f8006ff9ab7f16c484bcfb535c

paritytech-rfc-bot[bot] commented 6 months ago

The on-chain referendum has approved the RFC.

anaelleltd commented 3 weeks ago

@poppyseedDev, do you have an update on the status of this RFC? Is somebody working on its implementation? Thanks.

joepetrowski commented 3 weeks ago

@anaelleltd it's done and deployed https://github.com/polkadot-fellows/runtimes/pull/237