lightninglabs / taproot-assets

A layer-1 daemon for the Taproot Assets Protocol specification, written in Go (golang)
MIT License
439 stars 106 forks

[feature]: Change in meta-data size #910

Open weltitob opened 1 month ago

weltitob commented 1 month ago

I would love to have the ability to specify metadata size limits individually in the litd.conf file.

I assume it's not a technical issue but rather a decision the team made to not bloat litd nodes irresponsibly.

Maybe universes can decide on their meta field sizes individually at some point.

MegalithicBTC commented 1 month ago

Already the raw "gossip" data that we've measured from "Universes" is on the order of 0.65GB, as shown here. So I wonder: is there some concern about limiting a huge increase to this amount, for example a user, possibly by mistake, uploading thousands of assets with huge file sizes?

On the other hand, maybe the node operator will be expected to only add to his/her federation those universes that the user trusts not to be overwhelmingly large in stored data?

I don't actually have an opinion here, just an interesting idea to think about.

weltitob commented 1 month ago

> Already the raw "gossip" data that we've measured from "Universes" is on the order of 0.65GB, as shown here. So I wonder: is there some concern about limiting a huge increase to this amount, for example a user, possibly by mistake, uploading thousands of assets with huge file sizes?
>
> On the other hand, maybe the node operator will be expected to only add to his/her federation those universes that the user trusts not to be overwhelmingly large in stored data?
>
> I don't actually have an opinion here, just an interesting idea to think about.

Well, that's the point: if the protocol really has any chance of succeeding, the free market should be able to handle that. People should only connect to those federations they trust, while it should be up to each federation how large they want to make their files. It is a free protocol, right?

weltitob commented 1 month ago

A mechanism to see how large a federation's file size limit is would then be good, for example.

jharveyb commented 1 month ago

> Already the raw "gossip" data that we've measured from "Universes" is on the order of 0.65GB, as shown here.

This is a neat metric to have! Given how many assets exist, that's pretty great.

> So I wonder: is there some concern about limiting a huge increase to this amount, for example a user, possibly by mistake, uploading thousands of assets with huge file sizes?

Yes, this is exactly why the limit exists. The exact limit of 1 MiB is sort of arbitrary, but it helps limit the maximum size of an issuance proof. That proof also includes quite a few merkle tree proofs and other info, so it's helpful to have it as small as possible to make it easy for others to run universes.
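The check described above can be sketched in a few lines of Go. This is an illustration only: the constant and function names are made up here, and the real enforcement lives inside the taproot-assets codebase.

```go
package main

import (
	"errors"
	"fmt"
)

// metaMaxSizeBytes mirrors the 1 MiB cap discussed above. The name is
// illustrative, not the actual constant from the taproot-assets code.
const metaMaxSizeBytes = 1024 * 1024

var errMetaTooLarge = errors.New("asset metadata exceeds size limit")

// checkMetaSize rejects metadata payloads larger than the cap before they
// can be committed to an issuance proof, bounding the proof's total size.
func checkMetaSize(meta []byte) error {
	if len(meta) > metaMaxSizeBytes {
		return fmt.Errorf("%w: %d > %d bytes",
			errMetaTooLarge, len(meta), metaMaxSizeBytes)
	}
	return nil
}

func main() {
	small := make([]byte, 512)
	big := make([]byte, 2*1024*1024)
	fmt.Println(checkMetaSize(small)) // <nil>
	fmt.Println(checkMetaSize(big))   // error: over the cap
}
```

Because the metadata is embedded in the issuance proof alongside the merkle proofs, capping it here is what keeps the overall proof small for universe operators.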

> On the other hand, maybe the node operator will be expected to only add to his/her federation those universes that the user trusts not to be overwhelmingly large in stored data?

That sounds reasonable for an individual user, but a wallet provider or explorer operator would likely want to track as many assets as possible.

I think the limit could be increased in the future, but I'd want to see a very strong argument for that. Our recommendation right now is that metadata above 1 MiB could be stored on some other highly-available system like IPFS, torrent, etc., and you can add the reference or lookup info as the Taproot Asset metadata. From what I've seen of metadata for other projects, 1 MiB should cover many use cases, and lossless compression has improved a lot recently.

A universe could enforce its own (lower) limits, but that wouldn't be easy to do right now.

If you wanted a higher limit, for now you'd have to fork the client I suppose.

weltitob commented 1 month ago

> > Already the raw "gossip" data that we've measured from "Universes" is on the order of 0.65GB, as shown here.
>
> This is a neat metric to have! Given how many assets exist, that's pretty great.
>
> > So I wonder: is there some concern about limiting a huge increase to this amount, for example a user, possibly by mistake, uploading thousands of assets with huge file sizes?
>
> Yes, this is exactly why the limit exists. The exact limit of 1 MiB is sort of arbitrary, but it helps limit the maximum size of an issuance proof. That proof also includes quite a few merkle tree proofs and other info, so it's helpful to have it as small as possible to make it easy for others to run universes.
>
> > On the other hand, maybe the node operator will be expected to only add to his/her federation those universes that the user trusts not to be overwhelmingly large in stored data?
>
> That sounds reasonable for an individual user, but a wallet provider or explorer operator would likely want to track as many assets as possible.
>
> I think the limit could be increased in the future, but I'd want to see a very strong argument for that. Our recommendation right now is that metadata above 1 MiB could be stored on some other highly-available system like IPFS, torrent, etc., and you can add the reference or lookup info as the Taproot Asset metadata. From what I've seen of metadata for other projects, 1 MiB should cover many use cases, and lossless compression has improved a lot recently.
>
> A universe could enforce its own (lower) limits, but that wouldn't be easy to do right now.
>
> If you wanted a higher limit, for now you'd have to fork the client I suppose.

The point is exactly that: anyone could just fork the client and increase the limit individually. Isn't that the bigger risk? There is no mechanism to protect other federations from that at the moment. Each federation should be able to set a file size limit it accepts, as should explorers and the like. The issue I see here is that this cap is held in place only by a team decision, not by any real enforcement. I see the issue with bloating servers, but that's up to solution builders to fix.

Putting in a manual cap is fine while this is in early usage, just to make people aware. In the end, though, the free market should decide how to handle file sizes, and I feel a solution where a federation can adjust the values on its own is much better than forcing everyone onto one value by simply not having the option to change it. It's probably a change in a single line of code, or how is this limit enforced otherwise? I would actually suggest an approach where each federation, by default, only allows the creation of, and only accepts, metadata up to a certain file size limit, because that way it's not a change in a single line of code but a decision made by node operators.
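The per-federation idea above can be sketched as a local policy check: each node operator configures the largest metadata payload it will store and re-gossip, rather than the cap being fixed in code for everyone. The `federationPolicy` type and its method are hypothetical, not an existing litd or taproot-assets option.

```go
package main

import "fmt"

// federationPolicy is a hypothetical per-operator setting: each node
// decides the largest metadata payload it will accept and re-gossip.
type federationPolicy struct {
	MaxMetaSizeBytes int
}

// acceptProof decides whether this node stores and forwards an issuance
// proof whose metadata has the given size. A forked client with a higher
// limit can still mint, but nodes with stricter policies simply ignore it.
func (p federationPolicy) acceptProof(metaSize int) bool {
	return metaSize <= p.MaxMetaSizeBytes
}

func main() {
	strict := federationPolicy{MaxMetaSizeBytes: 1 << 20}  // 1 MiB default
	lenient := federationPolicy{MaxMetaSizeBytes: 8 << 20} // operator opted in to 8 MiB

	fmt.Println(strict.acceptProof(2 << 20))  // false: over this node's cap
	fmt.Println(lenient.acceptProof(2 << 20)) // true: this operator allows it
}
```

With a check like this at the sync boundary, the cap stops being a single global constant and becomes each operator's own admission policy, which is the enforcement mechanism the comment above argues for.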

weltitob commented 1 month ago

It's all about building a more open and free future, right? I just feel like Lightning Labs should always provide the most free and most customizable solution for node operators 🤷‍♂️ But maybe that's just me. Someone will fork it in the end anyway and put the huge file sizes out there, and then people will have to write their own code to deal with that individually.

weltitob commented 1 month ago

Anyway, I agree with everything you said and what you're concerned about. It's just that I think the approach with federation limits would enforce it better and at the same time open up opportunities whenever needed.