kitty-kad opened this issue 2 years ago
Since policies (token-policy-v1) make collections very easy, I suggest an interface functioning at the level of policies to identify collections and offer related functionality.
Note that in marmalade, tokens are not just 1-offs. So hardcoding things around "token owners" is problematic: a single token can have a maximum supply of, say, 5.0 with 0 precision, meaning there are five fungible units of fractional ownership, thus having up to 5 owners. An ERC-721-style NFT is therefore supply 1.0, precision 0. You can see however that "enumerate all owners" rapidly becomes out of control if you have tokens with, say, 100 fractions.
But policies exist to simplify the situation by supporting specific use cases. Remember that marmalade.ledger.create-token specifies a policy irrevocably for a token. Thus N tokens sharing the same policy implicitly form a collection. (Note that create-token is not currently specified in any interface, which needs to be corrected.)
Thus, the simplest collection implementation records all tokens added (captured via enforce-init). It could then support operations to list token IDs, etc. (Note this does not need select, as the ids can be kept in a list in a single DB record.)
However, it's easy to have policies that manage multiple collections. Now operations simply take a collection-id to distinguish.
Finally, a collection can be only for 1-1s by enforcing supply of 1.0 and precision of 0, and therefore be enumerable: every token in a given collection has a distinct owner.
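A minimal sketch of what such a policy could look like (illustrative only: it assumes enforce-init receives a token-info object carrying id and precision as in token-policy-v1, hard-codes a single collection id, and elides all guard/mint logic):

```pact
(defschema collection
  tokens:[string])
(deftable collections:{collection})

;; Single hard-coded collection for the sketch; a multi-collection policy
;; would take the collection id as data instead.
(defconst COLLECTION-ID "my-collection")

(defun enforce-init:bool (token:object{token-info})
  ;; 1-1 collections: enforce 0 precision here; a real policy would also
  ;; cap supply at 1.0 (e.g. in its mint hook).
  (enforce (= 0 (at "precision" token)) "precision must be 0")
  ;; Append the new token id to the single collection record.
  (with-default-read collections COLLECTION-ID
    { "tokens": [] }
    { "tokens" := ts }
    (write collections COLLECTION-ID { "tokens": (+ ts [(at "id" token)]) }))
  true)

(defun get-collection-tokens:[string] ()
  (at "tokens" (read collections COLLECTION-ID)))
```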
Thanks for the response!
> suggest an interface functioning at the level of policies
Having this interface be implemented on policies makes sense for marmalade-based NFTs. However, since there could be multiple collections, would it simplify things to have it implemented at the same level as poly-fungible-v2? (Though the collection-id approach you mentioned later would address this.)
> You can see however that "enumerate all owners" rapidly becomes out of control
In this case, the proposed interface isn't to enumerate all owners; the only owner-specific functions are get-all-on-sale-for-owner and get-all-owned-by-user. I'm not sure I understand why this would be an issue even with fractional ownership.
> operations simply take a collection-id to distinguish
Would the main change as a result of this be to add collection-id as an input to the methods in the interface specified above?
Great to see some activity here!
Providing clarity on these issues is essential to move towards developer adoption.
"Marmalade tightly couples interfaces and implementation details which would either prevent innovation or cause divergence of marketplace interfaces." This is a real problem @sirlensalot ! Divergence of marketplace interfaces will naturally occur if the standard is incomplete or lacking clarity. Do we have any idea how to prevent this divergence , when builders will naturally build?
I wholeheartedly support community engagement and continuing to build Marmalade in public while addressing its shortcomings. There is a reason this was posted December 2nd and there are less than a handful of live projects utilizing the standard. There has been too much propaganda and lauding Marmalade it in public posts. There has been not enough work improving clarity and addressing the problems limiting development velocity and user adoption.
It is very encouraging to see competent developers like Kitty assisting the Kadena team in these tough markets!
Hey, beautiful discussion. To your answer:
> Thus, the simplest collection implementation records all tokens added (captured via enforce-init). It could then support operations to list token IDs, etc. (Note this does not need select as the ids can be kept in a list in a single DB record.)
> However, it's easy to have policies that manage multiple collections. Now operations simply take a collection-id to distinguish.
But pushing organization to the policy means it's harder for newbies to implement correctly. I think adding collection organization to the ledger will make it easier for people to create NFTs right off the bat.
It can be very simple: optionally add registering under a collection, and have a single table that returns the token list. NFTs not registered under a collection will be global; you can use the collection id in the token key, similar to what babena did to index tokens. (And of course this would enforce supply and precision on unique tokens created under a collection.)
(defschema collection collection-id:string tokens:[string] max-unique-supply:integer) ;; ...other fields
Grouping by collection is fundamental NFT logic and I don't see why the ledger can't inherently support it; policies can still support custom groupings/collections.
(Sorry to hijack the discussion a little, but this could also make market-placeable and other interfaces easier to implement over collections.)
Also, a policy could benefit from having enforce-withdraw in case it wants to track open sales per user.
To track open sales for buyers/NFTs: currently, the SALE escrow handles the timeout through capabilities, by checking whether we're within the timeout, but it does not pass it to the policy. Having it at the policy level lets the policy track open sales per NFT or user more easily as well. Could we possibly have the policy's enforce-offer take the timeout (passed from offer on the ledger)?
@Gh0stts0hg thanks for jumping into the discussion, good to see other people get involved :)
> don't see why ledger can't inherently support it
One thing with having it at the ledger level is that it could cause performance issues. In general, there were concerns raised on my other KIP about running select queries: https://github.com/kadena-io/KIPs/issues/24. For a collection this shouldn't be too much of an issue, but for a ledger potentially storing millions of NFTs it could be tricky.
> could we possibly have policy enforce-offer take timeout
I think this could be good, but potentially a different KIP?
> Would the main change as a result of this be to add collection-id as an input to the methods in the interface specified above?
@kitty-kad yes maybe? The operations above don't have any arguments specified currently so it's hard to answer that question.
As for implementation, you can imagine a (deftable collections:{collection}) that is keyed on collection-id, where each row has a list of the tokens managed in that collection, e.g. (defschema collection tokens:[string]).
> However since there could be multiple collections - would it simplify things by having it implemented at the same level as poly-fungible-v2? (@kitty-kad)
> But pushing organization to policy means it's harder for newbies to implement correctly (@Gh0stts0hg)
I don't agree that policies are newbie-hostile, if anything they reduce the amount of copy-paste required to launch a token. That's kind of the point: copy-pasting an entire ledger just to customize a few lines is madness. Just because Ethereum does it this way doesn't mean it's right or simple or good.
With the currently-deployed marmalade.ledger on chain 8 you can implement a POC of @kitty-kad's design as a policy on mainnet or testnet with zero risk to other tokens on the ledger.
> Also policy could benefit from having enforce-withdraw incase it wants to track open sales per user
> could we possibly have policy enforce-offer take timeout (Passed from offer on ledger)
Both great suggestions for a token-policy-v2 :)
> yet another interface
This doesn't bother me, especially at the token policy level.
This proposal is exactly what we've been talking about today: the lack of an interface that should provide these ^ useful methods, enabling marketplaces and wallets to read data in a predictable way. Take XWallet's UI/UX, or any other compatible wallet, for example: adding a new coin means typing the coin contract reference, then .get-balance() is called. It should be the same for NFTs: enter the contract reference, then .get-owned-tokens should return a list of all owned NFTs.
If the computational effort is too big (>150k gas), depending on table sizes, then it could be offloaded off-chain, by storing some of the data in traditional databases, but providing this information upon request in the form of Decentralized Identifiers / Verifiable Credentials.
I've been researching how to write and submit a new DID Method spec to W3C to integrate Kadena with DID/VC - I'll also open an issue for this so anyone could contribute
@sirlensalot agree with most of what you said
One question
> each row has a list of the tokens managed in that collection, e.g. (defschema collection tokens:[string])
Since each row is a collection, I'm wondering if the gas costs to write could be too high? I'm assuming that adding a new NFT to the collection would require pulling the list of all the ids in the collection and appending one more entry to it.
Not sure how gas expensive this would be.
@nzpopa thanks for sharing your thoughts and good to get more data-points on whether this is an issue or not :)
> If the computational effort is too big
I don't think there would be much difference if DIDs are used over regular NFT IDs. The data returned by the APIs is mainly IDs of NFTs, so whether these are DIDs or normal IDs doesn't matter.
> Since each row is a collection, I'm wondering if the gas costs to write could be too high?
It's a valid concern. The cost will escalate with every token added, so this is only a solution for small collections.
This is why any kind of get-all-XXX is problematic though, and it leads back to the idea that queries are best supported via an event db like chainweb-data.
Note that this is also what Ethereum dapps that need to scale have to do as well, using The Graph or etherscan. Blockchains and smart contracts are great for transactions/OLTP, but are simply not suitable for data warehousing/OLAP.
> adding a new coin means typing the coin contract reference, then .get-balance() is called. It should be the same for NFTs: enter the contract reference, then .get-owned-tokens should return a list of all owned NFTs.
get-balance retrieves a single decimal value, which scales well. get-owned-tokens will necessarily increase in cost as the number of tokens grows, so it does not scale past small collections, as discussed in the other comment.
Could we lower computation by indexing differently? Data is cheaper than computation here, I would assume, given we're on Kadena.
> Since each row is a collection, I'm wondering if the gas costs to write could be too high?
If we adjust @sirlensalot's implementation here from:
> As for implementation, you can imagine a deftable collections{collection} that is keyed on collection-id, and each row has a list of the tokens managed in that collection, e.g. (defschema collection tokens:[string])
and instead index the tokens in a collection: store the total token count, (defschema collection total-unique-tokens:integer), and have a new table, coll-token-index, where each row holds the token-id for that index, keyed as <collection-id>.<index>, (defschema collection-token-index token-id:string). Instead of "get all" you can "get by index".
This requires managing the new state, but that is a very short computation compared to a select on a large ledger, and really we only store an extra id & index. It is still a duplicated ledger though, and you need to maintain its state.
The same concept can apply to "NFT by account". For example, if you stored an account table at the policy/collection level, (defschema account total-tokens-owned:integer), then stored account.index -> token-id in a separate table, (defschema account-tokenid-by-index token-id:string), you could then get the total number of NFTs an account owns in the collection, and an NFT by owner by index.
For example, acc1 has 3 NFTs, so:
account-ledger: acc1 -> {owned-tokens: 3, guard: keygojingle}
token-id-by-index: acc1:0 -> {token-id: "t1"}, acc1:1 -> {token-id: "t2"}, acc1:2 -> {token-id: "t4"}
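A rough Pact sketch of this indexed layout (table, schema, and function names are hypothetical; guard checks are elided), showing that recording a new token costs a constant number of reads/writes:

```pact
(defschema collection
  total-unique-tokens:integer)
(deftable collections:{collection})

(defschema collection-token-index
  token-id:string)
(deftable coll-token-index:{collection-token-index})

(defun add-token-to-collection (collection-id:string token-id:string)
  "Record TOKEN-ID under COLLECTION-ID at the next free index (constant writes)."
  (with-default-read collections collection-id
    { "total-unique-tokens": 0 }
    { "total-unique-tokens" := count }
    ;; row key format: <collection-id>.<index>
    (insert coll-token-index (format "{}.{}" [collection-id count])
      { "token-id": token-id })
    (write collections collection-id
      { "total-unique-tokens": (+ count 1) })))
```

The account-level tables (account-ledger / token-id-by-index above) would be maintained the same way on mint, transfer, and buy.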
@Gh0stts0hg in your scenario, adding (writing) is O(1), which will be very gas efficient.
Reading will be O(N), since it can be a scan over keys filtered to those containing the collection id. However, this is less of an issue since we only expect reads to be done via local endpoints and never during a call in which data is written to the blockchain.
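For illustration only (reusing the hypothetical coll-token-index table from the sketch above), such a read could be run in a local query rather than on-chain:

```pact
;; Intended for local (read-only) queries, not for transactions that write.
(defun get-collection-token-ids:[string] (collection-id:string)
  "Scan the index table keys and return token ids recorded for COLLECTION-ID."
  (let ((prefix (+ collection-id ".")))
    (map
      (lambda (k) (at "token-id" (read coll-token-index k)))
      (filter (lambda (k) (contains prefix k)) (keys coll-token-index)))))
```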
At the end of the day, each collection could implement their own way of doing things if they like, but since there are a few viable options, I think it shouldn't be an issue that blocks this interface request.
What are the best next steps? (assuming no one else has anything else to add)
@sirlensalot should I fully define the methods in the interface (inputs, outputs, commenting etc) ?
@Gh0stts0hg @kitty-kad it took me a minute but I finally saw the "get-at-index" light.
It's the right answer, and has the benefit of being familiar to Ethereum devs.
IMO it would look like the following, with more Pact-y naming:
(defun get-collection-token-count:integer (collection-id:string))
(defun get-collection-token:string (collection-id:string index:integer)) ;; returns token-id
(defun get-account-token-count:integer (account:string))
(defun get-account-token:string (account:string index:integer)) ;; returns token-id
One question would be if get-account-token* should take a collection ID or span collections ...
EDIT: BTW the big advantage of the indexed approach is it does NOT offer any kind of get-all-XXX functions. This is because a user can trivially iterate:
(map (get-collection-token collection-id) (enumerate 0 (- (get-collection-token-count collection-id) 1)))
This is WAY better than having an interface function get-all-XXX because it moves the control over gas consumption to the usage site. If your policy needs to iterate, it can use the map code on the blockchain; whereas if your front end wants to query, it can use the map code in a Pact transaction.
EDIT 2: note this is only for one-off NFT collections, which is why this makes sense at the policy level. As soon as you have fractional ownership you'd at a minimum have to include balances in the return values. This is of course possible, but the headache of maintaining the index becomes far greater. Whereas a policy that enforces max-supply of 1 and precision of 0 can do this style of collection management easily.
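As a sketch only, here is how a policy could implement the two collection-level reads against the hypothetical index tables from the earlier sketch (illustrative names, not a deployed contract):

```pact
(defun get-collection-token-count:integer (collection-id:string)
  "Number of tokens recorded for COLLECTION-ID (0 if the collection is unknown)."
  (with-default-read collections collection-id
    { "total-unique-tokens": 0 }
    { "total-unique-tokens" := count }
    count))

(defun get-collection-token:string (collection-id:string index:integer)
  "Token id stored at INDEX for COLLECTION-ID; fails if that index was never written."
  (at "token-id" (read coll-token-index (format "{}.{}" [collection-id index]))))
```

The caller-side enumeration then looks like:

```pact
(let ((n (get-collection-token-count "my-collection")))
  (if (> n 0)
      (map (get-collection-token "my-collection") (enumerate 0 (- n 1)))
      []))
```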
Thanks for your answer, and loved the enumeration logic explanation!
> One question would be if get-account-token* should take a collection ID or span collections ...
Here, if you mean having global indexes (for an account, index 0 is from Col.A and index 1 is from Col.B), one concern is that if collection organization is at the policy level it makes this hard to do, so I think it should be collection-locked.
> EDIT 2: note this is only for one-off NFT collections which is why this makes sense at the policy level. As soon as you have fractional ownership you'd at a minimum have to include balances in the return values
@sirlensalot not sure I fully understand why this is true. We can still return the token index only, and then fractional ownership is another level: call the ledger with acc/token-id to get the fractional amount, thereby keeping indexing separate. Example of fractional ownership below; essentially "owned-tokens" counts any token you can claim fractional ownership of, rather than full ownership.
acc1 has 1 NFT it owns 0.2% of, and acc2 has the same NFT and owns 0.8% of it:
account-ledger:
acc1 -> {owned-tokens: 1, guard: keygojingle}
acc2 -> {owned-tokens: 1, guard: wherermykeys}
token-id-by-index:
acc1:0 -> {token-id: "t1"}
acc2:0 -> {token-id: "t1"}
Then just call a ledger getter like (defun token-ownership-for-account:decimal (account:string token-id:string)), which returns the fractional ownership of the NFT (and optionally supply/precision), directly from the ledger keyed by acc:token.
Edit: You can introduce a flag in the index table to say either (a) that you own the full token, or (b) that it is single supply and 0 precision (a flag so that you know whether you need to query the amount). But this is just an optimization.
Edit 2: @sirlensalot, is this also relevant to implement for NFT-by-owner? My concern is that duplicating data for collections and for ownership makes storage needs large. Thoughts?
> we can still return the token index only, and then fractional ownership is another level, call the ledger with acc/tokenid to get fractional amount
That's true. As long as it's documented in the interface that's probably sufficient.
> my concern is duplication of data for collections and for ownership make storage needs large
It's more writes but only by constant factors (ie two/three writes instead of one on insert, etc). So while it takes incrementally more gas, it scales -- it doesn't get more expensive as time goes on. Pact is designed around making key-value access fast and easy so the indexing is a natural fit. The only thing that's non-Pact-y is how low-level it is (accessing just IDs instead of rich objects) but I think that's actually more flexible/powerful.
very good points & thank you very much for discussion!
i will create policy based on both (directly in implementation, nothing on interface level for now) and share here if anyone wants to use.
EDIT : I do want to identify the right place to put collection org, there are huge benefits letting ledger orchestrate it, even through a different interface. Given 1) works for all supplies 2) no perf impact, infact it ENFORCES the efficient approach to collections, after the POC through policy implementation, i will follow up on how it can be done better.
> then stored account.index -> token id in a separate table
In this case, if someone sells the NFT at index 0, it'll require putting a different NFT they own at index 0. The simplest implementation I can think of would be to just take the NFT at the last index and put it there.
@Gh0stts0hg just confirming whether this was intentional, and whether you had any other ways to address this.
> One question would be if get-account-token* should take a collection ID or span collections ...
Having the collection id be explicit simplifies a lot of interactions because
> In this case, if someone sells the NFT at index 0, it'll require putting a different NFT they own at index 0. The simplest implementation I can think of would be to just take the NFT at the last index and put it there
That's a good point. You end up with garbage over time.
What do Ethereum tokens do about this? This seems pretty concerning.
@kitty-kad yep that is correct, it falls under:
> Requires managing this new state
Like you said, shift from last; this is probably the easiest way to deal with it. Yeah, you will have "garbage data", but this is not in memory, it's storage, and I'd argue it's OK because it's indexed and can be re-used (a buyer buys their nth NFT and can re-use a previous garbage index n through an update).
For the collection id, so you are suggesting keying like:
<collection-id>.<account>.<index>
> can be re-used (buyer guys their nth nft he can re-use previous garbage index of n through update).
Mutability makes this index degrade to O(n) on write. This means lists are asymptotically equivalent ... it seems like for a mutable index we need a set data type in Pact after all, but for now I'd go back to lists.
There's no magic bullet for open-ended custody support; it's kind of what ledgers are for, and ledgers need indexers. So I'm just not sure it makes sense to have get-all-tokens-for-owner for anything except small populations.
Note that for an immutable (or append-only) index, this scheme can work. But even here, if your "collection" is in the 1000s, this is just saving on write, to make reads enormous.
So, this kind of interface I think is only for small collections in any case. If so, then just use lists.
Hey @sirlensalot, not fully sure I followed this:
> Mutability makes this index degrade to O(n) on write. This means lists are asymptotically equivalent ... it seems like for a mutable index we need a set data type in Pact after all, but for now I'd go back to lists.
When you say mutability degrades it to O(n), how come?
For writing: there should still be no gaps in the index, we're just shifting from last if someone sells, so always write to max + 1.
Or do you mean because you have to "check for existence of a key" first?
Also, how does it make reads enormous? Are you referring to "enumerate all" being enormous?
> for writing: there should still be no gaps in the index, we're just shifting from last if someone sells
Ah got it. If index X sells, write index(LAST) there, and decrement LAST. That's great ... once you know the index :)
> or do you mean because you have to "check for existence of a key" first?
Somebody does, right? If you want to update ownership for a token you have to search the index. This means one of two things:
1. The code has to iterate over the index as part of the update/write operation (needs select, is O(n)).
2. The update API has to push the logic for locating the index to the front end and accept it as part of the update. Works, but crappy UX because the tx can fail/consume all gas.
> you referring to "enumerate all" being enormous?
This is more the general issue of ledgers. Indexers are the right answer for scale in any case. The algorithm above will be a nightmare for collections in the 1000s, whereas an indexer (chainweb-data etc.) is the right shape for any size ledger.
Looking a little more at ERC721Enumerable, I found this: https://docs.openzeppelin.com/contracts/4.x/api/token/erc721
> IERC721Enumerable: Optional extension that allows enumerating the tokens on chain, often not included since it requires large gas overhead.
I think the consensus is this is not a scalable approach. However happy to see evidence to the contrary.
However, that's why a policy-based interface makes some sense here, with the implication that this is for small collections where the gas required is manageable. If so I still think the list-based approach is way easier and better UX.
Something I would like to bring up, that might go a bit beyond what you are currently discussing, is the ability for a smart contract to define how to retrieve the URI of an NFT: (defun uri:string (id:string) ...).
The current ERC721 standard allows a great deal of flexibility for the developer, letting them have dynamic NFTs: you can define and change NFT state, for example, or increase defined stats. This is a must-have for blockchain video games.
Currently, marmalade doesn't support this, but I believe it is a valuable piece of the ERC721 standard, and if marmalade doesn't have it, that would lead to the increased divergence discussed above.
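Purely as an illustrative sketch (not marmalade's API): a policy could keep mutable per-token state in its own table and expose a getter, which is one way to approximate dynamic NFTs today. Authorization is elided here.

```pact
(defschema token-state
  uri:string
  level:integer)   ;; example mutable game stat
(deftable token-states:{token-state})

(defun set-token-state (token-id:string uri:string level:integer)
  "Upsert mutable display/game state for TOKEN-ID (authorization checks elided)."
  (write token-states token-id { "uri": uri, "level": level }))

(defun get-token-uri:string (token-id:string)
  (at "uri" (read token-states token-id)))
```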
Just saw the earlier reply; I will look it over in detail, but for now will assume the token-list approach is the way to go. Thanks again!
> letting them have dynamic NFTs: You can define and change NFT state for example, or increase the defined stats.
You can define state in your manifest; it's actually a much better organization of the data IMO. As for changes, that would need manifest upgradability. I believe the marmalade folks are working on timestamping changes; in the meantime my team is also doing something similar until they do (I will open a separate issue, to keep this thread relevant).
Happy to keep get-all-tokens-for-owner optional - it makes sense based on the conversation above.
Also, another marmalade-related concern came up while implementing a marketplace: how to handle capabilities for royalties easily?
Is the recommended best practice to (coin.transfer buyer module-account price)? This way only one capability needs to be granted. If not, then we'll need to extend the above interface to return a list of fees / capabilities.
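For illustration (a REPL-style sketch with hypothetical account names), the single-capability idea is that the buyer scopes one coin.TRANSFER capability to the marketplace's module account, which then pays royalties out of its own account:

```pact
;; Buyer signs the transaction with one scoped capability:
(env-sigs
  [{ "key": "buyer-key"
   , "caps": [(coin.TRANSFER "buyer" "marketplace-module-account" 100.0)] }])

;; The marketplace contract pulls the full price once ...
(coin.transfer "buyer" "marketplace-module-account" 100.0)
;; ... and then pays out royalties from its own account, authorized by its
;; module guard rather than by additional buyer capabilities.
```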
Continued discussion about royalties here https://github.com/kadena-io/KIPs/issues/27
This proposal is for a new marketplace-able interface for NFTs. It is largely to make it easier for projects to develop marketplace experiences around NFTs.
Current Problems
Non-Goals
Design Considerations
Interface
This is the recommended interface for NFT projects to implement.
Benefits
Downsides
Disclaimers
For brevity and speed of discussion, event emissions have been ignored, but they can / should be discussed if the proposal proceeds.