Support filtering the metadata to only contain information for specific pallets #12370

Open jsdw opened 2 years ago

jsdw commented 2 years ago

It's not uncommon for users to want metadata for only a few specific pallets. For instance, the cargo-contract and staking-miner tools both use Subxt to generate an interface, but each only cares about a few specific pallets and doesn't need information about the rest. Additionally, tools like Sidecar implement the functionality to strip metadata down to specific pallets in JS land.

One approach would be to implement this stripping of metadata on the client side in tools like Subxt. However, it's non-trivial to recurse through and filter/re-map type information in the already-generated metadata (and we'd need to add an API in scale-info to do this).
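
To make the difficulty concrete, here's a minimal sketch assuming frame-metadata's V14 types (keep_pallets is a hypothetical helper, and the hard part is deliberately left as a comment):

use frame_metadata::v14::RuntimeMetadataV14;

fn keep_pallets(mut metadata: RuntimeMetadataV14, keep: &[&str]) -> RuntimeMetadataV14 {
    // The easy half: drop the pallets we don't care about.
    metadata.pallets.retain(|p| keep.contains(&p.name.as_str()));

    // The hard half, left unimplemented here: walk every type ID still
    // reachable from the remaining pallets, build a pruned PortableRegistry
    // containing only those types, and re-map every type ID to its new
    // position in that registry. scale-info has no public API for this today.
    metadata
}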

However, it turns out to be fairly trivial to produce the stripped-down metadata in the node runtime. Some advantages:

I've opened the PR https://github.com/paritytech/substrate/pull/12369 to explore this approach. To use it, make an RPC request like the following to a node compiled against that PR:

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "state_getMetadata",
    "params": [["Balances", "System"]]
}

If no params are provided, the call behaves as it does now and returns the complete metadata bundle, so from the RPC API perspective it is not a breaking change.
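
For what it's worth, here is a rough sketch of exercising this from Rust with jsonrpsee (any JSON-RPC client would do; the node URL, the default HTTP RPC port, and a recent jsonrpsee API are assumptions, and the returned value is a hex-encoded SCALE blob that would still need decoding):

use jsonrpsee::core::client::ClientT;
use jsonrpsee::http_client::HttpClientBuilder;
use jsonrpsee::rpc_params;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Node URL is an assumption (local dev node, default HTTP RPC port).
    let client = HttpClientBuilder::default().build("http://localhost:9933")?;

    // With params: metadata stripped to just the named pallets.
    let stripped: String = client
        .request("state_getMetadata", rpc_params![vec!["Balances", "System"]])
        .await?;

    // Without params: the complete metadata bundle, exactly as today.
    let full: String = client.request("state_getMetadata", rpc_params![]).await?;

    // Both come back as hex-encoded SCALE bytes; the stripped one is smaller.
    println!("stripped: {} hex chars, full: {} hex chars", stripped.len(), full.len());
    Ok(())
}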

The question is: is this a good idea? Are there any disadvantages that mean we'd rather not have it?

bkchr commented 2 years ago

I'm not convinced by this. Why is it harder to strip the metadata on the client side? Why do we even need to strip? Is there any performance reason? Is the decoding taking that much time?

Especially in the light of: https://github.com/paritytech/polkadot-sdk/issues/291 https://github.com/paritytech/substrate/issues/10057

I don't see any reason why we should do this. If the client side really has problems, I would like to know what kind of problems.

jsdw commented 2 years ago

Why do we even need to strip?

I haven't measured the decode time, but that wasn't the issue for me (although in some contexts perhaps it is important).

From the perspective of tools like Subxt, which store/download metadata to compile an interface with, I think my reasons boil down to:

I don't think we need to do any of this, but I do think that the above has some benefit for users.

I can see the value of pre-generated metadata and being able to hash and sign to verify, so this suggestion does conflict with those. If we want a single metadata bundle then this is obviously a no-go.

bkchr commented 2 years ago
  • Smaller metadata takes less space to store and transmit. E.g., we added stripping of metadata in txwrapper because others ran into this:

Whatever passes the data to the signing device could just strip it down?

I can see the value of pre-generated metadata and being able to hash and sign to verify, so this suggestion does conflict with those. If we want a single metadata bundle then this is obviously a no-go.

Hashing the metadata will be very important IMO and that will for sure require the entire metadata. We could maybe put parts into a trie and use that to make it smaller. That could improve things for offline signer devices, because we wouldn't need to send them the entire metadata.

jsdw commented 2 years ago

Hashing the metadata will be very important IMO and that will for sure require the entire metadata. We could maybe put parts into a trie and use that to make it smaller. That could improve things for offline signer devices, because we wouldn't need to send them the entire metadata.

Mmm, with hashing I could imagine we could hash each pallet separately and then hash those hashes, and require that final hash in the tx payload. Whether a client obtains the full metadata, or just certain pallets plus the remaining hashes, I think it could then be confident that the metadata it cares about is no different from what the node expects.
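
Roughly, a sketch of what I mean (all assumptions on my part: blake2_256 from sp-core as the hash, name-sorted (name, SCALE bytes) pairs as the input, and metadata_root as a made-up helper):

use sp_core::blake2_256;

/// `pallets` are (pallet name, SCALE-encoded pallet metadata) pairs,
/// sorted by name so every client derives the same ordering.
fn metadata_root(pallets: &[(String, Vec<u8>)]) -> [u8; 32] {
    let mut leaf_hashes = Vec::new();
    for (name, encoded) in pallets {
        // Hash the name together with the body so pallets can't be swapped.
        let mut preimage = name.as_bytes().to_vec();
        preimage.extend_from_slice(encoded);
        leaf_hashes.extend_from_slice(&blake2_256(&preimage));
    }
    // The final hash, committed to in the tx payload, is a hash over the
    // concatenated per-pallet hashes.
    blake2_256(&leaf_hashes)
}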

The complexity there would be how to hash each pallet individually (you can't just hash some bytes any more, I think); we do some work in this area in Subxt to validate that the generated code is still in sync with the node being talked to, but of course we don't need to provide the same level of security guarantees and can ignore some things.

Whatever passes the data to the signing device could just strip it down?

With the above sort of thing in place, I guess a signing device could be passed only the metadata it cares about and still include a valid hash in the signed tx it spits out.

If the hashing was just a simple hash over the bytes of the full metadata, a signer that received stripped-down metadata would have no real way to know what metadata hash to add to the payload, I guess. So whether stripping happens on the node or the client, maybe the hashing scheme needs to take it into account (or just rule out the possibility of stripping metadata at any stage)?

bkchr commented 2 years ago

So whether stripping happens on the node or the client, maybe the hashing scheme needs to take it into account (or just rule out the possibility of stripping metadata at any stage)?

If we have some sort of "metadata merkle proof" we could indeed pass only the relevant parts plus the proof to the offline signer. Otherwise we would need to send the entire blob.
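
A minimal sketch of what verification could look like on the signer side, matching the flat hash-of-hashes sketch above (all names hypothetical; a real Merkle tree would shrink the proof from n-1 sibling hashes to log n):

use sp_core::blake2_256;

fn verify_pallet(
    root: [u8; 32],                 // the hash the signer already trusts
    pallet_index: usize,            // our pallet's position in the sorted list
    name: &str,
    encoded: &[u8],                 // the one pallet's metadata we were sent
    other_leaf_hashes: &[[u8; 32]], // every other pallet's leaf hash, in order
) -> bool {
    // Recompute our pallet's leaf hash exactly as metadata_root() does.
    let mut preimage = name.as_bytes().to_vec();
    preimage.extend_from_slice(encoded);
    let our_leaf = blake2_256(&preimage);

    // Splice it back into position and recompute the final hash.
    let mut leaves: Vec<[u8; 32]> = other_leaf_hashes.to_vec();
    leaves.insert(pallet_index, our_leaf);
    blake2_256(&leaves.concat()) == root
}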

burdges commented 2 years ago

Should nodes choose in advance which pallets they prepare extracted metadata for? Or would they only do this upon request?

bkchr commented 2 years ago

We currently don't do this and I'm still not convinced that the node/runtime should do this.

tomaka commented 2 years ago

I'm very much against doing what the OP says.

We don't want to introduce multiple flavors of metadata. This makes everything more complex. Our number one priority in everything we design nowadays should be to reduce the complexity as much as possible.

If you manipulate subsets of the metadata, please for the sake of our sanity do not call it "metadata" and do not make it possible to confuse it with the metadata.

jsdw commented 2 years ago

I don't think that this introduces multiple flavours of anything personally; it just acknowledges that on the whole we can see metadata as being per-pallet and not one giant blob. It's just taking an API call that returns "all of the items" and adding the ability to state which items you want back.

I do very much agree with keeping things simple though, and since it's been highlighted that this change would add complexity to things like hashing or pre-compiled metadata (which prefer a single big blob), I very much understand and appreciate that objection.

I would just say that if we ever want the ability to split metadata at any stage (to pass only the required information to signers, for instance), then we'll need to work out things like signing metadata with that in mind anyway, and then the added complexity of this change in itself becomes very small (and in fact it forces future changes to take it into account).

Happy to abandon this idea though if that's the consensus and cross that bridge if and when we get to it.

bkchr commented 2 years ago

I would just say that if we ever want the ability to split metadata at any stage

This can just be done wherever you need to send this data to the signer. It just makes no sense to do this in the runtime. Imagine you have some kind of wallet. With your approach, it would request the relevant chunk of metadata every time you want to send a transaction and then pass that to the signer? You can just fetch the entire metadata once and split it in your local application.

burdges commented 2 years ago

At worst, Merkle tree hashing incurs 2x the cost of hashing whole blobs, assuming sensible hashing. In fact, blake3 is already a Merkle tree internally, which enables many nice things: https://www.publish0x.com/rhyzom/blake3-a-fast-and-efficient-parallelizable-merkle-tree-hash-xgzkye

tomaka commented 2 years ago

This can just be done wherever you need to send this data to the signer. It just makes no sense to do this in the runtime.

I just want to add that, to me, the direction we should take, and the direction I took with the new JSON-RPC API, is "batteries not included".

In other words, the new JSON-RPC API does not try to give you exactly what you want. It gives you a set of low-level primitives, and on top of that you need a non-trivial piece of code to manipulate the data that you receive.

It is not realistic to add or modify JSON-RPC functions to suit every single use case exactly. This will never work. The JSON-RPC API is a clear API boundary and needs to be as stable and precise as possible.

To me, the reason we've done a "batteries included" JSON-RPC API in the past is that people writing code on top of this API don't want to learn how "blockchain stuff" works. They want information X, and since they don't know how to calculate X from what the API returns, they ask other people to add a new JSON-RPC function that returns exactly X. To me, that's not a good reason.