Agoric / agoric-sdk

monorepo for the Agoric JavaScript smart contract platform
Apache License 2.0

IPFS integration and examples in Zoe Contracts #2487

Open katelynsills opened 3 years ago

katelynsills commented 3 years ago

What is the Problem Being Solved?

Dapp developers are used to storing data in IPFS. We do not currently have examples of Zoe contracts that make use of IPFS, but it would be a good area to explore, on a spectrum of small tasks to large tasks:

warner commented 3 years ago

One component of this is purely computational. As @katelynsills and I discussed, it might be convenient to port the IPFS hashing functions into a vat-importable JS library, which could be used to verify Merkle proofs that a given piece of data fits into a given IPFS parent-node identifier. Or to build such a parent node (and learn its hash) from a collection of leaves. This would not involve the kernel or any new authorities: the hashing algorithm is just code.

The "off-chain oracle" component is intriguing. It might overlap with #46, the "large string device" aka "blobcap" plan. To keep large data out of messages, I've been thinking about an out-of-band mechanism for introducing the data to the kernel: an external call does a `set` in a hash-keyed table, an external message can then reference the hash and turn it into a "blobcap", and subsequent messages strictly use the blobcap reference. The hash-to-blobcap step would only succeed if the external `set` had succeeded, so all other vats could safely use the blobcap without risk of it being un-dereferenceable. We could use the blobcap to name the contract bundle that should be loaded, or perhaps to perform other indexed lookups of data without pulling all of the data into a vat.

On-chain, this would require that all validators are able to get the large data. It would take some work (and careful analysis of the availability properties), but it'd be neat to use an IPFS hash to name the data, and have all validators be IPFS peers to distribute it. The tricky part would be when a validator sees a transaction that references some data, but that validator is unable to fetch the data (no IPFS peers are willing to share it). In general, all transaction data must be included in the transaction, specifically to avoid this situation.