Closed wesbiggs closed 2 months ago
We cannot expect consumers to try all possible options for CIDs, which can have various bases, hashes, and codecs, not to mention chunk sizes.
Minimally we should allow for the defaults currently generated by `ipfs add --cid-version 1`, which is base32 sha2-256 dag-pb for chunked files > 256*1024 bytes, and base32 sha2-256 raw for files that fit in a single 256kb chunk.
I suggest we standardize on:

- `base32` or `base58btc` (though it is trivial with the `multiformats` library to support others)
- `sha2-256` only (this is the only option implemented in the `@ipld/dag-pb` javascript library at present)
- `dag-pb` or `raw` (with `raw` only useful when file size is <= 256kb)

Edited to add: We also have to make assumptions about chunk size. The simplest approach is to try with 256kb chunks, but it's worth noting that this is merely a default in common IPFS utilities. Do these "impedance mismatches" make the whole idea of treating CIDs as hashes untenable? IMO enough of the content files DSNP will be addressing will fit in one chunk to make it useful for comparisons, even if we can't categorically expect to be able to prove that a given byte array is equivalent to a given CID (i.e. there is a chance of false negatives). However, the pragmatic goal is to be able to use CIDs in announcements and have them be directly useful for locating content on IPFS. To accomplish this, it seems reasonable to mandate a particular set of parameters for CID creation.
the `url` field? When would the `url` be used?
Thanks for the notes. Agree with the commentary on `url` usage. Note that nothing in the proposed spec requires the use of IPFS. The presence of a CID does not imply that a document with that CID is or will be available on IPFS.
> There must always be a way to verify the content retrieved via the url or via the network
Per my musings earlier in the thread, "always" is tricky with CIDs. We can say that if the CID was generated using common conventions for parameters, we should be able to verify it. But it is possible for the original CID creator to chunk a file in unexpected ways that would make regenerating the same CID virtually impossible without knowledge of the exact chunking parameters. We need to decide if this possible ambiguity is acceptable.
> Minimally we should allow for the defaults currently generated by ipfs add --cid-version 1, which is base32 sha2-256 dag-pb for chunked files > 256*1024 bytes, and base32 sha2-256 raw for files that fit in a single 256kb chunk.
I strongly recommend using a hash function that uses an internal merkle tree, rather than hashing the root created by some other chunking then merkle-encoding processes. For example, what do you think of using blake3 only regardless of the file contents or size?
My understanding is that anything encoded with `dag-pb` is going to be lacking in the ability to deterministically encode a file to a checksum. This is because `dag-pb` encodes something that is already in the IPLD data model, i.e. it is an encoding for DAGs. But we're not talking about DAGs. We're talking about files. Thinking in terms of an encoding for DAGs (as `dag-pb` is) begs the question: "How do you encode the file as a DAG?", or ideally, "How do you deterministically encode the file as a DAG?". Just saying "`dag-pb` it" is not enough to deterministically produce a checksum unless you explain how to deterministically encode a bytestream as a DAG, and `dag-pb` doesn't say how to do that, nor does any other IPFS spec. blake3 does say how to do that: unlike `dag-pb`, blake3 explains how to deterministically chunk the input bytestream into the leaves of a merkle tree. The `dag-pb` spec does not.
IMO if you avoid dag-pb and ipld entirely and just specify to blake3 the file bytes, it will avoid a lot of ambiguity and get a lot better deduplication, because instead of any given bytestream having a CID for each possible way of chunking it into the leaves of a DAG, there will be a clear deterministic process to derive a single CID for each bytestream ('just blake3 hash it').
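A toy sketch makes the chunking sensitivity concrete. This uses standard-library sha2-256 as a stand-in and is deliberately not the real dag-pb/UnixFS encoding; the point is only that any root derived from chunk digests moves when the chunk size moves, while a hash over the whole bytestream does not.

```python
import hashlib

def naive_chunk_root(data: bytes, chunk_size: int) -> str:
    """Toy stand-in for a chunked merkle root: hash the concatenation
    of per-chunk digests. NOT the actual dag-pb/UnixFS encoding."""
    digests = b"".join(
        hashlib.sha256(data[i:i + chunk_size]).digest()
        for i in range(0, len(data), chunk_size)
    )
    return hashlib.sha256(digests).hexdigest()

data = bytes(range(256)) * 4096  # 1 MiB of sample data

# The whole-stream hash is independent of any chunking decision...
whole_stream = hashlib.sha256(data).hexdigest()

# ...but chunk-derived roots differ when the chunk size differs.
root_256k = naive_chunk_root(data, 256 * 1024)
root_128k = naive_chunk_root(data, 128 * 1024)
assert root_256k != root_128k
```

A verifier who doesn't know the original chunk size cannot regenerate the chunk-derived root; a whole-stream hash (or blake3, which fixes its own chunking) has no such ambiguity.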
Oh, I like that a lot. I wasn't aware blake3 raw as a CID would handle its own chunking. I will run some tests.
There was previously some concern about mainstream library support for blake3, but that may have improved since I last discussed it.
Do you know of any other projects with similar requirements moving this direction?
> Do you know of any other projects with similar requirements moving this direction?
Here are some projects using exclusively blake3 for hashing
Here is an IETF-formatted specification https://github.com/BLAKE3-team/BLAKE3-IETF/blob/main/rfc-b3/draft-aumasson-blake3-00.txt
It's happening rather soon, but I'll encourage discussion of this on our monthly DSNP spec Zoom call tomorrow July 3 at 9am Pacific/12pm Eastern. Link and previous meeting recordings here: https://vimeo.com/showcase/dsnp-public-spec-meeting, if you'd like to join.
I think we can distill this feedback into a couple of proposals:
I'll extrapolate two underlying themes: (1) that using dag-pb (and quite possibly dag-cbor) starts to pull in and require a lot of dependencies from the IPLD world beyond raw hash functionality (as I found when experimenting with https://github.com/LibertyDSNP/dsnp-hash-util); (2) that there is a lot of love for Blake3 in the interdwebs (and why not). :-)
To consider: If DSNP-announced content is constrained to content less than 256Kb, dag issues can be avoided, and simple hashes can be transformed to CIDs (and vice versa) easily. This is a pretty reasonable constraint to impose on Activity Content Note and Profile content, though less so on media attachments (which are, however, not directly announced and can incorporate multiple hashes into the format).
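For single-chunk (`raw`) content the transformation really is mechanical. A minimal sketch using only the standard library (sha2-256 shown; 0x12, 0x20, and 0x55 are the registered single-byte codes for sha2-256, a 32-byte digest length, and the `raw` codec, all of which fit in a single varint byte at these values):

```python
import base64
import hashlib

def sha256_multihash(content: bytes) -> bytes:
    # Multihash layout: <hash code (0x12 = sha2-256)><digest length><digest>
    digest = hashlib.sha256(content).digest()
    return bytes([0x12, len(digest)]) + digest

def multihash_to_cid_v1_raw(multihash: bytes) -> str:
    # CIDv1 layout: <version (0x01)><codec (0x55 = raw)><multihash>,
    # then multibase base32: 'b' prefix + lowercase, unpadded RFC 4648 base32.
    cid_bytes = bytes([0x01, 0x55]) + multihash
    b32 = base64.b32encode(cid_bytes).decode("ascii").lower().rstrip("=")
    return "b" + b32

def cid_v1_raw_to_multihash(cid: str) -> bytes:
    # Reverse the transform: strip the multibase prefix, decode,
    # then drop the version and codec bytes.
    assert cid[0] == "b", "expected multibase base32"
    padded = cid[1:].upper() + "=" * (-len(cid[1:]) % 8)
    cid_bytes = base64.b32decode(padded)
    assert cid_bytes[:2] == bytes([0x01, 0x55])
    return cid_bytes[2:]

mh = sha256_multihash(b"hello world")
cid = multihash_to_cid_v1_raw(mh)
assert cid_v1_raw_to_multihash(cid) == mh
```

Note the round trip only holds for the `raw` codec; a `dag-pb` CID's multihash covers the DAG root block, not the file bytes.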
Also worth noting that we still have the fallback of Update Announcements, which can be used to notify the network to change the anchor for a post from (say) a hash to a CID, or from one hash algorithm to another.
A synthesized proposal (possibly one that makes no one happy, but worth a shot):
Corollary question -- do we need to support blake2b, or do we act fast and go for blake3 instead?
Personally I don’t know of a strong reason to support BLAKE-2b in addition to BLAKE3 + SHA
Video here: https://vimeo.com/showcase/11090945/video/980924182 (discussion begins at about 19:54)
- Overview of current DIP pull request diff
- Discussion of proposals noted above in this thread
- Discussion of usage trends of blake3 in the IPFS community
- Discussion of DSNP design goals and needs
- Discussion of "mainnet" IPFS's underlying 2MB maximum chunk size (though the default produced with many tools is 256KB)
- `raw` CIDs and multihashes can be transformed into each other without difficulty and hosted on IPFS
- `dag-pb` and `dag-cbor`: the hash used in the CID becomes the root of a tree of hashes for the chunks

Additional points raised:
As a summary, we are trying to:

1. Provide a cryptographically strong permanent content integrity hash for a resource
2. Enable a resource to be found on a distributed file system (DFS) without binding the content to a specific DNS name or IP address
3. Avoid endorsing (at the protocol level) a particular DFS
4. Allow DSNP-implementing systems to define which DFSs they use; in particular, enable Frequency to continue to work with IPFS "mainnet", but with an eye toward forward compatibility with next-generation DFSs
5. Keep URLs from being written to the consensus system (in particular, for profileResources and Parquet batches)
From the discussions I have come to a somewhat reluctant conclusion:

- `dag-pb` brings a number of otherwise unneeded dependencies into DSNP-supporting clients
- A `blake3` `raw` CIDv1 adds no descriptive value over a `blake3` multihash, other than signaling that the file is available on a DFS that uses CIDs (via the `raw` indicator).

So, I am going to update my proposal to:

- Remove `blake2b-256` and add `blake3` in Supported Hashing Algorithms (sha2-256 will be retained)

Have you considered digging into https://DID-DHT.com ? It's based on the BitTorrent Kademlia Mainline DHT and PKARR Public Key Addressable Resource Records (sovereign TLDs) https://github.com/Pubky/pkarr. DID-DHT has an indexed type registry https://did-dht.com/registry/#indexed-types whose entries are not only private and discoverable but also based on open linked data vocabularies that can be discovered by SPARQL RDF query tools like Comunica https://bit.ly/json-ld-query and other JavaScript libraries https://rdf.js.org. When a DID-DHT resolves to a DWN you can host and control your own data, and DWNs can message one another as well. You can run a DWN server anywhere and it can sync your data. https://codesandbox.io/p/github/TBD54566975/dwn-server/main Since DIDs are part of the Verifiable Credential spec, they can also be verified and could in turn create a verifiable WebTorrent network much like PeerTube. There are a ton of features packed into this little DID spec.
Here is the GitHub repository, I'd be interested in your feedback here. https://github.com/TBD54566975/did-dht
Hi @mfosterio that looks interesting. I'm coming in cold so apologies for any misunderstandings. Am I right to think of PKARR as providing similar functionality for Mainline DHT as the IPNS over DNS approach does for IPFS? So then did:dht is a way of wrapping PKARR in DIDs so standard resolvers can be built?
How do you think this could intersect with this DSNP issue/question which at this point is how can we use a hash (or perhaps some other non-human-readable identifier) to find a particular resource across various distributed file systems?
Yes, its goal is to be a peer-to-peer distributed addressing system on the BitTorrent Mainline DHT. TBD is working on a web ecosystem where it will plug right into a DIF DWN (Decentralized Web Node) as a resource that can be spun up anywhere and synced.
https://identity.foundation/decentralized-web-node/spec/
https://github.com/TBD54566975/dwn-server
https://codesandbox.io/p/github/TBD54566975/dwn-server/main
They are aiming to build a developer framework that allows anyone to implement any protocol to solve a problem in the DWeb space. It's worth looking into to see if it helps any of your technical goals.
A lot of your previous technical discussion around CIDs and IPLD is handled via DAG-CBOR and discussed in the messaging section of the DWN Spec https://identity.foundation/decentralized-web-node/spec/#messages, but DWNs can message one another on a did:dht, did:key, did:web, or any other did:method. This project implementation is evolving at TBD, so some of it may change with implementation feedback.
The goal is to address resources in several different ways so that if one resource goes down, the others retain the messaging. A DWN references all the methods it can be messaged by, and its permissions.
A property of a DID is that you want its address bindings to be retained in a peer-to-peer system. Mainline DHT records are not retained indefinitely, so they are currently working on a retention challenge https://did-dht.com/#term:retention-challenges to maintain the retention set https://did-dht.com/#term:retention-set and republish the resource address bindings https://did-dht.com/#republishing-data.
One thing to note, is that it's not a good idea to publicly broadcast an IPFS CID or "PIN" it to be distributed on peers you don't know on a social network. If a user posts something by accident and wants to remove it, the IPFS network poses challenges to reel unwanted content back in.
A good quality of the Mainline DHT and DWNs is that records have a limited duration in the DHT (approximately 2 hours). These are better scenarios for dereferencing and removing content in distributed systems. You can republish the references you are sure you want to keep https://did-dht.com/#republishing-data, back them up to IPFS, and reference them in your DWN with permissions for you to read only; this scenario will allow you to rebuild the content as long as you have the CID references to the IPFS files backed up locally. These are options I'm outlining in my LDUX concept.
Good qualities about Mainline DHT are the following: It has a proven track record of 15 years. It is the biggest DHT in existence with an estimated 10 million servers. It retains data for multiple hours at no cost. It has been implemented in most languages and is stable.
Hi @mfosterio, thanks for the additional info.
In most cases we do want semi-permanent pinning with DSNP. However, it's definitely worth looking at mutable addressing options (IPNS variants or BEP46); for something like a user profile document, they provide a great way to reduce consensus system transactions. I'm hesitant to suggest that the spec mandates specific distributed file systems that must be supported in order to work with DSNP, though. Similarly, key management at social media user scale becomes a challenge. Overall though I think it would be beneficial to understand if DWNs can play a role in the architecture.
I updated @dsnp/hash-util to align with the current proposed change (sha256 and blake3 hashes, base32 only) and minimized its dependencies so it doesn't require IPLD or indeed any multiformats libraries. (The code is fairly trivial, in fact, but it's nice to have a reference implementation.)
I think we need to decide whether a hash-only solution (no URLs) is sufficient when looking at `profileResources` and batch announcements. (As it stands in the current proposal, individual announcements still have URLs and content hashes.) There is concern that a mere hash does not provide enough information about where a client can go to find the content, and possibly even a CID assumed to be on IPFS does not do this in a way that meets expectations around latency.
The options as I see it:
For `profileResources`, we would need to add a provider identifier (and consider options for non-delegated usage); batch announcements retain sender info. Other ideas?
- sha2-256 or blake3 multihash as byte array (change blake2b→blake3); does not use CIDv1 (no change); serialized in `0x{hex}` form when forming a DSNP Content URI (no change)
- sha2-256 and/or blake3 multihash, encoded using base32 only; other multihashes allowed but not required (change blake2b→blake3, narrow to base32)
- CIDv1 using `raw` codec and sha2-256 or blake3 hash, serialized as base32; maximum file size per resource type, with Activity Content Profile max of 256Kb
- Specification wording proposal adds clarity that URL is a suggested but not the only possible location
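To make the two multihash serializations in the proposal concrete, a small sketch (standard library only; sha2-256 shown, with `b"example content"` as placeholder data):

```python
import base64
import hashlib

digest = hashlib.sha256(b"example content").digest()
multihash = bytes([0x12, len(digest)]) + digest  # 0x12 = sha2-256 code

# Announcement form: multihash bytes serialized as 0x{hex}
hex_form = "0x" + multihash.hex()

# Interchange form: multibase base32 string
# ('b' prefix + lowercase, unpadded RFC 4648 base32)
base32_form = "b" + base64.b32encode(multihash).decode("ascii").lower().rstrip("=")

# Every sha2-256 multihash begins with the code and length bytes 0x12 0x20.
assert hex_form.startswith("0x1220")
```

The two forms carry identical bytes; only the textual encoding differs.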
However, to do so, we should consider naming the address field within the Avro object more generically, and specify the format alongside the resource type enum so that consuming applications can correctly parse not only CIDs but other formats that may be specified in the future.
Alternatively, require the value of the field to be a URI, not some other binary whose semantics must be known a priori.
Then you can use CIDs with an `ipfs:` or `dweb:` URI scheme, but can also use the URI scheme to evolve to other URI schemes like RFC 6920, `did:`, etc.
Abstract
We propose treating the `url` field in DSNP announcements as a hint only, and allowing consumers to treat as valid any content that matches the `hash` field. We normalize usage of hashes throughout the specification to use the base32-encoded multihash string format.
Motivation
DSNP announcements reference off-chain content using a URL and hash. Current instructions for content consumers are to retrieve a content file using its URL, and then verify its hash. These instructions tie the content to a particular retrieval location via HTTPS. If that location (hostname or IP address) is unavailable, temporarily or permanently, there is no sanctioned means of retrieving the content.
We want to make DSNP more robust by making it possible for consumers to find the relevant data on other filesystems where data can be cached, replicated and distributed. This allows service providers to optimize for higher content availability and (potentially) lower latency, and crucially, for users to self-host their own content as an alternative or backup to hosting by external service providers.
Specification Pull Request
280
Rationale
The `url` still provides a useful function and will often be the original and quickest way to retrieve the desired content, so it remains.

Backwards Compatibility
`url` usage can remain the same, but applications can now treat `url` as a suggestion or hint as to where to find the content matching the hash. The ability to update the `url` of an announcement (for those announcement types that support Update Announcements) is unaffected, because the DSNP Content URI of an announcement is based on the `hash` field.

Reference Implementation and/or Tests
TBD
Security Considerations
Merely retrieving a file by its content address (CID) does not guarantee that its hash matches. Consumers should ensure that the actual CID matches, e.g. by recalculating it from the retrieved data (though this is often done by lower-level libraries).
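For the `raw`/sha2-256 case, that recalculation is only a few lines; a sketch using just the standard library (the `retrieved_bytes` and `announced_cid` names are illustrative):

```python
import base64
import hashlib

def cid_v1_raw_sha256(content: bytes) -> str:
    # <version 0x01><codec 0x55 raw><multihash: 0x12 sha2-256, length, digest>,
    # serialized as multibase base32 ('b' + lowercase, unpadded RFC 4648).
    digest = hashlib.sha256(content).digest()
    raw = bytes([0x01, 0x55, 0x12, len(digest)]) + digest
    return "b" + base64.b32encode(raw).decode("ascii").lower().rstrip("=")

def verify(retrieved_bytes: bytes, announced_cid: str) -> bool:
    # Recompute the CID from the actual bytes; never trust the transport layer.
    return cid_v1_raw_sha256(retrieved_bytes) == announced_cid

content = b"some retrieved document"
assert verify(content, cid_v1_raw_sha256(content))
assert not verify(content + b"tampered", cid_v1_raw_sha256(content))
```

For chunked (`dag-pb`) CIDs the equivalent check requires re-running the original chunking, which is the ambiguity discussed earlier in the thread.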
Dependencies
None
References
Copyright
Copyright and related rights waived via CC0.