**James-E-A** opened 2 years ago
Note that this need not be limited to single Unix files; even ye olde BitTorrent clients support webseeds for content-addressed directories:

> The file/folder structure needs to be identical to the torrent.
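For reference, here is a minimal sketch of how a BEP 19 client maps a webseed base URL onto the files inside such a directory (the URL and paths are invented for illustration):

```typescript
// Sketch of BEP 19-style webseed URL derivation: for multi-file torrents,
// the torrent's top-level name and the file's path components are appended
// to the seed base URL, so the server-side layout must mirror the torrent.
function webseedUrl(seed: string, torrentName: string, filePath: string[]): string {
  const base = seed.endsWith("/") ? seed : seed + "/";
  return base + [torrentName, ...filePath].map(encodeURIComponent).join("/");
}

// e.g. webseedUrl("https://example.org/pub/", "dataset", ["imgs", "a.png"])
//   -> "https://example.org/pub/dataset/imgs/a.png"
```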
(3) here's another suggestion in the same vein:

```html
<a href="https://surface.syr.edu/cgi/viewcontent.cgi?article=1846&context=honors_capstone"
   integrity="sha256-nLGeuXDdJZgaa6wpAOSy/81NZ4ngkNjub0ii9tey25c=
              ipfs-mAVUSIJyxnrlw3SWYGmusKQDksv/NTWeJ4JDY7m9IovbXstuX">
  fantastic explainer by Ashley Valentijn of Syracuse University
</a>
```

re `ipfs://bafkreie4wgpls4g5ewmbu25mfeaojmx7zvgwpcpasdmo432iul3npmw3s4`
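For clarity, a rough sketch of how a client could check the standard `sha256-` token of such an `integrity` attribute using the Web Crypto API; the `ipfs-` token is of course hypothetical, and a client would decode it as a multibase multihash analogously:

```typescript
// Sketch: verify fetched bytes against a standard SRI sha256- token.
// The hypothetical ipfs- token from the snippet above would be handled
// similarly, after multibase/multihash decoding.
async function verifySha256Token(bytes: ArrayBuffer, token: string): Promise<boolean> {
  if (!token.startsWith("sha256-")) throw new Error("not a sha256 token");
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const b64 = btoa(String.fromCharCode(...new Uint8Array(digest)));
  return b64 === token.slice("sha256-".length);
}
```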
Thank you for brainstorming! Some thoughts / questions (if I missed the mark, please elaborate more):

`ipfs dag export`

(3) is, iiuc, not feasible for various reasons
`/ipfs/{cid}` paths or DNSLink, and ipfs-companion redirects them to the local node, which does the verification natively. PS: `ipfs://ipfs/{cid}` is invalid (https://github.com/ipfs/go-ipfs/pull/7930); use `ipfs://{cid}` instead.
> `/ipfs/QmV…2yJ?filename=…&x-ipfs-webseed=…` is tricky, because… data could be imported to IPFS with custom parameters (different chunks, different hash function, different DAG type); CID of data imported with default parameters would not match (including parameter space in URL is not an option, as one could create DAG by hand, outside of parameters in go-ipfs).
Torrents also have different chunking parameters that can lead to different URNs for the same exact file (or even equivalent fs-trees), and yet they solved web-seeding / leeching from "dumb" HTTP servers without provisioning for said parameters in the URI scheme (and without any extra ado such as CAR files): they built the feature while imposing nothing more than a single parameter, `as=${url}`, on the existing interface.
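To make that concrete, a minimal sketch of the BitTorrent approach (the info-hash and mirror URL are made up for illustration): the magnet URI names only the content plus the one extra `as` parameter, and no chunking parameters appear anywhere in it.

```typescript
// Sketch: BitTorrent's web-seed addressing. The magnet URI carries the
// content identifier (info-hash) and a single "as" (acceptable source)
// parameter pointing at a plain HTTP mirror; chunking parameters are
// resolved later from torrent metadata, never from the URI.
function magnetWithWebseed(infoHash: string, mirrorUrl: string): string {
  const params = new URLSearchParams({
    xt: `urn:btih:${infoHash}`, // the content identifier
    as: mirrorUrl,              // the one extra parameter discussed above
  });
  return `magnet:?${params.toString()}`;
}
```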
While (as with BitTorrent) active/"real"/software peers will be required to bootstrap the structure of the file (i.e. fetching the non-leaf data), once the software has constructed a mapping of the chunkspace onto an fs-tree, it is then free to pull at least the leaf nodes from plain HTTP servers. (Admittedly, a ~0.5 MiB PDF wasn't a great prototypical example of this, since most people will be able to snap the whole file up in an instant, in the same order of time it takes to fetch the metadata.)
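Here is a rough sketch of that leaf-pulling step, under stated assumptions: the non-leaf DAG nodes have already been fetched from real peers, and each leaf's byte range and expected digest are therefore known (`LeafRef` is a hypothetical shape for this example, not an existing API).

```typescript
// Sketch: pull one verified leaf block from a dumb HTTP server that
// supports Range requests, using knowledge bootstrapped from real peers.
interface LeafRef {
  offset: number;    // byte offset of this leaf's data within the file
  length: number;    // length of the leaf's data
  sha256b64: string; // expected SHA-256 digest, base64-encoded
}

async function fetchLeaf(url: string, leaf: LeafRef): Promise<Uint8Array> {
  const res = await fetch(url, {
    headers: { Range: `bytes=${leaf.offset}-${leaf.offset + leaf.length - 1}` },
  });
  if (res.status !== 206) throw new Error("server ignored Range request");
  const bytes = new Uint8Array(await res.arrayBuffer());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const b64 = btoa(String.fromCharCode(...new Uint8Array(digest)));
  if (b64 !== leaf.sha256b64) throw new Error("leaf failed verification");
  return bytes;
}
```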
> https://cf-ipfs.com/ipfs/QmVSSCvbYX8XHVcf2kqrpGGmH5PdCbgAP11CCJXrrJQ2yJ?filename=ssl-mitm.pdf will work over HTTP and, if ipfs-companion is present, get loaded from local node
Per the OP, that (currently) breaks whenever no IPFS client happens to be seeding (pinning) the file at the moment, even if the file's canonical location is still OK. A client supporting `x-ipfs-webseed` parameters would at least have the possibility of not being left in the dark should the seeding peer drop out mid-transfer.
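For illustration, a rough sketch of that fallback behaviour, assuming the proposed (not yet existing) `x-ipfs-webseed` parameter; the function name and timeout value are invented for the example, and real code would verify the fallback bytes against the CID before using them:

```typescript
// Sketch: try the IPFS gateway/network first; if no seeding peer is
// reachable, fall back to the plain HTTP mirror named in the URL itself.
async function fetchWithWebseed(ipfsUrl: string): Promise<Response> {
  const url = new URL(ipfsUrl);
  const webseed = url.searchParams.get("x-ipfs-webseed");
  try {
    const res = await fetch(ipfsUrl, { signal: AbortSignal.timeout(30_000) });
    if (res.ok) return res;
    throw new Error(`gateway returned ${res.status}`);
  } catch (err) {
    if (!webseed) throw err;
    // No seeding peer reachable: fall back to the canonical HTTP location.
    // (Verification against the CID is omitted here but would be required.)
    return fetch(webseed);
  }
}
```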
> [`<a href="…" integrity="ipfs-mAV…tuX">` is] not feasible [because] … There is no web extension API that would allow us to do this in a performant way (requires script injection on every page)
I proposed that in light of the existing linkification feature: given that linkification is seemingly deemed acceptable, I deduce that the particular Rubicon of touching pages isn't a blocker; checking links for a specific attribute should be far more performant than scrubbing all text nodes for excerpts that look like IPFS URIs anyway.
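A rough sketch of what I mean, to illustrate the performance point (the `ipfs-` integrity token is the hypothetical scheme from my earlier comment):

```typescript
// Sketch: an attribute-scoped CSS query touches only anchors that opted in,
// whereas linkification has to walk every text node in the page.
function findIpfsVerifiableLinks(doc: Document): HTMLAnchorElement[] {
  // One query; the browser does the filtering natively.
  return [...doc.querySelectorAll<HTMLAnchorElement>("a[integrity]")].filter((a) =>
    a.getAttribute("integrity")!.split(/\s+/).some((t) => t.startsWith("ipfs-"))
  );
}

// Contrast with linkification, which needs something like:
//   const walker = doc.createTreeWalker(doc.body, NodeFilter.SHOW_TEXT);
//   while (walker.nextNode()) { /* regex every text node for ipfs://… */ }
```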
Perhaps add a `docs/webseed.md` to this repo for now; we can continue from there.
**Is your feature request related to a problem? Please describe.**
The "problem" this solves is that of wanting to hold one or most web hosts as a back up in the case of no-one seeding the file in IPFS (or all currently-seeding peers being inaccessible).
This is a use-case that, for instance, most BitTorrent clients have supported since forever, via the `as` parameter or the `httpseeds` field; this functionality is colloquially known as "web seeds".

**Describe the solution you'd like**
Two possible solutions; either/or would be great, but the first one seems way cleaner:

… (like the `as` parameter in Magnet URIs)

**Describe alternatives you've considered**
- `x-ipfs-path` headers to static files (see the sketch below)
- an `/ipfs` slug in the path
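As a sketch of the first alternative (the `cidFor` helper is hypothetical; only the `x-ipfs-path` response-header convention itself is real, and ipfs-companion already detects it):

```typescript
// Sketch: a static-file server advertising the IPFS copy of each file via
// the x-ipfs-path response header, so IPFS-aware clients can switch over.
import { createServer } from "node:http";

declare function cidFor(path: string): string | undefined; // hypothetical lookup

createServer((req, res) => {
  const cid = cidFor(req.url ?? "/");
  if (cid) res.setHeader("x-ipfs-path", `/ipfs/${cid}`);
  // ... serve the static file body as usual ...
  res.end();
}).listen(8080);
```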