Closed · achingbrain closed this 4 years ago
May I have some more information about your use case and why it's slower, please?
Superseded by https://github.com/ipfs/js-ipfs-unixfs-importer/pull/48
In https://github.com/ipfs/js-ipfs-mfs/pull/73 I'm piping the unixfs-exporter into the unixfs-importer to walk a UnixFS graph and update metadata. I don't want to reimport all of the files (the bit about it being slower), just update the metadata of the root nodes of each UnixFS entry.
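For illustration, a rough sketch of that pipe (not the actual js-ipfs-mfs code; it assumes the async-iterator APIs of the time: `exporter.recursive(cid, ipld)`, entries exposing `path`/`unixfs`/`content()`, and `importer(source, ipld)` accepting `mtime` on entries):

```js
const exporter = require('ipfs-unixfs-exporter')
const importer = require('ipfs-unixfs-importer')

// Walk everything under rootCid and feed it straight back into the importer
// with new metadata attached. Reading entry.content() means every file gets
// re-chunked and re-hashed, which is the slow bit.
async function * withNewMtime (rootCid, ipld, mtime) {
  for await (const entry of exporter.recursive(rootCid, ipld)) {
    const isDir = entry.unixfs && entry.unixfs.type.includes('directory')

    yield {
      path: entry.path,
      // directories carry no content, files yield all of their bytes again
      content: isDir ? undefined : entry.content(),
      mtime
    }
  }
}

async function updateMtime (rootCid, ipld, mtime) {
  let last

  for await (const imported of importer(withNewMtime(rootCid, ipld, mtime), ipld)) {
    last = imported
  }

  return last.cid // the last entry emitted is the new root
}
```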
The import pipeline is something like:
chunk -> create-file/dir-dag -> create-dir-tree
I don't need to chunk or create files (the expensive bit), just do the directory tree bit at the end, including having things be sharded if they are really big, and that's what this PR (or #48) allows.
Sometimes you just want to do some DAG manipulations; you don't necessarily want to chunk and create files.
This PR allows you to pass `DAGNode`s as `.content` for entries being imported, so you can combine the `unixfs-exporter` and `unixfs-importer` as a tree walker and do fun things with metadata at a reasonable speed.

Also removes the `multihashes` dep in favour of the one exported by `multihashing-async`.
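A hypothetical usage sketch of the above (field names such as `entry.node` are assumptions, and since this was superseded by #48 the API that eventually landed may differ): each file entry's `content` is the `DAGNode` the exporter already loaded, so nothing is re-chunked and only the directory tree above it is rebuilt (and sharded where necessary):

```js
const exporter = require('ipfs-unixfs-exporter')
const importer = require('ipfs-unixfs-importer')

// Re-emit every exported entry, handing existing DAGNodes back to the
// importer as content so it only has to run the tree-building step.
async function * asDagNodes (rootCid, ipld, mtime) {
  for await (const entry of exporter.recursive(rootCid, ipld)) {
    const isDir = entry.unixfs && entry.unixfs.type.includes('directory')

    yield {
      path: entry.path,
      // files hand over their already-built DAGNode instead of a byte stream,
      // directories are left for the tree builder to recreate with new metadata
      content: isDir ? undefined : entry.node,
      mtime
    }
  }
}

async function updateMetadata (rootCid, ipld, mtime) {
  let root

  for await (const imported of importer(asDagNodes(rootCid, ipld, mtime), ipld)) {
    root = imported
  }

  return root.cid // CID of the rebuilt root directory
}
```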