ipfs-inactive / faq

[ARCHIVED] DEPRECATED, please use https://discuss.ipfs.io! Frequently Asked Questions

What is better? Large containers or large sets of files? #250

Closed ghost closed 7 years ago

ghost commented 7 years ago

Hello everyone! I am wondering whether it is better to share one big dump (e.g. the latest English Wikipedia dump as a single file) or the many small files that constitute the dump.

Argument for big dumps: performance, since there is only one hash in the network.

Argument for small files: deduplication. gnome-4.5.tar.gz may already be in the network; if a Linux ISO containing that file were added as one big blob, the embedded tarball would not be deduplicated against the existing copy. With small files, only new additions need new storage space.

This is similar to the dichotomy between Debian's approach of sharing packages and libraries where possible (reducing both disk space and RAM requirements) and Docker images, which bundle their own copies of everything.

What do you think?
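The deduplication trade-off above can be illustrated with a toy content-addressed store. Everything here (the 4-byte chunk size, the `Store` class, the sample bytes) is illustrative, not IPFS's actual implementation; note that with fixed-size chunking, an embedded file only deduplicates when it happens to land chunk-aligned inside the larger blob.

```python
import hashlib

CHUNK = 4  # toy chunk size; go-ipfs defaults to 256 KiB


def chunks(data: bytes):
    """Split data into fixed-size chunks."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]


class Store:
    """Toy content-addressed block store: identical chunks are stored once."""

    def __init__(self):
        self.blocks = {}

    def add(self, data: bytes):
        hashes = []
        for c in chunks(data):
            h = hashlib.sha256(c).hexdigest()
            self.blocks[h] = c  # re-adding a known chunk is a no-op
            hashes.append(h)
        return hashes


store = Store()
tarball = b"GNOMEGNOMEGN"          # stands in for gnome-4.5.tar.gz (12 bytes)
tarball_hashes = store.add(tarball)
before = len(store.blocks)          # 3 unique chunks

# An "ISO" that embeds the tarball at a chunk-aligned offset:
iso = b"BOOT" + tarball + b"EXTR"
iso_hashes = store.add(iso)
after = len(store.blocks)           # only BOOT and EXTR are new: 5 chunks
```

Here the tarball's three chunks are reused by the ISO, so only two new chunks need storage. If the tarball sat at an unaligned offset, every chunk boundary would shift and nothing would deduplicate, which is why content-defined chunkers (e.g. Rabin fingerprinting) exist.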

hsanjuan commented 7 years ago

one hash in the network.

IPFS will chunk big files, so you will have many hashes anyway.
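To put a rough number on this: go-ipfs's default chunker splits files into fixed 256 KiB blocks, so a Wikipedia-sized dump becomes tens of thousands of leaf blocks (plus intermediate DAG nodes linking them). The dump size below is an assumption for illustration:

```python
import math

CHUNK = 256 * 1024            # go-ipfs default chunk size: 256 KiB
dump_size = 20 * 1024**3      # assumed 20 GiB dump, for illustration

leaf_blocks = math.ceil(dump_size / CHUNK)
print(leaf_blocks)            # 81920 leaf blocks, each with its own hash
```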

flyingzumwalt commented 7 years ago

This issue was moved to https://discuss.ipfs.io/t/what-is-better-large-containers-or-large-sets-of-files/263