ipfs-inactive / faq

[ARCHIVED] Frequently Asked Questions. DEPRECATED, please use https://discuss.ipfs.io!

What is typical Time To First Byte latency? #46

Closed SpiritQuaddicted closed 7 years ago

SpiritQuaddicted commented 9 years ago

When hosting big files, latency is not an issue, but a use case that is often advertised is hosting one's personal page or lots of small files. How does IPFS handle that, and how "fun" is it?

Let's say I request 10 files that make up a webpage: some HTML, some CSS, some images. As I understand it, each hash is first looked up in the DHT, the DHT returns block hashes, we then ask the DHT for peers that have those blocks, then we ask those peers to send them to us, and so on. This piles up. How much worse will it typically be compared to the same webpage hosted on the normal web?
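
For a rough sense of how those round trips pile up, here is a back-of-the-envelope sketch in Go. The latency figures and the steps it models are invented placeholders for illustration, not measurements of IPFS or a description of what go-ipfs actually does.

```go
// Back-of-the-envelope sketch (not go-ipfs code): models the sequential
// round trips described above for a page of small assets fetched via a DHT.
// All latency figures below are made-up placeholders, purely illustrative.
package main

import (
	"fmt"
	"time"
)

const (
	dhtLookup   = 300 * time.Millisecond // hypothetical: find providers for one hash
	peerConnect = 100 * time.Millisecond // hypothetical: dial a provider
	blockFetch  = 50 * time.Millisecond  // hypothetical: transfer one small block
	httpRTT     = 50 * time.Millisecond  // hypothetical: one round trip to an origin server
)

func main() {
	const assets = 10 // HTML + CSS + images, as in the question

	// Naive worst case: every asset resolved through the DHT, one after another.
	naive := time.Duration(assets) * (dhtLookup + peerConnect + blockFetch)

	// If lookups and connections can be reused (one provider serves all blocks),
	// only the block transfers scale with the number of assets.
	reused := dhtLookup + peerConnect + time.Duration(assets)*blockFetch

	// Rough comparison point: the same assets fetched from a single HTTP origin.
	http := time.Duration(assets) * httpRTT

	fmt.Println("sequential DHT lookups :", naive)
	fmt.Println("reused peer connection :", reused)
	fmt.Println("single-origin HTTP     :", http)
}
```

The only point of the toy model is that per-asset DHT lookups dominate unless lookups and connections are amortized.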

A related question: how much overhead, in terms of bandwidth, does this add?

jbenet commented 9 years ago

Yeah, this is an important but also very long-to-answer question. I don't have time right now to give you a full treatment (it might take multiple papers' worth), but I can point you in the right direction to understand how we're approaching this. I'm also just going to stream thoughts, so forgive the unorganized mess. It's the sort of simple question that unravels a massive iceberg.

TL;DR: You have to sink yourself deep into the merkle-dag model to understand why IPFS is actually much faster than the traditional web, even though initial seeks may be slower. And regardless, the performance today is nothing compared to what it can be once it leverages all of these properties correctly. We're just starting somewhere and improving from there. Think of IPFS as the web over git and bittorrent.
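
To make the merkle-dag point concrete, here is a minimal sketch of the idea in Go. It is not the real IPFS object format or API; the `Node` type and `hash` helper are invented for illustration. The property that matters is that a node's address is the hash of its contents, so identical data deduplicates and any peer holding a block can serve it and be verified.

```go
// Minimal sketch of the merkle-dag idea (not the real IPFS object format):
// nodes are addressed by the hash of their contents, and parents link to
// children by hash, so identical data dedupes and any peer can serve it.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

type Node struct {
	Data  []byte
	Links []string // child hashes
}

// hash derives the node's address from its content, so the same bytes
// always yield the same address no matter which peer stores them.
func hash(n Node) string {
	h := sha256.New()
	h.Write(n.Data)
	for _, l := range n.Links {
		h.Write([]byte(l))
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	css := Node{Data: []byte("body { color: black }")}
	html := Node{Data: []byte("<html>...</html>")}

	// The "directory" for the page links to its files by hash.
	page := Node{Links: []string{hash(html), hash(css)}}

	fmt.Println("css :", hash(css))
	fmt.Println("html:", hash(html))
	fmt.Println("page:", hash(page))
	// Anyone holding the css block can serve it for any page that links to
	// the same hash; verification is just re-hashing the received bytes.
}
```

That verification-by-rehashing is what lets blocks come from caches, nearby peers, or mirrors instead of a single origin.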


There are many things at play here:

Ok, this rant has gone on long enough.

To get back to your question: can lots of little files over a DHT ever be faster than directly streaming them from one source? Of course! But you have to understand why, and how, we'll get there. You have to leverage immutability, cryptographic protocols, true transport agnosticism, and routing protocols to let the content be stored and moved in "the best" way possible, which varies dramatically depending on what the use cases, the devices, and the networks look like. We don't start at the max today, of course; we start with something much slower (a regular Kademlia DHT), but we liberate the data model from the network protocols and allow improvement to happen. We introduce a layer in between (the merkle dag and the IPFS data model) to create an interface that applications can rely on, independent of the network protocols, and then we let computers assemble themselves into whatever networks they want and have everything work exactly the same. The protocols are thus freed to improve to match their use cases.
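
Here is a small Go sketch of that layering, with assumed names (this is not libp2p's or go-ipfs's actual API): the application asks for content by hash through one stable interface, and the routing strategy underneath can be swapped (a Kademlia DHT today, LAN discovery or something faster tomorrow) without the caller changing.

```go
// Sketch of the layering described above (assumed names, not a real API):
// the data model only needs an answer to "who can give me this hash?",
// and different network strategies can provide it interchangeably.
package main

import (
	"errors"
	"fmt"
)

// Router is the only thing the content layer asks of the network layer.
type Router interface {
	FindProviders(hash string) ([]string, error)
}

type kademliaDHT struct{}  // placeholder for a global Kademlia DHT
type lanDiscovery struct{} // placeholder for local-network discovery

func (kademliaDHT) FindProviders(hash string) ([]string, error) {
	return []string{"peerA", "peerB"}, nil // pretend DHT walk
}

func (lanDiscovery) FindProviders(hash string) ([]string, error) {
	return nil, errors.New("no local peers have it") // pretend LAN miss
}

// Fetch tries routers in order; the caller never changes when the
// underlying network strategy does.
func Fetch(hash string, routers ...Router) ([]string, error) {
	for _, r := range routers {
		if peers, err := r.FindProviders(hash); err == nil && len(peers) > 0 {
			return peers, nil
		}
	}
	return nil, errors.New("content not found")
}

func main() {
	peers, err := Fetch("QmExampleHash", lanDiscovery{}, kademliaDHT{})
	fmt.Println(peers, err)
}
```

The interface is deliberately tiny: everything network-specific hides behind `FindProviders`, which plays the role of the hourglass waist in this toy model.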

Sound familiar? Yes, it's the IP hourglass story all over again. Over the last decades, we broke the end-to-end principle. But the good news is that IPFS is here to fix it.

flyingzumwalt commented 7 years ago

This issue was moved to https://discuss.ipfs.io/t/what-is-typical-time-to-first-byte-latency/445