kazu-yamamoto / http2

HTTP/2.0 library including HPACK
BSD 3-Clause "New" or "Revised" License

Improve streaming performance #50

Closed epoberezkin closed 1 year ago

epoberezkin commented 1 year ago

Streaming bandwidth is hugely impacted by latency in both directions: it seems the library waits for confirmation from the remote peer before sending the next frame. So even with low latency it could be much faster, and with high latency (~100 ms) it becomes much, much slower.

For example, when I upload a 1 GB file split across 6 different servers (3 with a PING latency of ~100 ms and 3 with a latency of ~10 ms), the first half uploads in roughly 1 min and the second half takes ~10 min, even though the available network bandwidth is the same.

The numbers are close but don't fit exactly. If it were doing exactly what I described, it would take 100 ms to upload each 16 KB frame, so 1 GB / 16 KB * 100 ms / 6 = ~1000 s, about 17 min. So maybe it just sends one frame pre-emptively? Or maybe the average latency on an established TCP connection is lower than PING latency? In which case the numbers probably do fit.
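The back-of-envelope estimate above can be written out explicitly (all figures are the approximate ones from the report, not measurements):

```python
# Estimated upload time if the sender waits one full round trip per frame
# ("stop-and-wait"), using the rough numbers from the report above.
total_bytes = 10**9          # ~1 GB file
frame_bytes = 16 * 1024      # 16 KB per DATA frame
rtt_s = 0.100                # ~100 ms round trip
servers = 6                  # upload spread across 6 servers in parallel

frames = total_bytes / frame_bytes
stop_and_wait_s = frames * rtt_s / servers
print(round(stop_and_wait_s))  # ~1017 s, i.e. roughly 17 minutes
```

The observed ~10 min is in the same ballpark, which is what suggests a per-frame round-trip wait somewhere in the pipeline.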

If this guess is correct, the solution could be to send chunks pre-emptively, without waiting for confirmation from the remote peer. Either allow up to a certain number of unconfirmed chunks in flight (though it's difficult to decide what that number should be, as the latency/bandwidth ratio is unknown), or, better, pause sending when any sent chunk has not been confirmed within, say, 2 or 4 s (a plausible round-trip latency), or some other approach, tracking send/response times and marking chunks as confirmed in some sort of map. We considered a similar solution to improve the streaming bandwidth of SMP server messages, but it is not important there right now.
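The first variant (cap the number of unconfirmed chunks in flight) is essentially a sliding window. A minimal sketch, not the http2 library's API; `send` and `on_ack` are hypothetical callbacks standing in for frame transmission and peer confirmation:

```python
from collections import deque

def send_windowed(chunks, window, send, on_ack):
    """Keep up to `window` chunks outstanding.

    `send(chunk)` transmits one chunk; `on_ack(chunk)` blocks until
    that chunk is confirmed by the peer. Illustrative only.
    """
    in_flight = deque()
    for chunk in chunks:
        if len(in_flight) >= window:
            on_ack(in_flight.popleft())  # wait for the oldest confirmation
        send(chunk)
        in_flight.append(chunk)
    while in_flight:
        on_ack(in_flight.popleft())      # drain remaining confirmations

sent, acked = [], []
send_windowed([1, 2, 3, 4, 5], window=2, send=sent.append, on_ack=acked.append)
print(sent)   # [1, 2, 3, 4, 5]
print(acked)  # [1, 2, 3, 4, 5]
```

With `window=1` this degrades to the stop-and-wait behaviour suspected above; a larger window lets the pipe stay full for a full round trip.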

I think streaming bandwidth is the most important thing to improve in the library.

kazu-yamamoto commented 1 year ago

This is probably due to the poor implementation of flow control. To verify this, I will add an option to disable the flow control.

epoberezkin commented 1 year ago

@kazu-yamamoto Thank you!

I tested 4.1 - https://github.com/simplex-chat/simplexmq/pull/680

Great news - nothing is broken :)

Performance of upload to localhost didn't seem to change much; there is variance between test runs, and if there is any difference it is within that variance (I didn't do a proper benchmark over many runs though)...

Uploading to the remote servers (new client, old servers) seems the same, or maybe a bit slower, but that is probably variance.

Downloading from the remote servers (new client, old servers) is 1.5-3x faster; there is some large variance between runs, not sure yet what causes it, maybe just the Internet...

I am yet to test with a new remote server to see the difference.

Possibly I just don't understand how to use the option to disable flow control (or is it not an option)?

kazu-yamamoto commented 1 year ago

The HTTP/2 flow control is disabled. There is no option to enable it, since I don't understand the relation between TCP flow control and HTTP/2 flow control. The current policy is to just rely on TCP 100%.
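For context on why an application-level window can throttle a high-latency link at all: to keep the pipe full, the window must cover the bandwidth-delay product (BDP), and HTTP/2's default initial window of 65,535 bytes (RFC 9113) falls far short on such paths. The bandwidth figure below is a hypothetical example, not a measurement from this thread:

```python
# Bandwidth-delay product vs the HTTP/2 default initial window.
bandwidth_Bps = 100 * 10**6 // 8  # assume a 100 Mbit/s path (hypothetical)
rtt_s = 0.100                     # ~100 ms round trip, as in the report
bdp_bytes = int(bandwidth_Bps * rtt_s)
default_window = 65_535           # HTTP/2 default initial window (RFC 9113)
print(bdp_bytes)                  # 1250000 bytes must be in flight
print(bdp_bytes // default_window)  # the default window is ~19x too small
```

Disabling HTTP/2 flow control sidesteps this entirely, letting the kernel's TCP window (which auto-tunes toward the BDP) govern throughput.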

kazu-yamamoto commented 1 year ago

Great news - nothing is broken :)

Does this mean that #51 is fixed?

epoberezkin commented 1 year ago

No, it doesn’t mean that - I still need to test that with remote servers :) Will report tomorrow.

epoberezkin commented 1 year ago

It seems that #51 no longer happens, closed! Thank you!

Performance is much better with the new code on both sides: ~5x faster upload, ~3x faster download.

It seems it can still be improved a lot though (particularly download), as it still depends very much on latency.

kazu-yamamoto commented 1 year ago

Closing.