quinn-rs / quinn

Async-friendly QUIC implementation in Rust
Apache License 2.0

any benchmark against cloudflare quiche / lsquic and https://github.com/mozilla/neqo? #1320

Open hiqsociety opened 2 years ago

hiqsociety commented 2 years ago

any benchmark against cloudflare quiche / lsquic and https://github.com/mozilla/neqo?

djc commented 2 years ago

We have some benchmarks in the repository but I'm not aware of any in-depth comparisons to other implementations. It's possible the interop runner has some basic benchmarks though.

dzvon commented 2 years ago

You may want to see this: https://interop.seemann.io/

nmittler commented 1 year ago

Just came across this (@dzvon thanks for the link). At first glance, it appears that many implementations (e.g. quic-go, quiche) show significantly higher throughput.

@djc any thoughts?

Ralith commented 1 year ago

What numbers are you looking at specifically? We haven't had the resources to keep that image very up to date, but at a glance it looks like even the old build that's up there is reasonably competitive.

djc commented 1 year ago

I don't have any comments, no. However, right now @aochagavia is doing a bunch of performance-oriented work. Quinn likely just needs more investment on the performance front, which is unlikely to come from @Ralith or me in the current situation. Obviously we're always happy to provide guidance to folks interested in benchmarking and improving performance.

Don't forget, too, that Quinn is mainly maintained by two volunteers whereas some of these other implementations have had much more time spent on them.

nmittler commented 1 year ago

@Ralith

What numbers are you looking at specifically?

Under the measurement section, just picking one out of a hat:

quic-go -> quiche: ~9443 kbps
quic-go -> quinn: ~7943 kbps

So around 16% lower throughput for quinn.

For the same row, all of the following servers show > 9000 kbps: quic-go, ngtcp2, quiche, picoquic, aioquic, nginx, xquic, lsquic, haproxy, s2n-quic (so 10 out of 16, with 2 that failed that particular test).
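
For reference, the ~16% figure follows directly from the two raw numbers quoted above:

$$
\frac{9443 - 7943}{9443} \approx 0.159
$$

i.e. about a 1500 kbps gap on a ~9400 kbps baseline for that particular cell.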

nmittler commented 1 year ago

@djc understood, thanks. Maybe we can at least update the image so we know the current state of things?

Ralith commented 1 year ago

quic-go -> quiche: ~9443 kbps
quic-go -> quinn: ~7943 kbps

Ah, I see. If I understand the test correctly, that benchmark shows quinn performing competitively as a client (receiver), but somewhat slow as a server (sender). If the issue still exists with current versions, https://github.com/quinn-rs/quinn/pull/1543 might be a significant help.

Ralith commented 1 year ago

Actually, PMTUD (#1510) might already fully account for this discrepancy: it's a sender-side optimization that should allow 1452/1200 ≈ 1.21, i.e. roughly 21% more bandwidth at a given packet rate (with per-packet costs likely dominating).
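
For anyone who wants to check whether PMTUD closes the gap on their own setup, here is a minimal sketch of enabling it explicitly. It assumes the `MtuDiscoveryConfig` / `TransportConfig` API introduced around #1510 (method names and defaults may differ between quinn releases), and it is not the configuration used by the interop image.

```rust
// Sketch only: explicitly enable path MTU discovery on a quinn TransportConfig.
// Assumes the API shape introduced around #1510 (quinn 0.10-era names).
use std::sync::Arc;

use quinn::{MtuDiscoveryConfig, TransportConfig};

fn transport_with_pmtud() -> Arc<TransportConfig> {
    let mut transport = TransportConfig::default();
    // Start from the 1200-byte datagram size every QUIC path must support...
    transport.initial_mtu(1200);
    // ...and let discovery probes grow packets toward the real path MTU
    // (e.g. ~1452 bytes of UDP payload on a typical Ethernet path), which is
    // where the ~21% per-packet payload gain mentioned above comes from.
    transport.mtu_discovery_config(Some(MtuDiscoveryConfig::default()));
    Arc::new(transport)
}
```

The resulting `Arc<TransportConfig>` would then be attached to the client/server configuration via their `transport_config` setters before building the endpoint (again assuming the setters available in the same quinn release).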

nmittler commented 1 year ago

Ah nice! @Ralith would it be possible to update the image used by https://interop.seemann.io/ to see the new results?

moranno commented 1 year ago

Ah nice! @Ralith would it be possible to update the image used by https://interop.seemann.io/ to see the new results?

It seems that quinn's current performance is still at the bottom. Do you know whether their latest tests use the latest version of quinn?

edit:

Just found the result: the test is still using an older version of quinn, which is defined here: https://github.com/marten-seemann/quic-interop-runner/blob/master/implementations.json

https://hub.docker.com/r/stammw/quinn-interop/tags

djc commented 1 year ago

A while back I worked on an upgrade to the quinn-interop repo; I've just turned it into a PR:

https://github.com/quinn-rs/quinn-interop/pull/3

If anyone is able to test that this actually works, maybe we can update it upstream?

Ralith commented 1 year ago

Yeah, we should update this. I've never managed to get docker to work right on my machine, unfortunately.