Tagging subscribers to this area: @dotnet/ncl. See info in area-owners.md if you want to be subscribed.
Author: | yanxurui |
---|---|
Assignees: | - |
Labels: | `area-System.Net.Http`, `tenet-performance` |
Milestone: | - |
Example usage:
BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 3
which means I am hammering the server with 1 thread (T), 200 connections (C) in total, using HTTP (H) version 3, for 20 seconds (D).
Looking at the benchmarking code, it seems to me it is doing something slightly different: `-T 1 -C 200` is issuing 200 concurrent requests over a single HttpClient. The actual behavior then depends on the HTTP version:
This explains why HTTP/2 is faster than HTTP/1.1: HTTP/2 does not have to do the TCP+TLS handshake for each request separately but does it only once, shaving a couple of round trips off each request. (HTTP/2 is also a binary protocol, so the request data is slightly more compact.)
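To make the setup concrete, here is a rough sketch of what "200 concurrent requests over a single HttpClient" with a pinned HTTP version can look like. This is not the gist's actual code; the URL and the request count are simply taken from the example command above.

```csharp
// Hypothetical sketch, not the benchmark tool's actual code: issue 200 concurrent
// GET requests over one HttpClient, pinning the HTTP version per request.
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

using var client = new HttpClient();

var tasks = Enumerable.Range(0, 200).Select(async _ =>
{
    var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/")
    {
        Version = HttpVersion.Version30,                        // or Version20 / Version11
        VersionPolicy = HttpVersionPolicy.RequestVersionExact   // fail instead of downgrading
    };
    using var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
});

await Task.WhenAll(tasks);
```

With this shape, whether the 200 requests travel over 200 TCP connections (HTTP/1.1) or are multiplexed over a single connection (HTTP/2 and HTTP/3) is decided by the handler, not by the calling code.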
HTTP/3 is built on top of QUIC, which in turn is built on top of UDP. In our own benchmarks, we have also noticed that HTTP/3 is sometimes less performant than HTTP/2. One possible reason is that OS UDP stacks are less optimized than their TCP counterparts because, historically, TCP carried most high-throughput traffic. QUIC is also a fairly new protocol, and its implementations have not had as much time to be optimized.
We are aware of the performance gap, but we are at a point where there are no more easy gains, and further optimizations and investigations are rather complex.
One point that is good to have in mind is that HTTP/3 was designed specifically to perform well in less-reliable networks with less-than-high bandwidth (think cellular networks). The numbers you are sharing suggest that you have a very low latency and high bandwidth link to the server, which is not the scenario where HTTP/3 was designed to shine.
By the way, in both HTTP/2 and HTTP/3 the peers can impose a maximum number of concurrently open streams (effectively capping the number of concurrent requests on the wire). The default is 100 concurrent streams/requests for both protocol versions, which means that with 200 concurrent requests, each request spends roughly half its time in a waiting queue.
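For completeness, the server-side HTTP/2 stream limit is configurable in Kestrel. The following is a minimal sketch only, assuming the Kestrel server from the issue; the value 200 is an arbitrary example, and as far as I know Kestrel does not expose an equivalent per-connection stream knob for HTTP/3.

```csharp
// Minimal sketch: raising Kestrel's HTTP/2 concurrent-stream limit per connection
// (the default is 100). The value 200 is an arbitrary example.
var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.Http2.MaxStreamsPerConnection = 200;
});

var app = builder.Build();
app.Run();
```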
Thanks @rzikm . It's great to know you are already aware of the performance degradation in HTTP/3.
There is one thing that puzzled me. You said
HTTP/2 and HTTP/3 creates a single connection per host (per HttpClient instance) and multiplexes all requests over it
and
the default should be 100 concurrent streams/requests for both protocol versions
I thought that when I send requests concurrently in a single thread, using a single HttpClient instance in an asynchronous way, there could be multiple connections. By connection, I mean TCP connections for HTTP/2 and UDP connections for HTTP/3. There is a configuration, the HttpClientHandler.MaxConnectionsPerServer property, that leads me to think this way. Or maybe MaxConnectionsPerServer means the maximum number of logical streams, i.e. the 100 limit you mentioned applies to it? But the doc says the default value for this configuration is unlimited.
The 100-stream limit is imposed by the server; it is not a client-side setting. The server tells clients how many streams they may open, and if a client exceeds the limit, the server terminates the connection due to a protocol violation.
I thought when I send requests concurrently in a single thread using a single HttpClient instance in an asynchronous way, there could be multiple connections. By connection, I mean TCP connections in HTTP/2 and UDP connections in HTTP/3. There is a configuration HttpClientHandler.MaxConnectionsPerServer Property
That is true only when HTTP/1.1 is used. We should probably update the docs.
The MaxConnectionsPerServer property applies to HTTP/1.1, where we open a new connection per request. For HTTP/2 and HTTP/3, clients SHOULD NOT (in RFC speak) open more than one connection to a server concurrently. For HTTP/2 there is the SocketsHttpHandler.EnableMultipleHttp2Connections property, which can be used to override this behavior (meant to be used in server-to-server scenarios only, if desired). There is no equivalent HTTP/3 setting yet.
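As a sketch of how those client-side knobs look in code (the values are arbitrary examples, not recommendations):

```csharp
// Sketch of the client-side settings discussed above; values are arbitrary examples.
using System.Net.Http;

var handler = new SocketsHttpHandler
{
    // HTTP/1.1 only: caps the number of parallel connections to one server.
    MaxConnectionsPerServer = 200,

    // HTTP/2 only: allows opening additional connections once the server's
    // concurrent-stream limit is reached (intended for server-to-server scenarios).
    EnableMultipleHttp2Connections = true,
};

using var client = new HttpClient(handler);
```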
I'll close this against the more generic #95351. We do plan a serious push for H/3 and QUIC perf in 9.0.
Description
I did a benchmark of HTTP/1.1, HTTP/2 and HTTP/3. The result shows HTTP/3 performs a lot worse than HTTP/1.1 or HTTP/2.
Configuration
Device:
OS:
Both the server and the client (benchmark tool) are running on .NET 8 on the device above.
Server code is generated by `dotnet new webapp -o KestrelService`. In order to test HTTP/3, I added the code snippet below.
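The author's snippet is not reproduced in this thread; as a rough sketch (not the actual code), enabling HTTP/3 in a Kestrel webapp generally looks something like the following, where port 5001 and the development certificate are assumptions.

```csharp
// Rough sketch (not the author's actual snippet): enabling HTTP/3 in Kestrel.
// HTTP/3 requires TLS; port 5001 and the dev certificate are assumptions here.
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();

builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(5001, listenOptions =>
    {
        // Advertise all three protocol versions on the same endpoint.
        listenOptions.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
        listenOptions.UseHttps();
    });
});

var app = builder.Build();
app.MapRazorPages();
app.Run();
```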
The client is a wrk-like benchmark tool from here: https://gist.github.com/yanxurui/c71c9762d7f79c704d446452facfcdf8
Example usage:
BenchmarkHttp.exe -U https://localhost:5001/ -D 20 -T 1 -C 200 -H 3
which means I am hammering the server with 1 thread (T), 200 connections (C) in total, using HTTP (H) version 3, for 20 seconds (D).
Here are the results:
Regression?
Data
Analysis