MarkusSintonen opened this issue 4 months ago (Open)
Found some related discussions:

Opening a proper issue is warranted to get better visibility for this, so the issue is easier for others to find. In its current state `httpx` is not a good option for highly concurrent applications. Hopefully the issue gets fixed, as otherwise the library is great, so thanks for it!
Oh, interesting. There's some places I can think of where we might want to be digging into here... `requests` compared against `httpx`, with multithreaded requests.

Possibly points of interest here...

- `aiohttp`? Are we sending simple GET requests across more than one TCP packet unnecessarily, either due to socket options or due to our flow in writing the request to the stream, or both? Eg. see https://brooker.co.za/blog/2024/05/09/nagle.html
- We use `h11` for our HTTP construction and parsing. This is the best Python option for careful spec correctness, though it has more CPU overhead than eg. `httptools`.
- We're currently using `anyio` for our async support. We did previously have a native `asyncio` backend; there might be some CPU overhead to be saved here, which in this localhost case might be outweighing network overheads.
- `aiohttp` currently supports DNS caching where `httpx` does not, although not relevant in this particular case.

Also, the tracing support in both `aiohttp` and `httpx` is likely to be extremely valuable to us here.
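The Nagle question above can be checked empirically. Below is a minimal sketch of disabling Nagle's algorithm on a client socket so small request writes go out immediately; this is an illustrative helper, not httpx's actual code:

```python
import socket


def open_nodelay_connection(host: str, port: int) -> socket.socket:
    """Connect and set TCP_NODELAY, disabling Nagle's algorithm so a
    small request write is sent immediately instead of waiting to be
    coalesced with later writes. Illustrative, not httpx's code."""
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return sock
```

Comparing packet captures with and without this option would show whether request bytes are being split across packets unnecessarily.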
Thank you for the good points!
A comparison of performance against a remote server would be more representative than performance against localhost.
My original benchmark hit AWS S3. There I got very similar results, where `httpx` had a huge variance in request timings with concurrent requests. This investigation was due to us observing some strange request durations when servers were under heavy load in production. For now we have switched to `aiohttp`, and it seems to have fixed the issue.
> My original benchmark hit AWS S3. There I got very similar results [...]

Okay, thanks. Was that also testing small `GET` requests / a similar approach to above?
Yes, pretty much: a GET of a file a couple of KB in size. In the real system the sizes of course vary a lot.
> We're currently using anyio for our async support. We did previously have a native asyncio backend, there might be some CPU overhead to be saved here, which in this localhost case might be outweighing network overheads.
@tomchristie you were right, this is the issue ^!

When I just do a simple patch into `httpcore` to replace `anyio.Lock` with `asyncio.Lock`, the performance improves greatly. Why does `httpcore` use AnyIO there instead of asyncio? It seems AnyIO may have some issues.
With `asyncio`: *(benchmark chart)*

With `anyio`: *(benchmark chart)*
There is another hot spot in `AsyncHTTP11Connection.has_expired`, which is called heavily, eg. from `AsyncConnectionPool`. It checks the connection status via the `is_readable` logic, which seems to be a particularly heavy check.

The logic in the connection pool is quite heavy, as it rechecks all of the connections every time requests are assigned to connections. It might be possible to skip the `is_readable` checks on the pool side: just take a connection from the pool, and take another if the picked connection turns out not to be healthy, instead of checking them all every time. What do you think?
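The "check only the connection you picked" idea could look roughly like the sketch below. All names here are hypothetical stand-ins, not httpcore's actual pool code; the point is that unhealthy connections are discarded lazily on acquire rather than polling every pooled socket per request:

```python
import select
import socket
from typing import List, Optional


class PooledConnection:
    """Hypothetical stand-in for a pooled keep-alive HTTP connection."""

    def __init__(self, sock: socket.socket):
        self.sock = sock

    def is_readable(self) -> bool:
        # An *idle* keep-alive socket should have nothing to read; if it
        # polls readable, the peer has sent data or closed it (EOF).
        readable, _, _ = select.select([self.sock], [], [], 0)
        return bool(readable)


def acquire(pool: List[PooledConnection]) -> Optional[PooledConnection]:
    """Pop connections until a healthy one is found, instead of
    polling every pooled connection on each request."""
    while pool:
        conn = pool.pop()
        if not conn.is_readable():
            return conn  # idle and healthy
        conn.sock.close()  # peer closed it; discard and try the next
    return None
```

This keeps the per-request cost proportional to the number of dead connections actually encountered, not the pool size.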
Probably it would be a good idea to add some performance tests to the httpx/httpcore CI.

I can probably help with a PR if you give me pointers on how to proceed :) I could eg. replace the synchronization primitives to use native asyncio.
> Why does httpcore use AnyIO there instead of asyncio?
See https://github.com/encode/httpcore/issues/344, https://github.com/encode/httpx/discussions/1511, and https://github.com/encode/httpcore/pull/345 for where/why we switched over to anyio.
> I can probably help with a PR if you give me pointers about how to proceed
A good first pass at this would be to add an `asyncio.py` backend, without switching the default over. You might want to work from the last version that had an `asyncio`-native backend, although I think the backend API has probably changed slightly.

Docs... https://www.encode.io/httpcore/network-backends/

Other context...
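For orientation, a minimal sketch of what an asyncio-native stream and backend pair might look like, shaped after the read/write/aclose interface described in the network-backends docs. The class and method names are illustrative assumptions, not httpcore's actual code:

```python
import asyncio
from typing import Optional


class AsyncIOStream:
    """Illustrative asyncio-native stream built on asyncio's
    StreamReader/StreamWriter, avoiding anyio entirely."""

    def __init__(self, reader: asyncio.StreamReader,
                 writer: asyncio.StreamWriter):
        self._reader = reader
        self._writer = writer

    async def read(self, max_bytes: int,
                   timeout: Optional[float] = None) -> bytes:
        return await asyncio.wait_for(self._reader.read(max_bytes), timeout)

    async def write(self, buffer: bytes,
                    timeout: Optional[float] = None) -> None:
        self._writer.write(buffer)
        await asyncio.wait_for(self._writer.drain(), timeout)

    async def aclose(self) -> None:
        self._writer.close()
        await self._writer.wait_closed()


class AsyncIOBackend:
    """Illustrative backend factory: opens TCP connections natively."""

    async def connect_tcp(self, host: str, port: int,
                          timeout: Optional[float] = None) -> AsyncIOStream:
        reader, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout)
        return AsyncIOStream(reader, writer)
```

A real backend would also need TLS upgrade and `get_extra_info` support to match the documented interface.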
Thanks @tomchristie! What about this case I pointed out:

> When I just do a simple patch into httpcore to replace anyio.Lock with asyncio.Lock the performance improves greatly

Switching the network backend won't help there, as the lock is not defined by the network implementation; the lock implementation is a global one. Should we just change the synchronization to use asyncio?
I'm able to push the performance of `httpcore` to be exactly on par with `aiohttp`:

*(benchmark chart)*

Previously (on `httpcore` master) the performance is not great and the latency behaves very randomly:

*(benchmark chart)*

You can see the benchmark here.
Here are the changes. There are 3 things required to get the performance as fast as `aiohttp` (in separate commits):

1. Changing the synchronization primitives (`_synchronization.py`) to use `asyncio` and not `anyio`.
2. Adding back the `asyncio`-based backend which was removed in the past (`AsyncIOStream`).
3. Optimizing `AsyncConnectionPool` to avoid calling the socket poll every time the pool is used, and fixing idle-connection checking to have lower time complexity.

I'm happy to open a PR from these. What do you think @tomchristie?
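On the "lower time complexity" part of item 3: one common way to avoid scanning every connection per sweep is to key idle connections on their expiry time in a min-heap, so each pool operation inspects only the earliest-expiring entry. The sketch below is illustrative of that technique, not the actual PR code (a full version would also need to discard stale heap entries when a connection is reused):

```python
import heapq
import time
from typing import List, Optional, Tuple


class IdleTracker:
    """Track keep-alive expiries in a min-heap so finding expired idle
    connections is O(log n) per expiry instead of an O(n) scan."""

    def __init__(self, keepalive: float):
        self._keepalive = keepalive
        self._heap: List[Tuple[float, int]] = []  # (expiry_time, conn_id)

    def mark_idle(self, conn_id: int, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        heapq.heappush(self._heap, (now + self._keepalive, conn_id))

    def pop_expired(self, now: Optional[float] = None) -> List[int]:
        """Return the ids whose keep-alive window has passed."""
        now = time.monotonic() if now is None else now
        expired: List[int] = []
        while self._heap and self._heap[0][0] <= now:
            expired.append(heapq.heappop(self._heap)[1])
        return expired
```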
@MarkusSintonen - Nice one. Let's work through those as individual PRs.
Is it worth submitting a PR where we add a `scripts/benchmark`?
> Is it worth submitting a PR where we add a scripts/benchmark?
I think it would be beneficial to have the benchmark run in CI so we would see the difference. I have previously contributed to Pydantic, and they use CodSpeed, which outputs benchmark diffs to the PR when the benchmarked behaviour changes. It should be free for open-source projects.
That's an interesting idea. I'd clearly be in agreement with adding a `scripts/benchmark`. I'm uncertain whether we'd want the extra CI runs every time or not. I suggest proceeding with the uncontroversial progression to start with, and then afterwards figuring out if/how to tie it into CI. (Reasonable?)
@tomchristie I have now opened the 2 fix PRs:

Maybe I'll open the network-backend addition after these, as it's the most complex one.
Maybe you can refer to the implementation of aiohttp:

- https://docs.aiohttp.org/en/stable/http_request_lifecycle.html#why-is-aiohttp-client-api-that-way
- https://stackoverflow.com/questions/78516655/httpx-vs-requests-vs-aiohttp
Isn't usage of http.CookieJar a part of the problem?
https://github.com/encode/httpx/blob/db9072f998b53ff66d50778bf5edee8e2cc8ede1/httpx/_models.py#L1020
> Isn't usage of http.CookieJar a part of the problem?
@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general. I ran similar benchmarks on the `httpcore` side with `httpx`. Performance is at similar levels as with `aiohttp` and `urllib3` when using the performance fixes from the PRs:
(Waiting for review from @tomchristie)
Async (httpx vs aiohttp): *(benchmark chart)*

Sync (httpx vs urllib3): *(benchmark chart)*
TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.
> TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.
I'm not aware of it ditching anyio completely. It will still be supported, it's just optional. Trio will also still be supported by httpcore.
> @rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general
These are really cool speed-ups. Can't wait for httpx to overtake aiohttp ;)
Since the benchmark seems to be using plain HTTP, I think the below is also a related issue, where creation of the SSL context in httpx had some overhead compared to aiohttp.
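SSL-context construction is expensive mainly because of certificate loading, so the usual mitigation is to build the context once and reuse it across clients. A sketch of that general technique (not httpx's actual fix):

```python
import ssl
from functools import lru_cache


@lru_cache(maxsize=None)
def shared_ssl_context(verify: bool = True) -> ssl.SSLContext:
    """Build a default client-side SSL context once per configuration,
    so repeated client construction skips the certificate-load cost."""
    ctx = ssl.create_default_context()
    if not verify:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

Repeated calls with the same arguments return the identical cached context object.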
Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project because of this, whereas we'd like to have only one set of APIs.
> Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project because of this, whereas we'd like to have only one set of APIs.
I use aiohttp to encapsulate a chain call method, which I personally feel is pretty good.
```python
url = "https://juejin.cn/"
resp = await AsyncHttpClient().get(url).execute()
# json_data = await AsyncHttpClient().get(url).json()
text_data = await AsyncHttpClient(new_session=True).get(url).text()
byte_data = await AsyncHttpClient().get(url).bytes()
```
Example: https://github.com/HuiDBK/py-tools/blob/master/demo/connections/http_client_demo.py
There seems to be some performance issues in `httpx` (0.27.0), as it has much worse performance than `aiohttp` (3.9.4) with concurrently running requests (in Python 3.12). The following benchmark shows how running 20 requests concurrently is over 10x slower with `httpx` compared to `aiohttp`. The benchmark has very basic `httpx` usage for doing multiple GET requests with limited concurrency. The script outputs a figure showing how the duration of each GET request has a huge variance with `httpx`.

I found the following issue, but it seems it's not related, as the workaround doesn't make a difference here: https://github.com/encode/httpx/issues/838#issuecomment-1291224189
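The benchmark's shape (a fixed number of GETs with limited concurrency, timing each request) can be reconstructed roughly as below. The `fetch` callable stands in for a real httpx/aiohttp GET and is an assumption, not the original script:

```python
import asyncio
import time
from typing import Awaitable, Callable, List


async def run_concurrent(fetch: Callable[[], Awaitable[None]],
                         total: int, concurrency: int) -> List[float]:
    """Run `total` requests with at most `concurrency` in flight,
    recording each request's wall-clock duration."""
    sem = asyncio.Semaphore(concurrency)
    durations: List[float] = []

    async def one() -> None:
        async with sem:
            start = time.perf_counter()
            await fetch()  # e.g. client.get(url) in the real benchmark
            durations.append(time.perf_counter() - start)

    await asyncio.gather(*(one() for _ in range(total)))
    return durations
```

Plotting the returned durations per client library is what exposes the variance described above.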