encode / httpx

A next generation HTTP client for Python. 🦋
https://www.python-httpx.org/
BSD 3-Clause "New" or "Revised" License

Improve async performance. #3215

Open MarkusSintonen opened 4 months ago

MarkusSintonen commented 4 months ago

There seem to be some performance issues in httpx (0.27.0), as it has much worse performance than aiohttp (3.9.4) with concurrently running requests (in Python 3.12). The following benchmark shows that running 20 requests concurrently is over 10x slower with httpx than with aiohttp. The benchmark uses very basic httpx usage for doing multiple GET requests with limited concurrency. The script outputs a figure showing that the duration of each GET request has a huge variance with httpx.

[Figure 1: per-request latency, httpx vs aiohttp]

# requirements.txt:
# httpx == 0.27.0
# aiohttp == 3.9.4
# matplotlib == 3.9.0
# 
# 1. start server: python bench.py server
# 2. run client test: python bench.py client

import asyncio
import sys
from typing import Any, Coroutine, Iterator
import aiohttp
import time
import httpx
from aiohttp import web
import matplotlib.pyplot as plt

PORT = 1234
URL = f"http://localhost:{PORT}/req"
RESP = "a" * 2000
REQUESTS = 100
CONCURRENCY = 20

def run_web_server():
    async def handle(_request):
        return web.Response(text=RESP)

    app = web.Application()
    app.add_routes([web.get('/req', handle)])
    web.run_app(app, host="localhost", port=PORT)

def duration(start: float) -> int:
    return int((time.monotonic() - start) * 1000)

async def run_requests(axis: plt.Axes):
    async def gather_limited_concurrency(coros: Iterator[Coroutine[Any, Any, Any]]):
        sem = asyncio.Semaphore(CONCURRENCY)
        async def coro_with_sem(coro):
            async with sem:
                return await coro
        return await asyncio.gather(*(coro_with_sem(c) for c in coros))

    async def httpx_get(session: httpx.AsyncClient, timings: list[int]):
        start = time.monotonic()
        res = await session.request("GET", URL)
        assert len(await res.aread()) == len(RESP)
        assert res.status_code == 200, f"status_code={res.status_code}"
        timings.append(duration(start))

    async def aiohttp_get(session: aiohttp.ClientSession, timings: list[int]):
        start = time.monotonic()
        async with session.request("GET", URL) as res:
            assert len(await res.read()) == len(RESP)
            assert res.status == 200, f"status={res.status}"
        timings.append(duration(start))

    async with httpx.AsyncClient() as session:
        # warmup
        await asyncio.gather(*(httpx_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((httpx_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"httpx (tot={duration(start)}ms)")

    async with aiohttp.ClientSession() as session:
        # warmup
        await asyncio.gather(*(aiohttp_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((aiohttp_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"aiohttp (tot={duration(start)}ms)")

def main(mode: str):
    assert mode in {"server", "client"}, f"invalid mode: {mode}"

    if mode == "server":
        run_web_server()
    else:
        fig, ax = plt.subplots()
        asyncio.run(run_requests(ax))
        plt.legend(loc="upper left")
        ax.set_xlabel("# request")
        ax.set_ylabel("[ms]")
        plt.show()

    print("DONE", flush=True)

if __name__ == "__main__":
    assert len(sys.argv) == 2, f"Usage: {sys.argv[0]} server|client"
    main(sys.argv[1])

I found the following issue, but it seems unrelated, as the workaround doesn't make a difference here: https://github.com/encode/httpx/issues/838#issuecomment-1291224189

MarkusSintonen commented 4 months ago

Found some related discussions:

Opening a proper issue seems warranted to give this better visibility and make it easier for others to find. In its current state, httpx is not a good option for highly concurrent applications. Hopefully the issue gets fixed; otherwise the library is great, so thanks for it!

tomchristie commented 3 months ago

Oh, interesting. There are some places I can think of that we might want to dig into here...

Possibly points of interest here...

Also, the tracing support in both aiohttp and httpx is likely to be extremely valuable to us here.

MarkusSintonen commented 3 months ago

Thank you for the good points!

A comparison of performance against a remote server would be more representative than performance against localhost.

My original benchmark hit AWS S3. There I got very similar results, where httpx had a huge variance in request timings under concurrent requests. This investigation started because we observed some strange request durations when servers were under heavy load in production. For now we have switched to aiohttp, and it seems to have fixed the issue.

tomchristie commented 3 months ago

My original benchmark hit AWS S3. There I got very similar results [...]

Okay, thanks. Was that also testing small GET requests / similar approach to above?

MarkusSintonen commented 3 months ago

Okay, thanks. Was that also testing small GET requests / similar approach to above?

Yes, pretty much: a GET of a file a couple of KB in size. In the real system the sizes of course vary a lot.

MarkusSintonen commented 3 months ago

We're currently using anyio for our async support. We did previously have a native asyncio backend; there might be some CPU overhead to be saved here, which in this localhost case might outweigh network overheads.

@tomchristie you were right, this is the issue ^!

When I just do a simple patch in httpcore to replace anyio.Lock with asyncio.Lock, the performance improves greatly. Why does httpcore use AnyIO there instead of asyncio? It seems AnyIO may have some performance issues.

With asyncio: [benchmark figure]

With anyio: [benchmark figure]
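
For context, the change being tested is small: httpcore hides its locks behind an async-context-manager interface, so an asyncio-native stand-in only needs `__aenter__`/`__aexit__`. A sketch (the class name is hypothetical; the real wrapper lives in httpcore's `_synchronization.py`):

```python
import asyncio


class AsyncIOLock:
    """Minimal asyncio-native lock with the async-context-manager
    surface that httpcore's synchronization wrapper exposes (sketch)."""

    def __init__(self) -> None:
        self._lock = asyncio.Lock()

    async def __aenter__(self) -> "AsyncIOLock":
        await self._lock.acquire()
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        self._lock.release()


async def demo() -> int:
    # 100 concurrent workers incrementing a counter under the lock.
    lock = AsyncIOLock()
    counter = 0

    async def worker() -> None:
        nonlocal counter
        async with lock:
            counter += 1

    await asyncio.gather(*(worker() for _ in range(100)))
    return counter
```

Since all callers go through `async with`, swapping the backing primitive from anyio to asyncio does not change the call sites.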

MarkusSintonen commented 3 months ago

There is another hot spot in AsyncHTTP11Connection.has_expired, which is called heavily, e.g. from AsyncConnectionPool. This checks the connection status via the is_readable logic, which seems to be a particularly heavy check.

The logic in the connection pool is quite heavy, as it rechecks all of the connections every time requests are assigned to connections. It might be possible to skip the is_readable checks on the pool side: just take a connection from the pool, and take another if the picked one turns out not to be healthy, instead of checking them all every time. What do you think?
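
The proposed acquisition strategy can be sketched as follows, assuming a simple idle-connection deque (`is_healthy` stands in for the is_readable check; all names here are hypothetical, not httpcore's actual API):

```python
from collections import deque
from typing import Callable, Optional


def acquire_connection(
    idle: deque, is_healthy: Callable[[object], bool]
) -> Optional[object]:
    """Pop idle connections one at a time and health-check only the
    candidate being handed out, instead of polling every pooled
    connection on each request. Stale connections are discarded."""
    while idle:
        conn = idle.popleft()
        if is_healthy(conn):
            return conn  # reuse this connection
        # unhealthy: drop it and try the next idle connection
    return None  # pool exhausted: caller opens a fresh connection
```

This turns the per-request cost from one socket poll per pooled connection into, amortized, a single check per acquisition.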

It would probably be a good idea to add some performance tests to the httpx/httpcore CI.

MarkusSintonen commented 3 months ago

I can probably help with a PR if you give me pointers about how to proceed :)

I could, e.g., replace the synchronization primitives with native asyncio ones.

tomchristie commented 3 months ago

Why does httpcore use AnyIO there instead of asyncio?

See https://github.com/encode/httpcore/issues/344, https://github.com/encode/httpx/discussions/1511, and https://github.com/encode/httpcore/pull/345 for where/why we switched over to anyio.

I can probably help with a PR if you give me pointers about how to proceed

A good first pass at this would be to add an asyncio.py backend, without switching the default over.

You might want to work from the last version that had an asyncio native backend, although I think the backend API has probably changed slightly.

Docs... https://www.encode.io/httpcore/network-backends/


Other context...

MarkusSintonen commented 3 months ago

Thanks @tomchristie

What about this case I pointed:

When I just do a simple patch into httpcore to replace anyio.Lock with asyncio.Lock the performance improves greatly

Switching the network backend won't help there, as the lock is not defined by the network implementation; the lock implementation is a global one. Should we just change the synchronization code to use asyncio?

MarkusSintonen commented 3 months ago

I'm able to push the performance of httpcore to be exactly on par with aiohttp: [benchmark figure]

Previously (on httpcore master) the performance is not great and the latency behaves very randomly: [benchmark figure]

You can see the benchmark here.

Here are the changes. There are 3 things required to improve the performance to get it as fast as aiohttp (in separate commits):

  1. Commit 1. Change the synchronization primitives (in _synchronization.py) to use asyncio rather than anyio
  2. Commit 2. Bring back the asyncio-based network backend that was removed in the past (AsyncIOStream)
  3. Commit 3. Optimize AsyncConnectionPool to avoid calling the socket poll every time the pool is used; also fix idle-connection checking to lower its time complexity

I'm happy to open a PR from these. What do you think @tomchristie?

tomchristie commented 3 months ago

@MarkusSintonen - Nice one. Let's work through those as individual PRs.

Is it worth submitting a PR where we add a scripts/benchmark?

MarkusSintonen commented 3 months ago

Is it worth submitting a PR where we add a scripts/benchmark?

I think it would be beneficial to have the benchmark run in CI so we would see the difference. I have previously contributed to Pydantic, which uses CodSpeed. It outputs benchmark diffs to the PR when the benchmarked behaviour changes. It should be free for open-source projects.
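
Independent of CodSpeed, a scripts/benchmark entry point could be as small as a semaphore-bounded runner that reports per-call latency percentiles. A sketch (helper names are hypothetical, not an existing httpx/httpcore API):

```python
import asyncio
import time


def percentile(samples: list, q: float) -> float:
    """Nearest-rank percentile; enough for a benchmark summary line."""
    xs = sorted(samples)
    idx = min(len(xs) - 1, max(0, round(q * (len(xs) - 1))))
    return xs[idx]


async def bench(coro_factory, requests: int = 100, concurrency: int = 20):
    """Run `requests` coroutines with bounded concurrency and return
    (p50, p95) per-call latency in milliseconds."""
    sem = asyncio.Semaphore(concurrency)
    timings: list = []

    async def one() -> None:
        async with sem:
            start = time.monotonic()
            await coro_factory()
            timings.append((time.monotonic() - start) * 1000)

    await asyncio.gather(*(one() for _ in range(requests)))
    return percentile(timings, 0.5), percentile(timings, 0.95)
```

Pointing `coro_factory` at an httpx or aiohttp GET against a local server would reproduce the comparison from the original benchmark script above.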

tomchristie commented 3 months ago

That's an interesting idea. I'd clearly be in agreement with adding a scripts/benchmark. I'm uncertain whether we'd want the extra CI runs every time or not. I suggest proceeding with the uncontroversial part to start with, and then afterwards figuring out if/how to tie it into CI. (Reasonable?)

MarkusSintonen commented 3 months ago

@tomchristie I have now opened the 2 fix PRs:

Maybe I'll open the network-backend addition after these, as it's the most complex one.

HuiDBK commented 3 months ago

Maybe you can refer to the implementation of aiohttp:

https://docs.aiohttp.org/en/stable/http_request_lifecycle.html#why-is-aiohttp-client-api-that-way
https://stackoverflow.com/questions/78516655/httpx-vs-requests-vs-aiohttp

rafalkrupinski commented 2 months ago

Isn't usage of http.CookieJar a part of the problem?

https://github.com/encode/httpx/blob/db9072f998b53ff66d50778bf5edee8e2cc8ede1/httpx/_models.py#L1020

https://github.com/python/cpython/blob/68e279b37aae3019979a05ca55f462b11aac14be/Lib/http/cookiejar.py#L1266

MarkusSintonen commented 2 months ago

Isn't usage of http.CookieJar a part of the problem?

@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general. I ran similar benchmarks from the httpcore side with httpx. Performance is at a similar level to aiohttp and urllib3 when using the performance fixes from the PRs:

(Waiting for review from @tomchristie)

Async (httpx vs aiohttp): [benchmark figure]

Sync (httpx vs urllib3): [benchmark figure]

rafalkrupinski commented 2 months ago

TBH I'm surprised by httpx ditching anyio. Sure, anyio comes with performance overhead, but this breaks compatibility with Trio.

MarkusSintonen commented 2 months ago

TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.

I'm not aware of it being ditched completely. It will still be supported, it's just optional. Trio will also still be supported by httpcore.
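
One way to keep Trio compatibility while defaulting to native asyncio primitives is runtime detection of the running framework. A sketch (class and function names are hypothetical; the sniffio package, which anyio itself depends on, can do the detection):

```python
import asyncio


def detect_async_library() -> str:
    """Identify the running async framework, falling back to asyncio
    when sniffio is unavailable or detection fails (sketch)."""
    try:
        import sniffio
        return sniffio.current_async_library()
    except Exception:
        return "asyncio"


class Lock:
    """Backend-dispatching lock: asyncio-native by default, trio
    primitives only when actually running under trio."""

    def __init__(self) -> None:
        self._lock = None  # created lazily, inside the running loop

    def _setup(self) -> None:
        if detect_async_library() == "trio":
            import trio  # only importable when running under trio
            self._lock = trio.Lock()
        else:
            self._lock = asyncio.Lock()

    async def __aenter__(self) -> "Lock":
        if self._lock is None:
            self._setup()
        await self._lock.acquire()
        return self

    async def __aexit__(self, *args) -> None:
        self._lock.release()
```

The lazy setup matters: the backend can only be detected once a coroutine is actually running under some event loop, not at client construction time.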

rafalkrupinski commented 2 months ago

@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general

These are really cool speed-ups. Can't wait for httpx to overtake aiohttp ;)

tirkarthi commented 2 months ago

Since the benchmark seems to be using plain HTTP, I think the issue below is also related: creation of the SSL context in httpx had some overhead compared to aiohttp.

Ref : https://github.com/encode/httpx/issues/838

tyteen4a03 commented 3 weeks ago

Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project because of this, whereas we'd like to have only one set of APIs.

HuiDBK commented 3 weeks ago

Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project because of this, whereas we'd like to have only one set of APIs.

I used aiohttp to encapsulate a chained-call style client, which I personally feel is pretty good.

url = "https://juejin.cn/"
resp = await AsyncHttpClient().get(url).execute()
# json_data = await AsyncHttpClient().get(url).json()
text_data = await AsyncHttpClient(new_session=True).get(url).text()
byte_data = await AsyncHttpClient().get(url).bytes()

Example: https://github.com/HuiDBK/py-tools/blob/master/demo/connections/http_client_demo.py