I guess I figured it out. Thanks to @Reikun85.
The issue should be caused by the TCP connections. That is why I did not have the problem locally but did on K8s: on the cluster there are TCP pings from the load balancer and so on.
So I replicated it locally with tcping, and the leak appeared on my PC as well.
I also tested the problem with different uvicorn versions: the leak appears from uvicorn>=0.14.0, and there is no problem with 0.13.4.
I also noticed that the leak is present only when using the "standard" version of uvicorn, not the full one. NB: the standard version is the most common one when using gunicorn as a process manager to run uvicorn workers.
To replicate the problem I created an example gist. Just build the image with both versions, using the two provided Dockerfiles, then ping the application with a TCP ping tool; you can verify that the app's memory will grow indefinitely.
Here is the gist: https://gist.github.com/KiraPC/5016ecee2ae1dd6e860b4494415dbd49
Let me know if you need more information or if something is not clear.
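For anyone who doesn't have a tcping binary handy, the traffic pattern that triggers the leak (open a TCP connection, then close it immediately without sending any HTTP) can be approximated with a few lines of Python. This is a sketch, not the tool used above; `tcp_ping` is a hypothetical helper name:

```python
import socket
import time


def tcp_ping(host: str, port: int, count: int = 5, interval: float = 0.1) -> int:
    """Open and immediately close TCP connections, like tcping does.

    Returns the number of connections that succeeded.
    """
    ok = 0
    for _ in range(count):
        try:
            # Connect, then close right away: no bytes are ever sent,
            # mimicking a load balancer's TCP health check.
            with socket.create_connection((host, port), timeout=1):
                ok += 1
        except OSError:
            pass  # refused / timed out: count as a failed ping
        time.sleep(interval)
    return ok
```

Running this in a loop against the example app should reproduce the same memory growth as the external tcping tools mentioned below.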
Next step would be to play with the `--loop` and `--http` CLI parameters.
What tool did you use to ping? I'm not able to reproduce the issue with the gist.
I used tcping for macOS, installed via brew.
I've used this: https://neoctobers.readthedocs.io/en/latest/linux/tcpping_on_ubuntu.html
I'm on Ubuntu 20.04.
Mmmm, I'm on mobile now. I used tcping in a while loop in a bash script.
@Kludex I used this https://github.com/mkirchner/tcping
Another issue, https://github.com/encode/uvicorn/issues/1030, is suspected to involve TCP connection leaks.
@KiraPC, thanks for the detailed explanation on how to reproduce it! I was able to test it locally, and bisected the library to find the commit where Uvicorn started leaking memory.
The memory leak starts being reproducible on this commit: https://github.com/encode/uvicorn/commit/960d4650db0259b64bc41f69bc7cdcdb1fdbcbf3 (#869).
I tested #1192 applied to the current master branch, and the memory leak seems to still be happening, but it is noticeably slower. I will keep researching to find the root cause.
@adamantike I am very happy that it was able to help. It is the least I can do for the community.
In the next days, if I'm able to find some free time, I'll try to have a look.
> The memory leak starts being reproducible on this commit: 960d465 (#869).
@florimondmanca Pinging you in case you have an idea. :kissing:
A few stats that could allow a faster review and merge for PR #1192. All scenarios were run for 10 minutes in the same testing environment:

- `0.13.4`: 0 MiB memory increase (initial: 22 MiB, after tested period: 22 MiB)
- `master`: 78 MiB memory increase (initial: 22 MiB, after tested period: 100 MiB)
- `master`, with PR #1192 applied: 5 MiB memory increase (initial: 22 MiB, after tested period: 27 MiB)

I can't reproduce using the gist provided. I'm running the Dockerfile, then sending `while true; do nc -zv localhost 8000; done` to the exposed port. What am I missing?
Is it reproducible without k8s?
I can't reproduce either with tcping, running `while true; do ./tcping localhost 8000; done`.
I get `localhost port 8000 open.` messages, and mem usage seems stable.
```
2021-11-07 11:46:15,116 Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
2021-11-07 11:46:15,120 Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
2021-11-07 11:46:18,121 [36.62890625, 36.62890625, 36.62890625, 36.62890625, 36.62890625]
2021-11-07 11:46:18,131 Number of unreachable objects 0
2021-11-07 11:46:21,135 [36.828125, 36.828125, 36.828125, 36.828125, 36.828125]
...
...
2021-11-07 11:51:16,567 [36.828125, 36.828125, 36.828125, 36.828125, 36.828125]
2021-11-07 11:51:16,577 Number of unreachable objects 0
```
I have created a gist with the configuration I used to bisect the memory leak, based on what @KiraPC provided, but using `tcping` within Docker, and `docker stats` to avoid any other dependencies that could affect the memory measurements.
https://gist.github.com/adamantike/d2af0f0fda5893789d0a1ab71565de48
ok I tried that and it's stable after 5 min with 5 tcping terminals open
```
2021-11-07 18:22:42,879 Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
2021-11-07 18:22:45,879 [36.47265625, 36.47265625, 36.47265625, 36.47265625, 36.47265625]
2021-11-07 18:22:45,889 Number of unreachable objects 0
2021-11-07 18:22:48,892 [36.66796875, 36.66796875, 36.66796875, 36.66796875, 36.66796875]
...
...
2021-11-07 18:28:14,873 [36.66796875, 36.66796875, 36.66796875, 36.66796875, 36.66796875]
2021-11-07 18:28:14,882 Number of unreachable objects 0
```
```
CONTAINER ID   NAME               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O       PIDS
09a8e30c429f   fervent_robinson   1.10%   32.91MiB / 62.87GiB   0.05%   102kB / 66.6kB    36.8MB / 0B     5
```
I'm going to post my results later today. But at a quick look, I see that your machine has far more resources than mine, so maybe that's why you can't reproduce the issue.
Like, I've noticed that if I run the script for 100 iterations, I don't have any problems.
@KiraPC @adamantike Thanks so much for the detailed reports and debugging so far.
IIUC this seems to be related to #869 introducing some kind of connection leak that's visible if we let a server hit by health checks run for a few minutes.
The keepalive timeout exploration in #1192 is interesting. I wonder if the keepalive task is being properly cleaned up too? We use a separate `connection_lost` future now. Maybe we need to pass it to the keepalive task?
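For illustration, the kind of cleanup being discussed could be sketched like this. This is hypothetical code, not Uvicorn's actual implementation; the point is that the keep-alive timer should stop waiting as soon as the `connection_lost` future resolves, so the timer task cannot outlive the connection and keep it referenced:

```python
import asyncio


async def run_keepalive(timeout: float, connection_lost: "asyncio.Future[None]") -> None:
    """Idle-timeout timer that also exits as soon as the connection dies."""
    try:
        # shield() so the timeout cancels our wait, not the shared future.
        await asyncio.wait_for(asyncio.shield(connection_lost), timeout)
        # connection_lost resolved first: nothing to do, just exit.
    except asyncio.TimeoutError:
        # Timer fired first: a real server would close the idle connection here.
        pass
```

Without something like the `connection_lost` wait, a plain `sleep(timeout)` task holds a reference to the connection for the full timeout even after the peer disconnects, which would look exactly like a slow leak under frequent health checks.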
Opened #1244 with a possible fix — at least on my machine. Happy for you to try it out @KiraPC @adamantike.
Edit: meh, #1244 seems to break a bunch of fundamental functionality, at least with the test HTTPX client. Needs refining…
@euri10, did you try out the gist I shared, and use something like `docker-compose up --build --scale tcping=20` to speed up the memory leak?
I have the same issue in version 0.15.0.
Picture from pods:
The app has only the health check, because we are removing Gunicorn + UvicornWorker and I am trying to run the app with Uvicorn alone, without restarting the workers (testing).
Any progress?
Tangentially related: if you're already using Kubernetes, there's no reason (unless you have something specific to your project) to use Gunicorn as a process manager. Kubernetes should be running your workers directly. Fewer layers, less complexity, and your readiness/liveness probes will be more accurate.
> @euri10, did you try out the gist I shared, and use something like `docker-compose up --build --scale tcping=20` to speed up the memory leak?

I could only reproduce it in Docker, which I find super strange. Locally there's no leak at all that I can trigger, even by spawning 20 tcping processes (`for i in {1..20}; do tcping --port 8080 --count 10000 localhost & done`), which is what the scale compose does.
I amended the app to get a sense of where it could happen (https://gist.github.com/euri10/cfd5f0503fdb7b423db7d6d4d76d5e8e)
here's the log from the docker api https://gist.github.com/euri10/0731a5487c349040a2586ce6a9ba71a7
edit: discard that ~~seems like a docker thing to me; with the app running locally and the tcping compose scaled to 100 I can't reproduce, so there might be something network-related within the api container that causes the OOM issue~~
Can you check @florimondmanca's PR in #1244 with the small change I made?
EDIT: it seems that I cannot reproduce the numbers.
I ran some more tests with images to grow memory usage faster. I arrived at 2 conclusions:

- the issue is with `asyncio` and not `uvicorn`
- the memory stops growing after a while. I cannot tell when, but I saw that memory growth stabilized after some time.

Code to reproduce the memory consumption with `asyncio` only:
```python
import asyncio

import httpx


async def main(host, port):
    async def handler(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        async with httpx.AsyncClient() as client:
            r = await client.get('https://raw.githubusercontent.com/tomchristie/uvicorn/master/docs/uvicorn.png')
            data = await r.aread()

        body = "Hello World!"
        response = "HTTP/1.1 200\r\n"
        response += f"Content-Length: {len(body)}\r\n"
        response += "Content-Type: text/html; charset=utf-8\r\n"
        response += "Connection: close\r\n"
        response += "\r\n"
        response += body
        writer.write(response.encode("utf-8"))
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(handler, host=host, port=port)
    async with server:
        await server.serve_forever()


asyncio.run(main("0.0.0.0", "8000"))
```
This sample code started ~20MB of ram usage and after 30k requests used ~280MB.
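As an aside, the bracketed numbers in the logs earlier in the thread are memory samples in MiB. If you want to take comparable readings without third-party tools, a stdlib-only helper (illustrative; not what the gist uses, and Unix-only since it relies on `resource`) could look like:

```python
import resource
import sys


def rss_mib() -> float:
    """Peak RSS of the current process in MiB.

    ru_maxrss is reported in KiB on Linux and in bytes on macOS.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        return peak / (1024 * 1024)
    return peak / 1024
```

Calling this before and after a load test gives the same kind of "initial vs after tested period" numbers quoted in this thread, though note it reports the peak, so it can only grow.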
I was reading quickly and was about to say there is no drain, but there is one in fact. Would that explain why any framework/server combination you tried has growing memory usage?
Yes, I think this would explain why all framework/server combinations saw memory consumption growth. I would say this is an issue or inefficiency of Python itself.
@evalkaz, actually, the issue you are seeing seems related to https://github.com/encode/httpx/issues/978#issuecomment-632634459, whose root cause is the creation of a new `httpx.AsyncClient` on each request. If you change your implementation to the following, you should stop experiencing that memory leak:
```python
import asyncio

import httpx

async_client = httpx.AsyncClient()


async def main(host, port):
    async def handler(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        r = await async_client.get('https://raw.githubusercontent.com/tomchristie/uvicorn/master/docs/uvicorn.png')
        # ...
```
It seems the `httpx` code snippets weren't updated to reflect that best practice.
@adamantike thanks for the tip, but even if I initialize `httpx.AsyncClient()` outside `main()`, my memory usage grows. It started with ~22MB, and after sending 40k requests the app used ~884MB of memory.
I don't think it's a good idea to add potential issues and memory leaks from a different third-party library to the mix (in this case, `httpx`). Are you able to reproduce that memory leak with only `asyncio` (plus only the stdlib)?
Regarding your memory usage table per library/ASGI server, can you get some numbers after a greater number of requests? Going from 18MB to 39MB could seem like a lot, but it is not enough of an indication of a memory leak if it stabilizes at that number (because of caches, initializations, and anything lazily created that is not there yet when the server hasn't received any requests).
It seems that I cannot reproduce the memory increase anymore :thinking: I ran my experiments in Docker and took notes about usage, but as of today the memory stays more or less the same after 10k requests. I guess the base Docker image or one of the Python dependencies changed, but I have no idea what happened.
@florimondmanca Do you have a suggestion on how to handle this issue? Should we revert the changes and retry introducing them later on?
This should be solved by #1332. Can someone else confirm?
I tested again using the instructions from https://github.com/encode/uvicorn/issues/1226#issuecomment-962644922, with 20 concurrent tcping invocations over 10 minutes.

- `0.17.0`: 184 MiB memory increase (initial: 22 MiB, after tested period: 206 MiB)
- `0.17.1`: 0 MiB memory increase (initial: 22 MiB, after tested period: 22 MiB)

:tada: Thanks @adamantike :)
For the record: Issue was solved by uvicorn 0.17.1.
@Kludex is the leak still fixed now that https://github.com/encode/uvicorn/pull/1332 was reverted?
We reverted the changes on 0.17.1 (reverted #1332) as they caused other issues. We've taken another approach, which reverted #869.
There was a sequence of small patches on `0.17.x`, and we recommend using `0.17.4` (the latest at the moment of this comment).
I'll be locking this issue as I think further comments here will mislead other users. In case you feel like you've found a similar issue, or any other bug, feel free to create a GitHub issue or talk with us over Gitter.
Checklist

- The bug is reproducible against the latest release and/or `master`.

Describe the bug
I'm developing a FastAPI application deployed on a Kubernetes cluster using gunicorn as the process manager.
I'm also using UvicornWorkers, of course, because of the async nature of FastAPI.
After deploying the application I can see the memory growing at rest, until OOM.
This happens only when I use UvicornWorker.
Tests made by me:
Also, this happens only on the Kubernetes cluster; when I run my application locally (MacBook Pro 16, with the same Docker image used on k8s) the leak is not present.
Anyone else had a similar problem?