Closed: MrNaif2018 closed this issue 1 year ago
Hi @MrNaif2018, I saw high memory usage in one of the projects that uses tronpy.
Could you please share your tests, so I can try to reproduce the problem and understand what's happening?
Well, if by tests you mean how I detect the memory leak: I just check the Docker container's memory after a week of runtime. The code I run is this: https://github.com/bitcartcc/bitcart/blob/master/daemons/trx.py
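For reference, the kind of check I mean is just watching the container's memory over time, e.g. (the container name here is illustrative):

docker stats --no-stream bitcart-trx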
If you want to try testing it (I would really appreciate it), you can do the following:
git clone https://github.com/bitcartcc/bitcart
cd bitcart
pip install -r requirements/base.txt
pip install -r requirements/daemons/trx.txt # installs tronpy
TRX_SERVER=https://rpc.ankr.com/http/tron TRX_DEBUG=true python3 daemons/trx.py
My suspicion is that tronpy's httpx client has some leak issues; with aiohttp it is way more stable. That's why we temporarily use this workaround: https://github.com/bitcartcc/bitcart/blob/b2f0c014779e8d329f8f476fe58ea00a9d085d3f/daemons/trx.py#L34-L53
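Roughly, the idea behind that workaround is an aiohttp-backed object exposing the small subset of the httpx.AsyncClient interface that gets called. A minimal sketch (class and method names are illustrative, not the actual bitcart code; note that httpx responses have a synchronous .json(), while aiohttp's is async, hence the small wrapper):

import aiohttp

class _Response:
    # Mimics the synchronous .json() of an httpx.Response.
    def __init__(self, data):
        self._data = data

    def json(self):
        return self._data

class AIOHTTPClient:
    # Must be constructed while an event loop is running (e.g. at daemon startup).
    def __init__(self, timeout=10.0):
        self._session = aiohttp.ClientSession(
            timeout=aiohttp.ClientTimeout(total=timeout)
        )

    async def post(self, url, json=None, headers=None):
        async with self._session.post(url, json=json, headers=headers) as resp:
            data = await resp.json()
        return _Response(data)

    async def aclose(self):  # same cleanup method name httpx uses
        await self._session.close()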
Another fix I applied in production is jemalloc, to avoid memory fragmentation, but overall memory usage is still growing. I haven't re-run my profiler to gather a flamegraph after the series of fixes I made. jemalloc seems to slow the growth, but it still grows. Maybe it's my app now and not tronpy, I'm not sure. I would really appreciate your help (I can even send my contacts for further discussion).
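For the record, enabling jemalloc is just an environment-level change; on Debian/Ubuntu-based images it looks roughly like this (the package name and library path vary by distro, so adjust for your image):

apt-get install -y libjemalloc2
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 python3 daemons/trx.py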
@abaybek You can contact me via chuff184@gmail.com or MrNaif_bel on Telegram (preferred) if you want to debug this issue together. I've spent a few months on this :D
I googled and found that these might help: https://github.com/encode/httpx/issues/978 https://bugs.python.org/issue40727
Yep, I've checked those. But the thing is, we create the async client just once. Though I tested many things, and in production it now uses an aiohttp client + orjson + jemalloc etc., so I'm not sure which one contributed most to the fix.
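That is, we already follow the pattern those links recommend: a single long-lived client reused for every request instead of one per call. A sketch (the URL and payload handling are placeholders, not our actual code):

import httpx

# One client for the whole process, so the connection pool is reused
# and closed exactly once at shutdown.
client = httpx.AsyncClient(timeout=10.0)

async def call_node(url, payload):
    resp = await client.post(url, json=payload)
    resp.raise_for_status()
    return resp.json()

# At shutdown: await client.aclose()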
Confirmed: the library is fine.
In case you think it's a memory leak, note that the place where the leak appears to happen isn't always the real issue. In my case the flamegraphs showed that all the leaks were in HTTP client code, i.e. tronpy's calls to the httpx lib; that is where memory was allocated and not deallocated. But the real issue in my specific case was, for example, maintaining an uncapped list of events. That said, for production environments an allocator better than the built-in malloc can only help; you can employ jemalloc, for example, like so: https://github.com/bitcartcc/bitcart-docker/blob/7c259836275f56f58acc755fbea6984cae0df6af/compose/trx.Dockerfile#L39
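The fix for that kind of leak is simply to cap the buffer. A sketch of the idea (not the actual bitcart code; the cap value is illustrative):

from collections import deque

MAX_EVENTS = 10_000  # illustrative cap
events = deque(maxlen=MAX_EVENTS)  # oldest entries are evicted once full

def record_event(event):
    events.append(event)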
The library is good and works well in production environments!
Hi everyone! Has anyone using tronpy in production experienced something like a memory leak? In my tests, memory usage with aiohttp was more stable than with httpx, so maybe we should replace it?