sinamehrabi opened this issue 2 years ago
Hello @sinamehrabi,
pyvips and libvips are used in huge systems without problems, so I doubt you have a memory leak. It's probably a combination of memory fragmentation and the libvips operation cache.
I would experiment with something like jemalloc, and also try to make a small test case that shows the problem.
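One quick way to test the fragmentation theory is to swap the allocator without touching the code, by preloading jemalloc into the worker process. A sketch for Debian/Ubuntu; the package name, the `.so` path, and the `app:app` module path are assumptions that vary by distro and project:

```shell
# install jemalloc (Debian/Ubuntu package name; differs on other distros)
sudo apt install libjemalloc2

# preload it for the server process; adjust the .so path for your system
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 \
    gunicorn -k uvicorn.workers.UvicornWorker app:app
```

If RSS stabilises under jemalloc, the growth was fragmentation in glibc malloc rather than a true leak.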
I tried this:
```python
#!/usr/bin/python3

import os
import random
import sys

import psutil
import pyvips

process = psutil.Process(os.getpid())

print("iteration, process size (MB)")
for i in range(int(sys.argv[2])):
    with open(sys.argv[1], 'rb') as fh:
        print(f"{i}, {process.memory_info().rss / (1024 * 1024):.2f}")
        img = pyvips.Image.thumbnail_buffer(fh.read(), random.randint(10, 500))
        data = img.write_to_buffer(".tif", compression='jpeg', Q=90)
```
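If psutil isn't available, a similar measurement can be made with only the standard library; a minimal sketch (note this reports the *peak* resident set size, not the current one, and the unit differs between platforms):

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size in MB (Linux reports KiB, macOS reports bytes)."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        peak //= 1024  # normalise macOS bytes to KiB
    return peak / 1024

print(f"peak RSS: {peak_rss_mb():.2f} MB")
```

Since the peak never decreases, a flat curve here is an even stronger sign that memory use has plateaued.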
I ran it like this:
```
$ time ./soak.py ~/pics/k2.jpg 20000 > memuse.csv
```
On this PC (16 cores, 32 threads, Ubuntu 21.10) it hits 115MB at iteration 1000 and stays there (roughly) to the end, with some noise from the Python GC.
I graphed memuse vs. time:

[graph: process size vs. iteration, levelling off near 115 MB]
Hi, I use pyvips to serve images with various operations applied to them (with FastAPI, and Gunicorn running Uvicorn workers). I use code like this in functions and pass the image from one to the next:

```python
img = pyvips.Image.thumbnail_buffer(bytes_image, x, height=y)
# ... some processing on the image ...
data = img.write_to_buffer(...)  # first argument is the target format string, e.g. ".jpg"
```
I tested this code in production, and our monitoring shows a memory leak; the number of open file descriptors is also rising. I can't work out which line of code causes it.
I tried set_cache_max and similar settings suggested in other issues, and saw no effect.
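Since the report mentions file descriptors rising as well as memory, it can help to count descriptors directly inside the service and log the number per request. A minimal Linux-only sketch using `/proc` (the `/proc/self/fd` path does not exist on macOS):

```python
import os

def open_fd_count():
    """Number of file descriptors currently open in this process (Linux only)."""
    return len(os.listdir("/proc/self/fd"))

print("open fds:", open_fd_count())
```

If this count climbs steadily across requests, something is holding files or sockets open; if it stays flat while RSS grows, the problem is memory alone.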