aIligat0r opened this issue 7 months ago
It looks like it thinks the errors might be coming from lines 192 and 700?
If I increase the number of loops to 1000, I find that the memory drops down again at a certain point. Since it is not continuously increasing, I don't think it is a leak.
You might like to read #7935, in particular https://github.com/python-pillow/Pillow/issues/7935#issuecomment-2031804237
Pillow's memory allocator doesn't necessarily release the memory in the pool back as soon as an image is destroyed, as it uses that memory pool for future allocations. See Storage.c (https://github.com/python-pillow/Pillow/blob/main/src/libImaging/Storage.c#L310) for the implementation.
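A minimal sketch of how that pool can be capped, assuming the private `Image.core` hooks (they mirror the documented `PILLOW_BLOCKS_MAX` environment variable and may change between releases; the image path is a placeholder):

```python
from PIL import Image

# Keep zero blocks in Pillow's internal cache so freed image memory is
# returned to the system allocator instead of being held for reuse.
# Equivalent to setting PILLOW_BLOCKS_MAX=0 before starting Python.
Image.core.set_blocks_max(0)

for _ in range(100):
    with Image.open("test.png") as im:  # placeholder path
        im.load()
```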
> If I increase the number of loops to 1000, I find that the memory drops down again at a certain point.
It would probably be good to add `gc.collect()` (`import gc`) to the end of each loop. It might just not be running that often.
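A minimal sketch of that suggestion (the path is a placeholder):

```python
import gc

from PIL import Image

for _ in range(100):
    with Image.open("test.png") as im:  # placeholder path
        im.load()
    gc.collect()  # force a collection at the end of each iteration
```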
I added `gc.collect()` in, but it doesn't make an obvious difference.
> It looks like it thinks the errors might be coming from lines 192 and 700?
I see line 1083 on my machine. This would be easier to discuss if the original image could be uploaded here.
> I have previously encountered an issue with the WebP format. The leak was in the WebPImagePlugin plugin. I solved this by changing the value of the variable `HAVE_WEBPANIM` (`PIL._webp.HAVE_WEBPANIM`) to False.
If I test a WebP image with your above code, I again find that the memory drops down again at a certain point.
> If I increase the number of loops to 1000, I find that the memory drops down again at a certain point.
>
> It would probably be good to add `gc.collect()` (`import gc`) to the end of each loop. It might just not be running that often.
This saved me a lot. I encountered a similar problem where I load HD images into memory (35 MiB per image) and saw that RAM quickly rose to 1 GiB within a short span of time. I solved it by assigning `img = None` and calling `gc.collect()` at the end of every iteration in the loop. For some reason the garbage collector does not seem to collect garbage quickly enough in a multithreaded/multiprocessing setting.
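A rough sketch of that workaround, assuming a plain loop over placeholder paths:

```python
import gc

from PIL import Image

paths = ["frame_0001.png", "frame_0002.png"]  # placeholder HD image paths

for path in paths:
    img = Image.open(path)
    img.load()    # pull the pixel data into memory
    # ... process img ...
    img = None    # drop the reference explicitly
    gc.collect()  # collect right away instead of waiting for the GC to run
```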
Hello! I have not found a solution to this problem. If one already exists, I apologize in advance.
When opening images, the amount of memory is constantly increasing. The library seems to hold on to the opened images somehow. I'm opening each image in a context manager.
Simulated leak:
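The exact script isn't shown above, but the pattern being described is roughly this (placeholder path, memory tracked with tracemalloc):

```python
import tracemalloc

from PIL import Image

tracemalloc.start()

for i in range(100):
    # Open the image in a context manager so the file is closed each time.
    with Image.open("image.png") as img:  # placeholder path
        img.load()
    if i % 10 == 0:
        current, peak = tracemalloc.get_traced_memory()
        print(f"iteration {i}: current={current // 1024} KiB, peak={peak // 1024} KiB")
```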
Out:
It looks like a memory leak in the PngImagePlugin.py plugin.
Is there a way to work around this problem in tasks where you have to open a lot of images?
I have previously encountered an issue with the WebP format. The leak was in the WebPImagePlugin plugin. I solved this by changing the value of the variable `HAVE_WEBPANIM` (`PIL._webp.HAVE_WEBPANIM`) to False. But now I'm facing the same kind of problem in PngImagePlugin.
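The WebP workaround mentioned above amounts to a monkeypatch along these lines; `PIL._webp` is a private module and `HAVE_WEBPANIM` only exists in Pillow versions that still expose the flag, so treat this as a sketch rather than a supported API:

```python
import PIL._webp

# Disable the animated-WebP code path before any WebP file is opened.
# HAVE_WEBPANIM is a private flag and is not present in every Pillow release.
PIL._webp.HAVE_WEBPANIM = False

from PIL import Image

with Image.open("animation.webp") as im:  # placeholder path
    im.load()
```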