I am attempting to run image segmentation on an image from a reasonably large chip and the memory usage is higher than I would have hoped.
This script reproduces the described behavior.
You can download the image from the `url` value at https://archive-api.lco.global/frames/56127995/
It appears to be taking 8 GB of RAM to do the source deblending on this single image (see plot).
While this is doable for a single image on a laptop, it makes parallel reductions infeasible. Is there something in how we slice the images that is hanging on to memory when it is not needed?
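One mechanism that could produce exactly this: NumPy basic slicing returns a view, and a view keeps the entire parent array alive for as long as the view exists, even if the cutout is only a few pixels. A minimal illustration of the difference (names here are hypothetical):

```python
import numpy as np

# A large "frame" (~69 MB) standing in for the full chip image.
frame = np.zeros((3000, 3000), dtype=np.float64)

# Basic slicing returns a *view*: the small cutout holds a reference
# to the whole parent array, so the 69 MB cannot be freed while the
# cutout is alive, even after `frame` itself goes out of scope.
cutout = frame[100:120, 100:120]
print(cutout.base is frame)   # True: the view pins the parent array

# An explicit copy breaks the link; only the cutout's own memory
# (a few KB) is retained afterwards.
cutout = cutout.copy()
print(cutout.base is None)    # True: no parent reference remains
```

If per-source cutouts are kept as views of the full frame (or of per-source stamps derived from it), each retained cutout pins its parent buffer, which would look like slicing "hanging on to memory".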