Closed: mzur closed this issue 9 months ago.
@lehecht If this sounds interesting to you, feel free to assign yourself to the issue. It could get quite complicated, though. Otherwise I'll have a look when I find the time. I can imagine that this could be a big problem because high-resolution images are not that uncommon.
It might be a Linux thing. I can easily cache 50 images (5.7k×3.8k) with ~1 GB of memory consumption in Firefox/Chrome (Firefox needs a little more memory). How do you measure the memory consumption?
In `about:memory` in Firefox.
If you are there on Monday, we can probably have a look at this together. For me, the images currently take up 600 MB.
I just noticed that the cached images in the annotation tool can take up a huge amount of memory. I have a volume with high-resolution images (6720x4480) and 7 of these cached images require almost 1 GB of RAM. Currently, the cache is configured to retain up to 200 images! This crashed my machine when I viewed too many images.
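A quick back-of-the-envelope calculation shows why a handful of images is enough to reach 1 GB: a decoded image is stored uncompressed, at 4 bytes per pixel (RGBA). This sketch only illustrates the arithmetic; the actual per-image overhead in the browser may be higher, since both the `img` element and its `canvas` copy hold decoded data.

```javascript
// Estimate the uncompressed memory footprint of the cached images.
const width = 6720;
const height = 4480;
const bytesPerPixel = 4; // RGBA, 1 byte per channel
const bytesPerImage = width * height * bytesPerPixel; // 120,422,400 bytes

// Roughly 115 MiB per decoded image...
console.log((bytesPerImage / 1024 ** 2).toFixed(1) + ' MiB per image');

// ...so 7 cached images already approach 1 GiB, before counting the
// extra canvas copy of each image.
const images = 7;
console.log((images * bytesPerImage / 1024 ** 3).toFixed(2) + ' GiB total');
```

With the cache limit at 200 images, the theoretical worst case for this volume would be well over 20 GB of decoded pixel data, which explains the crash.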
I think the maximum cache size should equal the cache size configured in the annotation tool settings (maybe plus 1 or 2).
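Bounding the cache to the configured size could be done with a small LRU (least recently used) eviction policy. This is a minimal sketch with hypothetical names, not the actual cache implementation in the annotation tool; it only shows the eviction idea, where `maxSize` would mirror the configured cache size plus a small buffer.

```javascript
// Minimal LRU cache sketch: keep at most `maxSize` entries, evicting the
// least recently used one. A JS Map iterates in insertion order, so the
// first key is always the least recently used entry.
class ImageCache {
  constructor(maxSize) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert the entry to mark it as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Evict the least recently used entry.
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

In the real cache, eviction should also release the evicted image's resources (e.g. drop the canvas reference) so the memory can actually be reclaimed.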
Also, we can explore whether the memory requirement can be reduced in general. Each cached image has an `img` element (source) and a `canvas` element that contains the image again (possibly with applied color adjustment). There are several opportunities to reduce the memory requirements:

- The `canvas` can be deleted if the image has no CORS configured. In this case the image is rendered directly from the `img`.
- We don't really need one canvas for each image either. The canvas is used for color adjustment and to fix the EXIF rotation. This can probably be done on the fly when an image should be rendered: use a single canvas element, draw the new image, apply the color adjustment and then pass it on to the OpenLayers map for drawing.
I think the multiple canvas elements with the uncompressed images are the main problem here.