aherbert / gdsc

GDSC ImageJ Plugins
http://www.sussex.ac.uk/gdsc/intranet/microscopy/UserSupport/AnalysisProtocol/imagej/gdsc_plugins/
GNU General Public License v3.0

"An error occurred during processing: 51745980" with larger stacks #2

Open lollopus opened 5 years ago

lollopus commented 5 years ago

Hi Alex, I started to explore the use of the FindFoci GUI to count GFP fluorescent neurons in deconvolved and stitched brightfield stacks of cleared tissue, but I get the error "An error occurred during processing: 51745980" whenever I use stacks larger than a certain size. For instance, I get it with a stack of size 1430x326x111 (32-bit), which is just a cropped test version of what I would like to use for my actual analyses (up to 1400x5000x100).

Please note that I am using a parameter set that works reasonably well at identifying my neurons of interest in a highly cropped version of the same stack.

The console throws the following:

Exception in thread "Thread-5" java.lang.ArrayIndexOutOfBoundsException: 51745980
    at gdsc.foci.FindFociFloatProcessor.buildHistogram(FindFociFloatProcessor.java:197)
    at gdsc.foci.FindFociFloatProcessor.buildHistogram(FindFociFloatProcessor.java:138)
    at gdsc.foci.FindFociBaseProcessor.findMaxima(FindFociBaseProcessor.java:211)
    at gdsc.foci.FindFoci.findMaxima(FindFoci.java:1835)
    at gdsc.foci.FindFoci.exec(FindFoci.java:991)
    at gdsc.foci.controller.ImageJController.run(ImageJController.java:270)
    at java.lang.Thread.run(Thread.java:748)

My system has 21GB allocated to Fiji. Is this an issue that you could look into or am I just missing something with the choice of parameters?

aherbert commented 5 years ago

This is a bug. I'll fix it next week.

I have never seen this before, as I have not used the 32-bit support very much.

Thanks for reporting it.

Alex

lollopus commented 5 years ago

To add to my previous report I am also having blocking issues with the core FindFoci plugin when attempting to process a large stack. Perhaps this is related to the same problem encountered in the GUI plugin (see above), perhaps not.

At any rate, the input stack is 2184x8954x125 (32bit) and it occupies 3.8GB of RAM.

When I run FindFoci with parameters obtained using the optimizer I get the out of memory error "<All available memory (21333MB) has been used>". This also occurs if I select a very small rectangular ROI in the stack before running the plugin.

Are such large memory requirements to be expected? Even if this is not a bug, would you consider optimising the code to reduce them?

Thanks Alex for your feedback.

aherbert commented 5 years ago

> whenever I use stacks larger than a certain size

I've fixed this and released a new version via the ImageJ update site. The code should not error now. But it will still run out of memory on large images.

> Are such large memory requirements to be expected?

Yes, unfortunately.

This applies more to the GUI, as it caches intermediate states, but the plain plugin has large memory requirements too.

The code is currently optimised for speed over memory. It creates a new version of the input image as a single linear array for 3D images (a 4-byte datatype, either int or float). This is matched by an array holding the currently assigned maxima (int, 4 bytes) and an array for processing state (1 byte).

For 8/16-bit images the image histogram is stored efficiently and is not a big memory requirement.

In the case of 32-bit float images there is also a float histogram that can, in the worst case, be as long as the image itself (4 bytes per value and 4 bytes per count), plus a look-up table of int (4 bytes).

This equates to, per pixel:

- 4 bytes for the duplicated image data
- 4 bytes for the assigned maxima
- 1 byte for the processing state
- up to 8 bytes for the float histogram (value + count)
- up to 4 bytes for the histogram look-up table

i.e. up to 21 bytes per pixel for a 32-bit image, versus the 4 bytes of the input.

On top of this there will be the results data, plus correspondingly sized output masks if requested.

So unfortunately your 3.8GB image would use 3.8 * 21/4 = 19.95 GB in the worst case.
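As a back-of-the-envelope check, the worst case can be computed directly from the stack dimensions. The class and method names below are illustrative, not part of the plugin:

```java
// Illustrative sketch (not the plugin's code): worst-case working memory
// for FindFoci on a 32-bit stack, from the per-pixel costs described above.
public class FindFociMemoryEstimate {

    // 4 (image copy) + 4 (maxima) + 1 (state)
    // + 8 (float histogram) + 4 (look-up table) = 21 bytes per pixel
    static final long WORST_CASE_BYTES_PER_PIXEL = 21;

    static long estimateBytes(long width, long height, long depth) {
        return width * height * depth * WORST_CASE_BYTES_PER_PIXEL;
    }

    public static void main(String[] args) {
        // The 1430x326x111 stack from the original report. Note that its
        // pixel count (51745980) is exactly the out-of-bounds index in
        // the reported exception.
        long pixels = 1430L * 326 * 111;
        System.out.println("Pixels: " + pixels);
        System.out.printf("Worst case: %.2f GB%n",
                estimateBytes(1430, 326, 111) / 1e9);
    }
}
```

Incidentally, the out-of-bounds index in the original stack trace (51745980) is exactly the pixel count of the 1430x326x111 stack.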

The algorithm could be altered to not duplicate the original input image. This is possible via an abstraction layer but would slow down the processing a fair bit.

It is not really possible to avoid the 4-byte array for the current maxima. This used to be a 2-byte array but was changed to support large images, which can easily have more than 65535 candidate maxima.

The state array of 1 byte also cannot be changed. It is already reused to enable more than 8 states per pixel to be processed.

You could try using a 16-bit image. Do you absolutely need the 32-bit data? If you convert to 16-bit with scaling (stretching the histogram to 0-65535) you should be able to find very similar foci. You will just lose the exact intensities in the results table, which may be of analytical value. You could recreate the original values if you do a custom scaling and store the minimum and scale factor for the conversion:

int = (float - min) * scale
float = int / scale + min

where scale = 65535 / (max - min).
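A minimal sketch of that round trip (the class and method names are my own, not part of the plugin):

```java
// Sketch of the suggested custom scaling: store min and the scale factor
// so approximate float intensities can be recovered from 16-bit values.
public class ScaleTo16Bit {

    final float min;
    final float scale; // 65535 / (max - min)

    ScaleTo16Bit(float min, float max) {
        this.min = min;
        this.scale = 65535f / (max - min);
    }

    /** Forward conversion: int = (float - min) * scale. */
    int toInt(float value) {
        return Math.round((value - min) * scale);
    }

    /** Inverse conversion: float = int / scale + min. */
    float toFloat(int value) {
        return value / scale + min;
    }

    public static void main(String[] args) {
        ScaleTo16Bit c = new ScaleTo16Bit(-1.5f, 2.5f); // image min/max
        int i = c.toInt(0.5f);     // stretched into 0-65535
        float back = c.toFloat(i); // close to 0.5, within quantisation error
        System.out.println(i + " -> " + back);
    }
}
```

The recovered value is only accurate to the 16-bit quantisation step, (max - min) / 65535, which is the precision loss referred to above.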

> very small rectangular ROI in the stack

The current code makes no allowance for crops. This is because the default processing estimates the background using the entire image; with a crop the background estimate would differ and the algorithm would produce different results. This was deemed unintuitive, so the entire image is processed: anything outside the ROI is ignored when finding maxima but is still used for background estimation.

This support could be changed to use processing within the bounding box of the ROI.

> would you consider optimising the code to reduce them?

An option for large images may be the way forward. This can use specialised processing with less memory use. It will still be a memory hog though due to the working state for each pixel that the algorithm requires. The best case would be to not duplicate the input image and support crops making all the working memory the size of the crop (plus a 1 pixel border needed to identify the same local candidate maxima).
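For illustration, the working region for such crop support might be computed as the ROI bounding box plus the 1-pixel border, clamped to the image. This is a sketch with assumed names, not the plugin's code:

```java
import java.awt.Rectangle;

// Sketch: expand the ROI bounding box by 1 pixel on each side, clamped to
// the image bounds, so candidate maxima on the crop edge are identified
// the same way as in a full-image run.
public class CropBounds {

    static Rectangle workingRegion(Rectangle roi, int imageWidth, int imageHeight) {
        int x1 = Math.max(0, roi.x - 1);
        int y1 = Math.max(0, roi.y - 1);
        int x2 = Math.min(imageWidth, roi.x + roi.width + 1);
        int y2 = Math.min(imageHeight, roi.y + roi.height + 1);
        return new Rectangle(x1, y1, x2 - x1, y2 - y1);
    }

    public static void main(String[] args) {
        // A 100x50 ROI at (10,10) expands to 102x52 at (9,9)
        System.out.println(workingRegion(new Rectangle(10, 10, 100, 50), 2184, 8954));
    }
}
```

As noted above, processing only this region gives the same results as a full-image run only when the background does not depend on the whole image, e.g. with an absolute background threshold.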

Please start a new issue and I can update it when things improve for big image support.

lollopus commented 5 years ago

Great, detailed explanation! I humbly suggest that you add an abridged version of it as a paragraph at the beginning of the manual, as it helps a lot in planning stack sizes and bit depths to avoid running out of memory, rather than proceeding by trial and error.

As for my current analysis, your suggestion to scale to 16-bit is clearly the way to go. I'll proceed in the next few days.

Finally, I agree that implementing crops would be helpful. In fact, I'm using absolute backgrounds, so there wouldn't be any ambiguity in this case. Ideally you could also extend this optimisation to input masks by creating arrays for the bounding rectangle...?

One last feature might be to predict the required memory and give the user feedback before running the plugin. The BigStitcher fusion step has a similar feature, for instance.