fiji / microscope-image-quality

TensorFlow-based classifier for microscope image focus quality.
https://imagej.net/Microscope_Focus_Quality
Apache License 2.0

Add option to analyze only every Nth image tile #4

Closed: samueljyang closed this issue 6 years ago

samueljyang commented 6 years ago

One limitation of the plugin is that it can take a while to run on larger datasets. But oftentimes one isn't interested in sampling the exact focus quality of the entire image, just a rough estimate over several regions.

An optional menu parameter to subsample every Nth image tile (in x and y) would cut the computation to roughly 1/N^2 of the full analysis, allowing the user to trade off image coverage against compute time.
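For what it's worth, on the ImageJ side such an option could be exposed as a SciJava parameter. A minimal sketch, with a hypothetical command class and field name (not the plugin's actual code):

```java
import org.scijava.command.Command;
import org.scijava.plugin.Parameter;
import org.scijava.plugin.Plugin;

// Hypothetical sketch of the extra menu option; the real plugin's
// class name and menu path will differ.
@Plugin(type = Command.class, menuPath = "Plugins>Microscope Focus Quality")
public class FocusQualitySketch implements Command {

    // Analyze only every Nth 84x84 tile in x and y; N = 1 analyzes everything.
    @Parameter(label = "Analyze every Nth image tile", min = "1")
    private int tileStride = 1;

    @Override
    public void run() {
        // ... existing analysis, skipping tiles where
        // tileX % tileStride != 0 || tileY % tileStride != 0 ...
    }
}
```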

ctrueden commented 6 years ago

The model on the TensorFlow side takes care of chunking the image into patches. So I see two ways forward:

  1. Update the TensorFlow model to have a parameter like you describe.
  2. Do some processing on the ImageJ side so that the entire image we feed into TensorFlow consists of only the region(s) to be analyzed.

I am far from a TensorFlow expert, so cannot really comment on the feasibility of (1). Regarding (2), it's certainly doable, but my concern would be that the trained model might behave strangely for non-contiguous image regions, since that is not the type of data on which it was trained. To avoid the discontinuity, we could call the TensorFlow model multiple times, once per region being analyzed. Do you think that would be acceptable?

samueljyang commented 6 years ago

For (2), calling the model multiple times would work, so long as it's still faster to do so than to evaluate the entire image. But this particular TensorFlow model looks at each 84x84 image tile independently, so assembling a montage of discontinuous 84x84 image tiles would work as well.
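To sketch that montage idea in plain Java (illustrative names; assumes a float[height][width] image and ignores any partial tiles at the edges):

```java
/** Copies every Nth 84x84 tile (in x and y) into a contiguous montage. */
public class TileMontage {

    static final int TILE = 84;

    static float[][] buildMontage(float[][] image, int n) {
        int tilesY = image.length / TILE;
        int tilesX = image[0].length / TILE;
        int keptY = (tilesY + n - 1) / n; // selected tile rows
        int keptX = (tilesX + n - 1) / n; // selected tile columns
        float[][] montage = new float[keptY * TILE][keptX * TILE];
        for (int ty = 0; ty < tilesY; ty += n) {
            for (int tx = 0; tx < tilesX; tx += n) {
                int outY = (ty / n) * TILE, outX = (tx / n) * TILE;
                for (int y = 0; y < TILE; y++)
                    for (int x = 0; x < TILE; x++)
                        montage[outY + y][outX + x] = image[ty * TILE + y][tx * TILE + x];
            }
        }
        return montage;
    }
}
```

A lookup table mapping montage tiles back to original tile coordinates would then let the results be drawn back onto the original image in the right places.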

ctrueden commented 6 years ago

@samueljyang Thanks for the info. It sounds like the way to go, then, is to add some code on the ImageJ side to create a montage image while still invoking TensorFlow only once, and then to construct the overlay on the original image accordingly.

The simplest implementation I see would be to go modulo N dimension-wise. For example: a 2D image of size 420x420 would normally be 5x5 in terms of 84x84 tiles. For N=2, we'd have the following tiles selected:

+---+---+---+---+---+
| X |   | X |   | X |
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
| X |   | X |   | X |
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
| X |   | X |   | X |
+---+---+---+---+---+

And for N=3:

+---+---+---+---+---+
| X |   |   | X |   |
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
| X |   |   | X |   |
+---+---+---+---+---+
|   |   |   |   |   |
+---+---+---+---+---+
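In code, the selection rule is just a per-dimension modulo test. A quick sketch that reproduces the two grids above:

```java
/** Prints which tiles of a 5x5 grid are selected for strides N = 2 and N = 3. */
public class TileSelectionDemo {
    public static void main(String[] args) {
        for (int n : new int[] { 2, 3 }) {
            System.out.println("N = " + n + ":");
            for (int row = 0; row < 5; row++) {
                StringBuilder line = new StringBuilder();
                for (int col = 0; col < 5; col++) {
                    line.append(row % n == 0 && col % n == 0 ? "X " : ". ");
                }
                System.out.println(line);
            }
        }
    }
}
```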

Do you think that would be sufficient?

marktsuchida commented 6 years ago

Since @samueljyang asked me to comment: From my perspective, this should work fine. An alternative would be to specify the per-dimension interval in pixels, instead of tiles, which would allow for arbitrary spacing. I don't have a strong preference either way.

samueljyang commented 6 years ago

Thanks Mark and Curtis. Either would be fine for me.

ctrueden commented 6 years ago

I started coding a 2D algorithm for chopping out the tiles. I like the idea of doing the spacing in pixels instead of forcing everything to be offsets divisible by 84. I will probably make the user option something like "Percentage of image to analyze" from 1 to 100, if that works for you. The code will need to compute the X and Y spacing between tiles (in pixels) such that the tiles are evenly distributed and cover approximately the requested percentage of the image.
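The arithmetic could look something like the sketch below (illustrative, not the final plugin code): tiles placed every s pixels in x and y cover 84^2 / s^2 of the area, so s = 84 / sqrt(fraction).

```java
/** Computes pixel spacing between 84x84 tiles to cover ~percentage of the image. */
public class TileSpacing {

    static final int TILE = 84;

    /** percentage in [1, 100]; returns center-to-center tile spacing in pixels. */
    static int spacingForPercentage(double percentage) {
        double fraction = percentage / 100.0;
        // Tiles every s pixels in x and y cover TILE^2 / s^2 of the area,
        // so s = TILE / sqrt(fraction).
        return (int) Math.round(TILE / Math.sqrt(fraction));
    }

    public static void main(String[] args) {
        System.out.println(spacingForPercentage(100)); // 84  -> full coverage
        System.out.println(spacingForPercentage(25));  // 168 -> every other tile
        System.out.println(spacingForPercentage(1));   // 840
    }
}
```

In practice the spacing would then be nudged per dimension so that a whole number of tiles fits evenly across the image.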