mehta-lab / microDL

3D virtual staining with 2D and 2.5D U-Nets
BSD 3-Clause "New" or "Revised" License

Inference image shape and pixel values #177

Open JohannaRahm opened 1 year ago

JohannaRahm commented 1 year ago

When applying inference to images of shape 2562x2562 px², they are cropped to 2048x2048 px². Only the inferred image is saved, which makes it impossible to compare ground truth and input images to the inferred images, as the exact cropping area is unknown.

Furthermore, the inferred image does not have the same dynamic range as the ground truth image. In the inference figure, both target and prediction have pixel values ranging up to 33K, whereas the ground truth image only has pixel values up to 280. The inferred image is stored with values up to 33K.

Both scenarios make it hard to further compare the ground truth and inferred images outside of microDL. Could we think of a strategy to solve this?
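As a stopgap for the comparison problem, and assuming the pipeline crops centrally (a hypothesis, not confirmed microDL behavior; `center_crop_like` is a hypothetical helper, not part of microDL), the ground truth could be cropped to the prediction's shape before comparing:

```python
import numpy as np

def center_crop_like(img, target_hw):
    """Center-crop the last two (spatial) dims of img to target_hw.

    Hypothetical workaround: it assumes the inference pipeline crops
    centrally. The true crop origin is exactly what this issue asks
    microDL to document or save alongside the inferred image.
    """
    h, w = img.shape[-2:]
    th, tw = target_hw
    top, left = (h - th) // 2, (w - tw) // 2
    return img[..., top:top + th, left:left + tw]

# Example: align a 2562x2562 ground truth with a 2048x2048 prediction.
gt = np.random.rand(2562, 2562)
pred = np.random.rand(2048, 2048)
gt_cropped = center_crop_like(gt, pred.shape)
assert gt_cropped.shape == pred.shape
```

This only helps if the crop really is central; if the crop origin ever changes, metrics computed this way would silently be wrong, which is why saving the crop coordinates would be the robust fix.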

Pixel values of target image (screenshot attached)

Inference figure showing different pixel values for target image (screenshot attached)

Commit 151cc258d5ee5212c85cefdd5f1fc9f7a26b6a41, master branch

Christianfoley commented 1 year ago

Hi @JohannaRahm, have you noticed the cropping issue happening in both 2D and 2.5D models, or have you only tried 2D model inference?

I cannot find anywhere in the inference pipeline where we hard-code a 2048 pixel limit. In your configuration files, have you changed the "dataset->height" and "dataset->width" parameters to 2562?

JohannaRahm commented 1 year ago

I created an example with our models to make it easier to find the error. The inference data contains 3 FOVs with sizes 2048x2048, 2000x2000, and 1000x1000. They are cropped to 2048x2048, 1024x1024, and 512x512, respectively.

Here are the paths:

The width and height of the inference data are not specified, and in the scenario posted above the sizes of the images in the inference data slightly differ, which makes specifying a single size impossible. Looking at the sizes of the inferred images, they seem to be cropped to something divisible by the tile size (256x256). Is specifying the inference size a must, and if yes, why and where? The only width and height defined in the yml files are the tile sizes.
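For what it's worth, the observed output sizes (2562→2048, 2000→1024, 1000→512) are all consistent with a center crop to the largest power of two not exceeding the input size, which is stricter than mere divisibility by 256 (e.g. 1792 divides by 256 but 2000 was cropped to 1024). A minimal sketch of that hypothesis (`center_crop_pow2` is a guess at the behavior, not microDL's actual code):

```python
import numpy as np

def center_crop_pow2(img):
    """Center-crop each spatial dim to the largest power of two <= its size.

    Hypothetical reconstruction of the crop sizes reported in this thread;
    the actual rule used by the inference pipeline is unconfirmed.
    """
    h, w = img.shape[-2:]
    ch = 1 << (h.bit_length() - 1)  # largest power of 2 <= h
    cw = 1 << (w.bit_length() - 1)  # largest power of 2 <= w
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[..., top:top + ch, left:left + cw]

# Reproduces the sizes reported above:
for size in (2562, 2048, 2000, 1000):
    img = np.zeros((size, size), dtype=np.uint16)
    print(size, "->", center_crop_pow2(img).shape)
```

If this is the rule, it would also explain why 2048x2048 inputs pass through unchanged while every non-power-of-two size shrinks.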

I have only tried 2D model inference.

In this test the inference code from PR https://github.com/mehta-lab/microDL/pull/155 is used, but the test above showed that this unexpected behavior also occurs in commit https://github.com/mehta-lab/microDL/commit/151cc258d5ee5212c85cefdd5f1fc9f7a26b6a41 on the master branch.

JohannaRahm commented 1 year ago

Update: The pixel values are correctly handled by microDL. Fiji shows two versions of the pixel values for these images; see the screenshot with example value 83 (32851), where 32851 is the value stored in the pixel, which microDL correctly reads. (screenshot attached)
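The two numbers Fiji shows differ by exactly 32768 (2^15), which is consistent with ImageJ's handling of signed 16-bit images: they are stored as unsigned values with a +32768 offset, and the display applies a linear calibration that subtracts the offset again. A minimal sketch of that relationship (the OFFSET constant and variable names are mine, for illustration):

```python
import numpy as np

# ImageJ/Fiji stores "signed 16-bit" pixels as unsigned 16-bit with a
# +32768 offset, then displays the calibrated (offset-subtracted) value.
# This would explain the pair of numbers in the screenshot: 83 (displayed)
# vs 32851 (raw stored value, which is what microDL reads and writes).
OFFSET = 2 ** 15  # 32768

raw = np.uint16(32851)          # value stored in the pixel
displayed = int(raw) - OFFSET   # value Fiji shows under signed calibration
print(displayed)                # -> 83
```

So the dynamic-range mismatch is a display calibration issue in Fiji, not a bug in the microDL outputs.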

Soorya19Pradeep commented 1 year ago

I have the same issue as @JohannaRahm with the inference images produced by microDL. The 2012x2012 input images (resized during x-y registration) used for microDL inference produce 1024x1024 output images. The central 1024x1024 pixels of the image are chosen to run the inference.