sylvainprigent / sdeconv

2D and 3D image deconvolution with python

SPITFIR(e)'s losses get nan. #1

Closed NaokiThread closed 2 months ago

NaokiThread commented 1 year ago

I want to use SPITFIR(e) to produce high-quality immunofluorescence images. However, the loss becomes NaN for some images. How can I deal with this problem? I have already tried the following approaches.

  1. Changing the Spitfire parameters: weight, gradient_step, precision, and pad.
  2. Inserting the code below in spitfire.py:
    loss.backward()
    torch.nn.utils.clip_grad_norm_(deconv_image, max_norm=0.02)
    optimizer.step()
sylvainprigent commented 1 year ago

Hello,

I sometimes observe this NaN loss on images where the signal is low. One normalization that may help: normalize the PSF intensity by the sum of all PSF intensities, and normalize the image signal to floats in [0, 1] (dividing by the max or by the sum).
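A minimal sketch of this normalization, assuming torch tensors as in the code later in the thread (the helper name is hypothetical, not part of sdeconv):

```python
import torch

def normalize_for_deconv(image: torch.Tensor, psf: torch.Tensor):
    """Normalize the PSF so its intensities sum to 1, and rescale the
    image to floats in [0, 1] by its max (dividing by the sum also works)."""
    psf_norm = psf / psf.sum()
    image_norm = image.float() / image.max()
    return image_norm, psf_norm
```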

NaokiThread commented 1 year ago

Thanks a lot for answering my question. I'll try normalization.

NaokiThread commented 1 year ago

Hello.

image = image_full.copy()[:, :, 2]
image = torch.Tensor(image)
#print(image.shape)
image_min = image.min()
image_max = image.max()
#print(image_min, image_max)
image_normalized = (image - image_min) / (image_max - image_min)

psf_generator = SPSFGaussian((1.5, 1.5), (13, 13))
psf = psf_generator()
psf = psf.to(device)
psf = psf/psf.sum()

filter_ = Spitfire(psf, weight=0.6, reg=0.995, gradient_step=0.01, precision=1e-7, pad=13) 
out_image = filter_(image_normalized)

I tried the code above and ran Spitfire on image_normalized, but the loss became NaN after iteration 1. (Only in iteration 1 did I get a non-NaN loss.)

What should I do to get a plausible return?

[image: color intensity histogram of image_normalized]

rohud91 commented 9 months ago

Hello, I have the same issue. Is there any solution or are there steps that help with it? Thanks for the help.

sylvainprigent commented 9 months ago

Hello,

I experienced the same issue once. I figured out that the NaN values originated from the image background being zero, or at least containing large regions of zeros. These zeros cause a singularity in the loss function. In my case I solved the issue by adding an offset (of 1) to the image.
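A minimal sketch of the offset workaround on a synthetic image with a zero background (the shapes and values here are purely illustrative):

```python
import torch

# Synthetic image: a bright object on a zero-valued background,
# which is the situation reported to cause a singularity in the loss.
image = torch.zeros(64, 64)
image[16:48, 16:48] = 100.0

# Add the offset of 1 suggested above so no pixel is exactly zero.
image_offset = image + 1.0
assert image_offset.min() > 0
```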

boydcpeters commented 6 months ago

Hi, I am experiencing the same issue. As recommended, I tried adding an offset of 1 to all pixel values and have tried normalization. Nevertheless, the loss becomes NaN after the first iteration. I checked while the code was running, and quite a lot of values in the deconvolved image are set to NaN. Any idea how to fix this?

sylvainprigent commented 6 months ago

Hello,

I created a new tag, v1.0.2, that fixes the NaN issue when there are background areas filled with zeros. So if your NaN error comes from this issue, it should be fixed.

Nevertheless, the deconvolution algorithm expects a "natural" image with noise everywhere in order to apply the inverse model. Regions with zero values make me think that the image has been pre-processed, or manipulated during acquisition with an offset or something similar. I cannot guarantee that the deconvolution will work in this situation.
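One way to spot such pre-processed images before running the deconvolution is to measure how much of the image is exactly zero (a hedged sketch; the helper and threshold are mine, not part of sdeconv):

```python
import torch

def zero_background_fraction(image: torch.Tensor) -> float:
    """Fraction of exactly-zero pixels; a large value suggests the
    image was clipped or offset-subtracted during acquisition and
    may not suit the deconvolution's noise model."""
    return (image == 0).float().mean().item()

# Illustrative usage with an arbitrary 10% warning threshold.
img = torch.zeros(32, 32)
img[8:24, 8:24] = 50.0
if zero_background_fraction(img) > 0.1:
    print("warning: large zero-valued background detected")
```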

boydcpeters commented 6 months ago

Thanks for getting back to me so quickly and sorting out a fix.

I appreciate your insight regarding the deconvolution algorithm’s expectation of a “natural” image. This makes total sense, and my images contain too many underexposed pixels. I am currently attempting to apply this to expansion microscopy data, which presents its own set of challenges. The fluorophores are relatively sparse and the overall signal is diminished due to the dilution of fluorophores during the expansion process. Furthermore, the samples are relatively thick compared to normal fixed cells. Nevertheless, I am still optimizing the acquisition process to reduce the number of underexposed pixels.

Regarding the deconvolution process, my understanding is that adding an offset during acquisition is acceptable as long as it doesn’t result in underexposed (0-valued pixels) or overexposed pixels. Would you say that this assumption is correct?

Once again, thanks for the help!

sylvainprigent commented 6 months ago

To create the best conditions for image deconvolution, I would play with the illumination power, exposure time, and camera/detector gain to fill, as much as possible, the whole detector range without underexposed or overexposed pixels. This gives an image with the best intensity quantisation and no transformation of the noise. That's why, in general, I would try to avoid using an offset.

I understand that it is tricky to set up an optimal image acquisition with biological data. So, if you must use an offset, I agree that you need to be careful not to have areas in the image with underexposed or overexposed pixels. Otherwise, it will change the noise distribution in the image and the deconvolution might not work anymore.

boydcpeters commented 6 months ago

That makes total sense, thanks. I will keep it in mind during the optimization of the image acquisition.

I tried your quick fix for the NaNs on one of my Z-stacks, and am already pretty happy with the results. So I will do some further acquisition optimization and some more tweaking, but thanks for developing this tool.