WeisongZhao / sparse-deconv-py

Official Python implementation of the 'Sparse deconvolution'-v0.3.0

Sigma value is not Gaussian sigma, it is FWHM #2

Counterfeiter opened this issue 1 year ago

Counterfeiter commented 1 year ago

Hello,

there is a small bug that confused me: your sigma value is the FWHM, not the Gaussian sigma. You convert FWHM to sigma here:

https://github.com/WeisongZhao/sparse-deconv-py/blob/88c367a678048f7207392de91d550f6c1a1d6263/sparse_recon/iterative_deconv/kernel.py#L15
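
For reference, a Gaussian's FWHM and sigma are related by FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355·σ. A minimal sketch of that conversion (the function name is illustrative, not the repository's):

```python
import math

def fwhm_to_sigma(fwhm: float) -> float:
    """Convert the full width at half maximum of a Gaussian PSF
    to its standard deviation: FWHM = 2*sqrt(2*ln(2)) * sigma."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# A PSF with a FWHM of 3 px has a sigma of about 1.27 px:
print(fwhm_to_sigma(3.0))
```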

I'm an amateur astronomer and tested your work on my noisy images: https://forum.astronomie.de/threads/sparse-deconvolution.339232/

It works great, though I haven't compared it with other methods yet.

Best regards, Sebastian

WeisongZhao commented 1 year ago

Hello Sebastian,

Yes, you are right. The input is actually the FWHM of the PSF, which is the resolution of the system, rather than the true sigma value. In microscopy, we usually know the resolution of the corresponding microscope.
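
For anyone landing here, a minimal usage sketch under that convention, with the sigma argument given as the FWHM in pixels (the import path follows the repository layout, but treat the exact signature as an assumption and check sparse_recon/sparse_deconv.py):

```python
from skimage import io
from sparse_recon.sparse_deconv import sparse_deconv

im = io.imread('test.tif')

pixelsize = 65    # nm per pixel (example values)
resolution = 280  # system resolution = PSF FWHM, in nm

# The second argument is the FWHM expressed in pixels,
# not a Gaussian sigma (see the conversion sketched above).
img_recon = sparse_deconv(im, resolution / pixelsize)
```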

I am really happy you applied the method to other modalities we did not have the chance to test. Please let me know if you have other questions (see also https://bioengineeringcommunity.nature.com/posts/physical-resolution-might-be-meaningless-if-in-the-mathematical-space).

Some small tips for adjusting the parameters along one dimension:

Hope this helps, Weisong

Counterfeiter commented 1 year ago

Hi Weisong,

thanks for your help. In astronomy, the effective PSF can be calculated from the stars in the image (nearly perfect point probes). But this has limitations coming from other optical imperfections, such as coma, and also from mechanical movement. I have a bunch of questions; it would be really nice to address them here:

  1. Does the sparsity optimization have "knowledge" of the RL deconvolution that follows, or is it completely independent? I ask because I have rewritten the sigma parameter so that it can carry an imported PSF as an image.
  2. It seems the algorithm can improve if continuity is present, whatever that means in microscopy. How could I transfer this to astronomy? Normally you end up with hundreds of single-exposure images and stack them to improve the SNR. Is the stacking comparable with the xy-t dimension? The xy-z dimension could only be a small focus (out-of-focus) change across the different samples. Is this the kind of knowledge that works for the sparsity optimization? It would be interesting for lucky imaging: trying to beat the atmospheric wobbling by selecting the sharpest shots, where a "focus" change normally happens.
  3. Apropos dimensions: there is a note that 3D inputs will follow later. Did you mean x-y plus color layers?
  4. How important are the background models? Backgrounds 1 and 2 are close in my tests, and backgrounds 3 to 5 never work for my applications: the image gets bright (linear stretching) and dynamic range seems lost.

best regards

Sebastian

WeisongZhao commented 1 year ago

Hi Sebastian,

Sorry for the delayed answer. Since I lack the basics of astronomy, I will try my best to discuss your questions :)

  1. The image after the first-step reconstruction (the sparsity-continuity reconstruction) usually has a different PSF compared to the static system response; sparsity and continuity both influence the PSF of the image. Thus, the PSF after the first-step reconstruction is adjustable and should be adjusted according to the deconvolved results. This is actually not a feature to me, and it is inconvenient for the biologists who will use this tool, so I'm working on removing the exposed adjustable parameters.
  2. This sparse-deconv-py program is currently written for 2D deconvolution, but the xy-t and xy-z extensions will both improve the performance.

Regarding the meaning of the x-y continuity, here I quote directly from our paper:

"For any fluorescence microscope, to ensure sufficient Nyquist sampling criteria for the maximal spatial resolution dictated by the optics, the point spread function (PSF) must occupy more than 3×3 pixels in space (Supplementary Fig. 3c), which constitutes the basis for the continuity along the x and y axes of any fluorescence microscope."

Regarding the t or z continuity: if the positions of the stars are relatively static during your imaging, it will work, because the noise and atmospheric wobbling are not continuous along the t or z axis. The t or z sparsity will also contribute, because the sparsity will clean the out-of-focus and noise background before removing the real shining stars.

  3. The 3D means x-y-z, not color channels.
  4. The background removal is actually simple: it removes the low-frequency components of the image. The different choices denote different thresholds for the background estimation.
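
In spirit (this is a conceptual sketch, not the repository's exact routine), the background step behaves like estimating the background with a strong low-pass filter and subtracting it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def subtract_lowpass_background(img: np.ndarray, radius: float = 50.0) -> np.ndarray:
    """Estimate the background as the heavily blurred (low-frequency)
    part of the image and subtract it, clipping negatives to zero.
    The radius plays the role of the background-threshold choice."""
    background = gaussian_filter(img.astype(np.float64), sigma=radius)
    return np.clip(img - background, 0.0, None)
```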

BTW, will the intensity of the stars (the true "burning" intensity, not the measured intensity) remain stable, or will it blink or fluctuate?

Hope this helps, Weisong

Counterfeiter commented 1 year ago

Hi Weisong,

for most stars the intensity will be stable over the observation time.

In astronomy we have the same problem: the PSF calculated from the stars is usually too large to get a good deconvolution without amplifying the noise. A good workaround is to shrink the effective PSF, because 1000 rounds of RL is not an option: https://forum.astronomie.de/attachments/different_psf_iter-gif.307642/
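
A minimal sketch of that workaround using scikit-image's RL implementation (the 0.7 shrink factor and kernel size are illustrative choices; older scikit-image versions spell the keyword iterations instead of num_iter):

```python
import numpy as np
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma: float, size: int = 25) -> np.ndarray:
    """Normalized 2D Gaussian kernel used as the deconvolution PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

# Stellar FWHM measured in pixels, converted to sigma, then
# deliberately shrunk to avoid amplifying noise:
sigma_measured = 3.0 / 2.355
psf = gaussian_psf(0.7 * sigma_measured)

img = np.random.rand(128, 128)  # stand-in for a normalized star field
deconvolved = richardson_lucy(img, psf, num_iter=30)
```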

I think some parameters, like the number of sparse iterations, could also be optimized by some thresholding. I made a histogram over the sparsity iterations. My guess at what could work for my input data: when the "walking" noise peak stops moving or reaches the lower 1% region, an iteration with a good frequency-spectrum response could be selected. Histogram:

Frequency spectrum of the histogram (x-axis: frequency bins, y-axis: magnitude):

I will run some tests with a lucky-imaging sequence to also check the t direction. Thanks for your help. Is there anywhere I can read the paper?

Best regards, Sebastian

WeisongZhao commented 1 year ago

Hi Sebastian,

It's really nice to see these frequency spectra. In my usage, the optimization usually converges at around sparse iteration = 100. Because we can use simulated data, we can monitor PSNR(ground truth, iter_n) and RMSE(iter_{n+1}, iter_n); both values become stable at around sparse iteration = 100.
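
As a sketch of that monitoring (these helpers are illustrative and assume a simulated ground truth; they are not part of the repository):

```python
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray) -> float:
    """Peak signal-to-noise ratio of img against a ground-truth ref."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square difference between consecutive iterates;
    a plateau here signals that the iterations have converged."""
    return float(np.sqrt(np.mean((a - b) ** 2)))
```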

Regarding the paper: here is a link where you can read it: https://rdcu.be/c527H. The Supplementary Notes are here: https://static-content.springer.com/esm/art%3A10.1038%2Fs41587-021-01092-2/MediaObjects/41587_2021_1092_MOESM1_ESM.pdf (Note 3 is recommended).

I'm particularly interested in the "shrink the effective PSF" idea you mentioned, because I'm writing a short discussion manuscript on it. Do you have any references on it in an astronomy context? To my knowledge, there is no such notion in the field of microscopy.

Hope this helps, Weisong

Counterfeiter commented 1 year ago

Hi Weisong,

thanks for the paper. I think (and hope) I understand the solution.

Varying the PSF size is, for me, just one of the millions of regularization options that have been developed for RL. But that is just my experience from my own tests. I googled a bit and found a paper that describes basically the same idea: https://www.sciencedirect.com/science/article/abs/pii/S003039921300409X

"Meanwhile, when PSF is large, ringing artifacts will become very intense in the deblurred results. Strong regularization could reduce these artifacts but also destroy the texture in the image, which is always an intractable difficulty in image deblurring."

So I am very lucky that your method works with the bare RL algorithm and not all that fancy extra regularization.

I have one more question about the paper. A lot of the structures you investigate seem to have good contrast against the background. It seems there is an option to simply threshold the noisy background away, e.g. image[image < 0.1] = 0.0 or something similarly simple. Did you compare this with your sparsity calculation? Does the RL step improve only from the background-noise removal, or also from the noise reduction within the desired structures? I have only seen comparisons against RL on noisy data, without a second option that removes the noise level. Or did I miss something?

best regards

Sebastian

WeisongZhao commented 1 year ago

Hi Sebastian,

It's a good question, and indeed RL will benefit from removing the background noise. A simple background threshold will make RL work better; maybe this is already a routine operation in your field.

The showcases in the paper compare RL deconvolution against our sparse deconvolution (i.e., RL deconvolution after the sparsity-continuity reconstruction); we did not specifically include the image[image < 0.1] = 0.0 operation.

Certainly, there are differences between the sparsity reconstruction and a flat background threshold. The sparsity will adaptively remove the background noise and increase the contrast inside/outside the OTF (or shrink the effective PSF) of the images. The Hessian continuity will also contribute to removing the random noise background.
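
To make the difference concrete, here is a minimal sketch (not the repository's code) contrasting the flat threshold described above with the soft-thresholding operator that typically implements an L1 sparsity step:

```python
import numpy as np

def flat_threshold(img: np.ndarray, t: float) -> np.ndarray:
    """Hard background cut: values below t are zeroed,
    values above t are kept unchanged."""
    out = img.copy()
    out[out < t] = 0.0
    return out

def soft_threshold(img: np.ndarray, t: float) -> np.ndarray:
    """L1 proximal operator, the usual building block of sparsity
    steps: shrinks every value toward zero by t, which also reduces
    noise riding on bright structures, not just the background."""
    return np.sign(img) * np.maximum(np.abs(img) - t, 0.0)
```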

Best, Weisong