StanfordMIMI / DDM2

[ICLR2023] Official repository of DDM2: Self-Supervised Diffusion MRI Denoising with Generative Diffusion Models

Question about the diffusion training process. #28

Open vfcerexwn opened 7 months ago

vfcerexwn commented 7 months ago

I'm having difficulty understanding the code you provided. Could you please clarify the following points for me?

  1. Does x_noisy represent noisy images at different steps t?
  2. Is x_recon supervised by another noisy observation of the clean image?
  3. Typically, in diffusion models, isn't the noise estimated step by step? According to this code, though, we directly estimate the image. Thank you for your patience and assistance in clarifying these points.

```python
x_noisy = self.q_sample(
    x_start=x_start,
    continuous_sqrt_alpha_cumprod=continuous_sqrt_alpha_cumprod.view(-1, 1, 1, 1),
    noise=noise.detach())
x_recon = self.denoisor(x_noisy, continuous_sqrt_alpha_cumprod)

# J-Invariance optimization
total_loss = self.mseloss(x_recon, x_in['X'])
```
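For context, `q_sample` in SR3-style code bases typically implements the closed-form forward diffusion process under the `sqrt(alpha_cumprod)` parameterization. A minimal NumPy sketch (the function name and shapes here are illustrative, not the repository's exact implementation):

```python
import numpy as np

def q_sample(x_start, sqrt_alpha_cumprod, noise):
    # Closed-form forward diffusion (SR3-style parameterization):
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    return sqrt_alpha_cumprod * x_start + np.sqrt(1.0 - sqrt_alpha_cumprod ** 2) * noise

# Toy usage: one 8x8 single-channel "image" at a schedule value of 0.9
rng = np.random.default_rng(0)
x0 = rng.standard_normal((1, 1, 8, 8))
eps = rng.standard_normal((1, 1, 8, 8))
x_noisy = q_sample(x0, 0.9, eps)
```

Sampling a continuous `sqrt_alpha_cumprod` per batch element (hence the `.view(-1, 1, 1, 1)` reshape in the snippet above) lets a single forward pass cover noisy images at many different steps t.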

tiangexiang commented 7 months ago
  1. Yes.
  2. Partly yes. x_recon is indeed supervised by a noisy observation, but that observation is not a 'clean image' with manually injected noise.
  3. Yes, theoretically diffusion models infer the posterior at each time step in order to satisfy Bayes' rule, which is what makes the formulation theoretically correct. In practice, the posterior at each time step is obtained by injecting noise (with the proper noise scheduler) into a completely denoised image (generated by the denoiser). This practical implementation is adopted in many other code bases as well (e.g. SR3).
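The practical scheme described in point 3 (predict a fully denoised image, then re-noise it to land at the previous time step) can be sketched as follows. This is a simplified NumPy illustration under assumed schedule values; `denoisor` is a stand-in for the trained network, and the exact posterior mean used in a real DDPM/SR3 sampler has additional terms:

```python
import numpy as np

rng = np.random.default_rng(0)

def reverse_step(x_t, sqrt_alpha_cumprod_t, sqrt_alpha_cumprod_prev, denoisor):
    # 1. The network directly predicts the fully denoised image x_0
    #    (rather than the per-step noise eps).
    x0_hat = denoisor(x_t, sqrt_alpha_cumprod_t)
    # 2. A sample at step t-1 is obtained by re-injecting fresh noise
    #    into x0_hat according to the schedule at t-1.
    eps = rng.standard_normal(x_t.shape)
    return (sqrt_alpha_cumprod_prev * x0_hat
            + np.sqrt(1.0 - sqrt_alpha_cumprod_prev ** 2) * eps)

# Toy denoiser (identity) as a placeholder for the real network.
identity_denoisor = lambda x, s: x
x_t = rng.standard_normal((1, 1, 8, 8))
x_prev = reverse_step(x_t, 0.7, 0.9, identity_denoisor)
```

This also explains why the training loss above compares x_recon directly against another observation: the network is trained as an image predictor, and the step-by-step structure only appears at sampling time through the re-noising schedule.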