XiangLi1999 / Diffusion-LM


How did you derive your sampling algo? #39

Open jzhang38 opened 1 year ago

jzhang38 commented 1 year ago

Hi Lisa,

Thanks for your wonderful work.

May I ask how you derived the sampling algorithm mathematically for x_0-prediction? (I am looking for the sort of proof given in the DDPM paper for \epsilon-prediction.)

XiangLi1999 commented 1 year ago

This is actually quite similar to the DDPM sampling algorithm. Both \epsilon-prediction and x_0-prediction are transformed back to derive p(x_{t-1} | x_t), and both derivations rely on x_{t-1} = \sqrt{\bar\alpha_{t-1}} f_\theta(x_t, t) + \sqrt{1 - \bar\alpha_{t-1}} \epsilon with \epsilon \sim \mathcal{N}(0, I), where f_\theta(x_t, t) is the predicted x_0.

I think reading the last paragraph of section 4.2 could help.
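As a minimal sketch of the rule above (assuming a trained network `f_theta` that predicts x_0 and a precomputed `alpha_bar` schedule of cumulative products; these names are illustrative, not the repo's actual API):

```python
import torch

def x0_prediction_step(f_theta, x_t, t, alpha_bar):
    """One reverse step: predict x_0, then sample x_{t-1} from
    q(x_{t-1} | x_0 = f_theta(x_t, t))."""
    x0_hat = f_theta(x_t, t)          # predicted x_0
    if t == 0:
        return x0_hat                 # final step: return the prediction itself
    ab_prev = alpha_bar[t - 1]        # \bar{alpha}_{t-1}
    noise = torch.randn_like(x_t)
    # x_{t-1} = sqrt(abar_{t-1}) * x0_hat + sqrt(1 - abar_{t-1}) * noise
    return ab_prev.sqrt() * x0_hat + (1.0 - ab_prev).sqrt() * noise
```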

jzhang38 commented 1 year ago

My confusion is that you appear to rely on the forward process q(x_{t-1} | x_0) to sample, whereas DDPM samples by predicting the mean of the backward process p(x_{t-1} | x_t) (which we learn through the closed-form posterior q(x_{t-1} | x_t, x_0)). Is there a derivation I can find (perhaps in other papers that also use x_0-prediction) proving that these two sampling procedures are mathematically equivalent?

In other words, DDPM samples through q(x_{t-1} | x_t, x_0), but Diffusion-LM samples through q(x_{t-1} | x_0 = f_\theta(x_t, t)).
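For concreteness, here is a minimal sketch of the two updates being contrasted. The schedule names (`alpha`, `alpha_bar`, `beta`) and function signatures are assumptions for illustration, not the repo's actual API; the posterior coefficients follow the closed-form q(x_{t-1} | x_t, x_0) from the DDPM paper (Eqs. 6-7).

```python
import torch

def ddpm_posterior_step(x0_hat, x_t, t, alpha, alpha_bar, beta):
    """DDPM-style step: sample from the closed-form posterior
    q(x_{t-1} | x_t, x_0 = x0_hat), which uses both x0_hat and x_t."""
    ab_t, ab_prev = alpha_bar[t], alpha_bar[t - 1]
    coef_x0 = ab_prev.sqrt() * beta[t] / (1.0 - ab_t)
    coef_xt = alpha[t].sqrt() * (1.0 - ab_prev) / (1.0 - ab_t)
    mean = coef_x0 * x0_hat + coef_xt * x_t
    var = beta[t] * (1.0 - ab_prev) / (1.0 - ab_t)   # \tilde{beta}_t
    return mean + var.sqrt() * torch.randn_like(x_t)

def forward_marginal_step(x0_hat, x_t, t, alpha_bar):
    """Step as described in this thread: sample from q(x_{t-1} | x_0 = x0_hat),
    discarding x_t once x0_hat has been predicted."""
    ab_prev = alpha_bar[t - 1]
    return ab_prev.sqrt() * x0_hat + (1.0 - ab_prev).sqrt() * torch.randn_like(x_t)
```

The visible difference is that the posterior step reuses x_t (with variance \tilde\beta_t), while the forward-marginal step discards it and injects fresh noise of variance 1 - \bar\alpha_{t-1}.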

XiangLi1999 commented 1 year ago

Maybe check out the last equation on page 17 of the Diffusion-LM arXiv paper.

jzhang38 commented 1 year ago
[Screenshot of an equation defining the posterior mean of q(x_{t-1} | x_t, x_0)]

Thanks for your prompt reply! Yeah, I understand the training loss is essentially the same; my question is about the sampling algorithm. I think if we follow DDPM to perform sampling, we are supposed to sample with the mean as defined above, with x_0 predicted by f_\theta(x_t, t).
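For reference, a hedged write-out of the update being described here: the DDPM posterior mean of q(x_{t-1} | x_t, x_0) (Eq. 7 in Ho et al. 2020) with the predicted \hat{x}_0 = f_\theta(x_t, t) substituted for x_0 would read:

```latex
\tilde{\mu}_t(x_t, \hat{x}_0)
  = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1-\bar{\alpha}_t}\, f_\theta(x_t, t)
  + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\, x_t,
\qquad
x_{t-1} = \tilde{\mu}_t + \sqrt{\tilde{\beta}_t}\, z,
\quad z \sim \mathcal{N}(0, I),
\quad \tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t.
```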