openai / guided-diffusion


gradient of log classifying probability at $\mu$ or at $x_{t+1}$? #112

Open JianjianSha opened 1 year ago

JianjianSha commented 1 year ago

Thanks very much for the source code. I find an inconsistency in the calculation of $p(x_t \mid x_{t+1}, y)$: in the paper, just after equation (6), the gradient is $g = \nabla_{x_t} \log p_{\phi}(y \mid x_t)\big|_{x_t=\mu}$, but in the code it is

gradient = cond_fn(x, self._scale_timesteps(t), **model_kwargs)

where `x` represents $x_{t+1}$.
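For context, my reading of the derivation (paraphrasing Section 4.1 of the paper, so please correct me if I misread it) is that the gradient enters through a Taylor expansion of $\log p_\phi(y \mid x_t)$ around $x_t = \mu$:

$$
\log p_\phi(y \mid x_t) \approx \log p_\phi(y \mid x_t)\big|_{x_t=\mu} + (x_t - \mu)^\top g, \qquad g = \nabla_{x_t} \log p_\phi(y \mid x_t)\big|_{x_t=\mu},
$$

which turns the unconditional Gaussian $\mathcal{N}(x_t; \mu, \Sigma)$ into

$$
p_{\theta,\phi}(x_t \mid x_{t+1}, y) \approx \mathcal{N}(x_t;\, \mu + \Sigma g,\, \Sigma).
$$

So in the paper, $g$ is defined at the expansion point $\mu$, not at $x_{t+1}$.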

What happens if I replace `x` with `p_mean_var['mean']` (i.e. $\mu$)?
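Concretely, I mean something like the sketch below. It assumes, from my reading of `condition_mean` in `gaussian_diffusion.py`, that the gradient is used to shift the Gaussian mean by `variance * gradient`; I drop `self._scale_timesteps` for brevity, and the `dummy_cond_fn` at the end is purely hypothetical, for shape-checking only:

```python
import torch

def condition_mean_at_mu(cond_fn, p_mean_var, t, model_kwargs=None):
    # Variant of condition_mean: evaluate cond_fn at mu = p_mean_var["mean"]
    # instead of at the input x (= x_{t+1} in my notation above), so that
    # g = grad_{x_t} log p_phi(y | x_t) |_{x_t = mu}, as written in the paper.
    model_kwargs = model_kwargs or {}
    mu = p_mean_var["mean"]
    gradient = cond_fn(mu, t, **model_kwargs)  # g evaluated at mu, not x_{t+1}
    # Shift the mean by Sigma * g, as the existing code already does:
    return mu.float() + p_mean_var["variance"] * gradient.float()

# Tiny smoke test with a stand-in cond_fn (hypothetical, shapes only):
if __name__ == "__main__":
    p_mean_var = {"mean": torch.zeros(2, 3), "variance": torch.full((2, 3), 0.1)}
    dummy_cond_fn = lambda x, t, **kw: -x  # stands in for grad log p(y|x)
    print(condition_mean_at_mu(dummy_cond_fn, p_mean_var, t=torch.tensor([5, 5])))
```

The only change relative to the line quoted above is the first argument passed to `cond_fn`.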

Thanks a lot!

dnkhanh45 commented 1 year ago

You have an interesting and precise question. Have you found an answer yet? I'm looking forward to it as well.