In DreamFusion, there is this paragraph:

"In practice, the U-Net Jacobian term is expensive to compute (it requires backpropagating through the diffusion model U-Net), and poorly conditioned for small noise levels, as it is trained to approximate the scaled Hessian of the marginal density. We found that omitting the U-Net Jacobian term leads to an effective gradient for optimizing DIPs with diffusion models."

Has anyone tried incorporating the U-Net Jacobian term into the gradient? What were the results? I ran several experiments, but the generated results seem poor.
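For anyone unfamiliar with the term in question: the difference between the full gradient and the SDS gradient is exactly one Jacobian factor. Below is a toy 1-D sketch of that difference; the tanh "denoiser" and the trivial parameterization x(theta) = theta are my own stand-ins for illustration, not anything from the DreamFusion implementation.

```python
import math

def toy_denoiser(x_t):
    # Stand-in for the U-Net noise prediction eps_hat(x_t, t).
    return math.tanh(x_t)

def toy_denoiser_jacobian(x_t):
    # d eps_hat / d x_t -- the U-Net Jacobian term that DreamFusion omits.
    return 1.0 - math.tanh(x_t) ** 2

def gradients(theta, eps, sigma=1.0, w=1.0):
    x = theta               # trivial "image" parameterization, dx/dtheta = 1
    x_t = x + sigma * eps   # noised input
    residual = toy_denoiser(x_t) - eps
    # Full gradient of w * 0.5 * (eps_hat - eps)^2 w.r.t. theta:
    full_grad = w * residual * toy_denoiser_jacobian(x_t)
    # SDS gradient: same expression with the Jacobian replaced by identity.
    sds_grad = w * residual
    return full_grad, sds_grad

full, sds = gradients(theta=0.5, eps=0.1)
```

Note that when x_t sits in a flat region of the denoiser, the Jacobian factor shrinks toward zero and the full gradient vanishes even though the residual does not, which is one intuition for why the full gradient can behave poorly while the SDS gradient still provides a useful update direction.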