phizaz / diffae

Official implementation of Diffusion Autoencoders
https://diff-ae.github.io/
MIT License

About Stochastic encoder #51

chuchen2017 opened this issue 1 year ago

chuchen2017 commented 1 year ago

Thanks for your excellent work! It is very inspiring! I have a question about the stochastic encoder. Equation 8 in the paper is described as the reverse of Equation 1. Equation 8 uses the U-Net ϵθ(x_t, t, z), obtained during training, to generate x_{t+1} from x_t. However, as far as I can see, ϵθ was trained for denoising, i.e., to generate x_{t-1} from x_t. More specifically, ϵθ is used to predict the noise that already exists in x_t, so why does the stochastic encoder use the noise that ϵθ predicts to currently exist in order to map the picture to latent space? Thanks for answering!

phizaz commented 1 year ago

> Equation 8 uses the U-Net ϵθ(x_t, t, z), obtained during training, to generate x_{t+1} from x_t. However, as far as I can see, ϵθ was trained for denoising, i.e., to generate x_{t-1} from x_t.

Let me first say that the U-Net predicts the noise within the image, which can be thought of as a direction of change from $x_t$ to $x_0$ (I mean $x_0$, not $x_{t-1}$). That said, your intuition is not wrong: Eq 8 goes from $x_t$ to $x_{t+1}$, while Eq 1 goes from $x_t$ to $x_{t-1}$. How could both use the same direction from the same model? In the limit where $\Delta t \rightarrow 0$, the change from $x_t$ to $x_{t+1}$ and the change from $x_{t-1}$ to $x_t$ are actually described by the same direction! This is how you obtain Eq 8.
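To make that limit argument concrete, here is a sketch in DDIM-style notation; I'm paraphrasing rather than quoting the paper's equations, with $\alpha_t$ the cumulative noise schedule and $f_\theta$ the predicted $x_0$. The deterministic (Eq 1-style) step is

$$
x_{t-1} = \sqrt{\alpha_{t-1}}\, f_\theta(x_t, t, z) + \sqrt{1-\alpha_{t-1}}\,\epsilon_\theta(x_t, t, z),
\qquad
f_\theta(x_t, t, z) = \frac{x_t - \sqrt{1-\alpha_t}\,\epsilon_\theta(x_t, t, z)}{\sqrt{\alpha_t}}.
$$

Subtracting $x_t$ from both sides turns this into a finite difference in $t$, i.e. an Euler step of an ODE whose direction at $x_t$ is determined by $\epsilon_\theta(x_t, t, z)$. Taking the same Euler step with the opposite sign of $\Delta t$ gives the Eq 8-style inversion step

$$
x_{t+1} = \sqrt{\alpha_{t+1}}\, f_\theta(x_t, t, z) + \sqrt{1-\alpha_{t+1}}\,\epsilon_\theta(x_t, t, z),
$$

and the error from evaluating $\epsilon_\theta$ at $x_t$ rather than $x_{t+1}$ vanishes as $\Delta t \rightarrow 0$.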

> More specifically, ϵθ is used to predict the noise that already exists in x_t, so why does the stochastic encoder use the noise that ϵθ predicts to currently exist in order to map the picture to latent space?

I'm not clear on this question. In general, the stochastic encoder turns an image $x_0$ into a specific noise map $x_T$ such that rendering that noise map gives back the same initial image. It's fitting that the stochastic encoder incrementally turns $x_0$ into $\epsilon$ (which is what $x_T$ is).
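If it helps, here is a minimal sketch of that incremental inversion loop in PyTorch. Note that `eps_model`, `alphas_cumprod`, and the exact indexing are hypothetical placeholders chosen for illustration, not the actual diffae API:

```python
import torch

@torch.no_grad()
def stochastic_encode(eps_model, x0, z_sem, alphas_cumprod):
    """Map an image x0 to its noise map x_T by running the
    deterministic DDIM update forward in time (Eq 8-style).

    eps_model:       noise predictor eps_theta(x_t, t, z) (hypothetical signature)
    x0:              image tensor, shape (B, C, H, W)
    z_sem:           semantic latent from the semantic encoder
    alphas_cumprod:  1-D tensor of cumulative alpha_t values, length T
    """
    T = len(alphas_cumprod)
    x = x0
    for t in range(T - 1):
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t + 1]
        eps = eps_model(x, t, z_sem)                   # noise predicted at x_t
        f = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # predicted x_0
        # The same direction eps is reused to step *forward* from x_t to x_{t+1}:
        x = a_next.sqrt() * f + (1 - a_next).sqrt() * eps
    return x  # x_T: the stochastic latent
```

Running the deterministic decoder (Eq 1) backward from the returned $x_T$ retraces the same trajectory, which is why it reconstructs $x_0$ up to discretization error.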

chuchen2017 commented 1 year ago

Thanks for your reply! I understand what the stochastic encoder is trying to do in this model. But I'm still confused by the argument that the direction of change from x_t to x_{t+1} is the same as that from x_{t-1} to x_t. Could you provide a mathematical proof to illustrate the process? Or could you point me to other papers that use the same process, which you might have referred to while doing your work? I am deeply grateful for your help!