NVlabs / edm

Elucidating the Design Space of Diffusion-Based Generative Models (EDM)

Zero initialization of convolutions #3

Open nicolas-dufour opened 1 year ago

nicolas-dufour commented 1 year ago

Hi, I have observed that the code carefully initializes certain convolutions to zero. Do you have a reference for this kind of design decision?
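
For concreteness, this is the pattern I mean, as a minimal PyTorch sketch (class and variable names are mine, not the repo's):

```python
import torch.nn as nn

class ResBlock(nn.Module):
    # Residual block whose pre-residual conv starts at zero, so the
    # block initially computes the identity: x + 0.
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        nn.init.zeros_(self.conv2.weight)  # zero-init the conv before the residual add
        nn.init.zeros_(self.conv2.bias)

    def forward(self, x):
        h = self.conv1(x).relu()
        return x + self.conv2(h)
```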

Thanks!

FutureXiang commented 1 year ago

Hi, I am also confused about the weight initialization across the different implementations.

Each implementation has its own initialization style

- In the official DDPM repo, the convs before the residual connections and the final conv are initialized with zeros, while the other convs use zero-mean uniform distributions.
- In the ADM guided-diffusion repo, the convs before the residual connections and the final conv are likewise initialized with zeros, while the others use the PyTorch defaults (see the sketch below).
- The Score-Based SDE repo covers both the DDPM-style and NCSN-style initializations.
- This repo (EDM) looks similar to Score-Based SDE, but it still differs from all three codebases mentioned above.
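
For example, the guided-diffusion style boils down to a tiny helper (paraphrased from their `zero_module`; the call site is illustrative):

```python
import torch.nn as nn

def zero_module(module):
    # Paraphrase of guided-diffusion's zero_module(): zero all parameters
    # of a module in-place and return it.
    for p in module.parameters():
        p.detach().zero_()
    return module

# Illustrative call site: the final output conv starts at zero.
out_conv = zero_module(nn.Conv2d(128, 3, 3, padding=1))
```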

My experiments and observations

Recently, I tried to train diffusion models (DDPM, DDIM, EDM, ...) with the original basic UNet (35.7M params) on CIFAR-10. Here are some observations:

Seemingly, the mathematical part of a diffusion model (the training objective plus the sampler) can be decoupled as a standalone component. But the neural network (and its initialization) seems to be strongly coupled with the hyper-parameters (?).
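
To make the first half concrete, here is a minimal sketch of an EDM-style loss that treats the network as a black box (my own simplified paraphrase; class labels and augmentation are omitted, and `edm_loss` is just my name for it):

```python
import torch

def edm_loss(net, images, sigma_data=0.5, P_mean=-1.2, P_std=1.2):
    # EDM-style denoising loss: sample a noise level from a log-normal
    # distribution, corrupt the images, and weight the denoising error.
    # `net(x_noisy, sigma)` can be any denoiser; the loss itself does not
    # depend on how `net` is built or initialized.
    rnd = torch.randn(images.shape[0], 1, 1, 1, device=images.device)
    sigma = (rnd * P_std + P_mean).exp()
    weight = (sigma ** 2 + sigma_data ** 2) / (sigma * sigma_data) ** 2
    noise = torch.randn_like(images) * sigma
    denoised = net(images + noise, sigma)
    return (weight * (denoised - images) ** 2).mean()
```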

I wonder whether this is really the case, and why the initialization / hyper-parameters matter so much.