nicolas-dufour opened 1 year ago

Hi, I have observed that the code carefully initializes certain convolutions with zeros. Do you have any reference for this kind of design decision?

Thanks!
Hi, I am also confused about the weight initialization in different implementations.
In the official DDPM repo, the convs before residual connections and the final conv are initialized with zeros, while the other convs are initialized with zero-mean uniform distributions. In the ADM guided-diffusion repo, the convs before residual connections and the final conv are also initialized with zeros, while the others use the PyTorch default. The Score-Based SDE repo covers both the DDPM-style and NCSN-style initializations. This repo is similar to Score-Based SDE, but it is still different from the three codebases mentioned above.
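For reference, the one pattern all of those codebases share is zero-initializing the conv that feeds each residual connection, so every residual block starts out as the identity. Below is a minimal PyTorch sketch of that pattern; the `zero_module` helper mirrors the one in guided-diffusion, but the block structure is simplified for illustration and is not a verbatim copy of any of the repos:

```python
import torch.nn as nn

def zero_module(module):
    # Zero out all parameters of a module (same trick as guided-diffusion's
    # zero_module): the layer outputs exactly zero at initialization.
    for p in module.parameters():
        nn.init.zeros_(p)
    return module

class ResBlock(nn.Module):
    # Simplified residual block: the conv feeding the skip connection is
    # zero-initialized, so at init the whole block is the identity mapping.
    def __init__(self, channels):  # channels assumed divisible by 8
        super().__init__()
        self.in_layers = nn.Sequential(
            nn.GroupNorm(8, channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.out_layers = nn.Sequential(
            nn.GroupNorm(8, channels),
            nn.SiLU(),
            zero_module(nn.Conv2d(channels, channels, 3, padding=1)),
        )

    def forward(self, x):
        # The residual branch contributes nothing at initialization.
        return x + self.out_layers(self.in_layers(x))
```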
Recently, I tried to train diffusion models (DDPM, DDIM, EDM, ...) with the original basic UNet (35.7M #params) on CIFAR-10. Here are some observations:

- My implementation only trains well with a lower learning rate (1e-4 vs. 2e-4). When I tried the official one (2e-4), the FID result got far worse.
- With the 10e-4 learning rate of the official EDM setup, the FID result also got far worse. To confirm it, I replaced networks.py with mine and ran it with the official EDM code; the FID is still bad.

Seemingly, the mathematical diffusion model (training + sampler) can be decoupled as an individual component, but the neural network model (and its initialization) may be strongly coupled with the hyper-parameters (?).
I wonder if this is really the case, and why the initialization / hyper-parameters matter so much.
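For anyone who wants to probe this coupling, here is a sketch of the DDPM-style variance-scaling uniform init, which could be applied on top of a PyTorch-default-initialized UNet to A/B test the two schemes. The helper names (`ddpm_uniform_init_`, `apply_ddpm_init`) are mine, not from any of the repos; the fan_avg formula follows the variance-scaling initializer used in the DDPM / score-SDE code:

```python
import math
import torch
import torch.nn as nn

def ddpm_uniform_init_(tensor, scale=1.0):
    # Variance-scaling init in "fan_avg" / uniform mode: Var(w) = scale / fan_avg.
    # A scale near 0 recovers the (near-)zero init used for the convs before
    # residual connections and for the final conv.
    receptive = tensor[0][0].numel() if tensor.dim() > 2 else 1
    fan_in = tensor.size(1) * receptive
    fan_out = tensor.size(0) * receptive
    scale = max(scale, 1e-10)  # guard against an exactly-zero scale
    limit = math.sqrt(3.0 * scale / ((fan_in + fan_out) / 2.0))
    with torch.no_grad():
        return tensor.uniform_(-limit, limit)

def apply_ddpm_init(model, zeroed_layers=()):
    # Hypothetical helper: DDPM-style init everywhere, then a near-zero init
    # for the layers the caller tags as "output" layers (e.g. the last conv
    # of each residual block and the final conv of the UNet).
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            ddpm_uniform_init_(m.weight, scale=1e-10 if m in zeroed_layers else 1.0)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
```

Running the same training job twice, once with PyTorch defaults and once after `apply_ddpm_init`, should isolate whether the learning-rate sensitivity comes from the initialization rather than from the diffusion formulation itself.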