crowsonkb / k-diffusion

Karras et al. (2022) diffusion models for PyTorch
MIT License

R2 setup in the paper #96

Closed nicolas-dufour closed 5 months ago

nicolas-dufour commented 5 months ago

Hi,

I'm not sure I get from the hourglass paper what the R2 trainer setup corresponds to. What is changed versus R1? And since it underperforms, why is it kept as the basis for R3 and R4? Wouldn't R1 work better?

Thanks!

stefan-baumann commented 5 months ago

R1 uses the original DiT model with the original trainer, as in the official implementation, with just the VAE removed and the normalization adjusted. This setup has a number of important differences from the trainer setup used for HDiT (discrete vs. continuous time, the sigma distribution, ...). To enable the fairest comparison possible, these should ideally be matched, so we implemented DiT in the same trainer as HDiT, simply putting one of Kat's discrete-time DDPM wrappers around the original implementation (sketched below) and training with the same settings as HDiT, yielding R2. That, however, resulted in a substantially worse model, probably due to multiple compounding reasons: adapting to continuous time while still keeping a conditioning network that expects discrete time, mismatched hyperparameters, and so on.

So we also added a version that uses all the same transformer blocks and conditioning network ("mapping network") as HDiT but matches the structure (blocks, width, depth, ...) of DiT exactly, yielding R3. As this performs basically on par with the original DiT, it's reasonable to assume that R2 performed badly due to a mismatch between the network and the training setup, with R3 enabling comparisons w.r.t. model structure.

Finally, having R3 enables training DiT for a fair comparison with Soft-Min-SNR, yielding R4. There are complex interactions between Soft-Min-SNR and the EDM preconditioner, sigma sampling schedules, etc., so a base setup that matches our HDiT ablations in those respects makes the comparison substantially easier.
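
For reference, a minimal sketch of what such a wrapper might look like, using k-diffusion's `DiscreteEpsDDPMDenoiser`. The DiT forward signature and the `learn_sigma` output handling below are assumptions based on the official DiT implementation, not verbatim from our code:

```python
import torch
from k_diffusion.external import DiscreteEpsDDPMDenoiser

class DiTDenoiser(DiscreteEpsDDPMDenoiser):
    """Sketch: exposes an eps-predicting, discrete-time DiT through
    k-diffusion's continuous-sigma denoiser interface."""

    def __init__(self, dit_model, alphas_cumprod):
        # quantize=True rounds continuous sigmas to the nearest discrete
        # timestep before they reach DiT's timestep embedding.
        super().__init__(dit_model, alphas_cumprod, quantize=True)

    def get_eps(self, x, t, y=None, **kwargs):
        # Assumption: DiT's forward is model(x, t, y); with learn_sigma=True
        # it returns eps and variance stacked on the channel dim, so we
        # keep only the eps half.
        eps, _ = self.inner_model(x, t, y=y).chunk(2, dim=1)
        return eps

# DiT trains with ADM's linear beta schedule over 1000 steps:
betas = torch.linspace(1e-4, 2e-2, 1000)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
# model = DiTDenoiser(dit, alphas_cumprod)  # then train/sample via k-diffusion
```

The wrapper just converts between DiT's eps prediction at discrete timesteps and the D(x, sigma) interface that the k-diffusion trainer and samplers expect.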

nicolas-dufour commented 5 months ago

OK, so if I get it right, the trainer used in HDiT uses the EDM parametrization instead of DDPM? Are there other changes? Is there a section in the paper that describes what is changed in the HDiT trainer?

Thanks!

stefan-baumann commented 5 months ago

HDiT just started from the standard k-diffusion training setup; there weren't deliberate deviations from what DiT used. Right now the paper doesn't have an exhaustive list of the differences, no. The important ones are the EDM preconditioner, continuous vs. discrete time, the way sigmas are sampled and handled during training, and the sampler used during inference. You can also find all the important hyperparameters we used in Table 6 in the appendix:

[Image: screenshot of Table 6 from the appendix, listing the training hyperparameters.]

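For concreteness, here's a condensed sketch of the main training-side differences, following Karras et al. (2022). The `sigma_data` value and the lognormal parameters are illustrative placeholders, not the paper's exact settings:

```python
import torch

def append_dims(x, target_ndim):
    # Right-pad with singleton dims so per-sample sigmas broadcast over images.
    return x[(...,) + (None,) * (target_ndim - x.ndim)]

def edm_scalings(sigma, sigma_data=0.5):
    # EDM preconditioner (Karras et al. 2022): the network sees c_in * x, and
    # its output is scaled by c_out around a skip connection weighted by c_skip.
    c_skip = sigma_data ** 2 / (sigma ** 2 + sigma_data ** 2)
    c_out = sigma * sigma_data / (sigma ** 2 + sigma_data ** 2) ** 0.5
    c_in = 1 / (sigma ** 2 + sigma_data ** 2) ** 0.5
    return c_skip, c_out, c_in

def sample_sigmas(n, loc=-0.5, scale=1.2):
    # Continuous time: draw sigma from a lognormal instead of picking one of
    # 1000 discrete DDPM timesteps (loc/scale here are illustrative).
    return (torch.randn(n) * scale + loc).exp()

def edm_denoise(model, x, sigma, **kwargs):
    # D(x, sigma) = c_skip * x + c_out * F(c_in * x, c_noise), c_noise = ln(sigma)/4
    c_skip, c_out, c_in = [append_dims(c, x.ndim) for c in edm_scalings(sigma)]
    return c_skip * x + c_out * model(c_in * x, sigma.log() / 4, **kwargs)
```

None of these pieces exist in the original DiT trainer, which is exactly the mismatch discussed above.
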
nicolas-dufour commented 5 months ago

Thanks for the clarifications!