omrastogi closed this issue 3 months ago
Thank you for the interest. But I don't quite get which results you can't reproduce: quality, speed, or something else? Could you specify?
Please note that this repo is not a code/model release for DMD1. The SDv1.5 model in DMD2 was trained with guidance scale 1.75, so it is not expected to produce high-quality images. For higher image quality, you should try the SDXL model.
Thank you for the clarification. I had assumed that the SDv1.5 model would match or surpass the results reported in DMD1, since DMD2 is an improvement over it.
I am looking for a one-step SDv1.5 model that can produce good-quality images.
It is an improvement, but the two are not compared at exactly the same setting (they differ in the guidance scale used during training). Unfortunately, the DMD2 repo doesn't include a one-step SDv1.5 model that supports high image quality. To get one, we would need to train a model with a higher guidance scale. We didn't try this, since SDXL is just much better in that high-guidance regime.
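For context, the guidance scale discussed above is the classifier-free guidance weight. A minimal sketch of how it blends the unconditional and conditional model predictions (the function name is illustrative, not from the DMD2 codebase):

```python
def apply_cfg(uncond_pred: float, cond_pred: float, scale: float) -> float:
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by `scale`.

    scale = 1.0 recovers the plain conditional prediction;
    larger scales (e.g. 7.5) push harder toward the prompt.
    """
    return uncond_pred + scale * (cond_pred - uncond_pred)


# At guidance 1.75 (the DMD2 SDv1.5 training setting) the
# extrapolation beyond the conditional prediction is mild,
# which is why quality differs from high-guidance models.
print(apply_cfg(0.0, 1.0, 1.75))  # → 1.75
```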
Got it, thanks for making it clear.
Oh, btw this is amazing work @tianweiy
I am unable to replicate the quality and speed of DMD SDv1.5. I need help reproducing the scores reported in the paper.
Table 4 of "One-step Diffusion with Distribution Matching Distillation"
Settings:
GPU: Quadro RTX 8000
Python: 3.8.19
CUDA: 11.7
torch: 2.0.1
Checkpoint downloaded:
"sdv1.5/laion6.25_sd_baseline_8node_guidance1.75_lr1e-5_seed10_dfake10_from_scratch_fid9.28_checkpoint_model_039000"
Code
Output in 0.7 seconds:
For comparison, LoRA LCM SDv1.5 with 4 steps takes 0.6 seconds under the same settings.
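For a fair latency comparison between the two models, warm-up runs matter (the first call pays compilation and allocation costs), and on GPU you should synchronize the device before reading the clock. A minimal stdlib timing harness, with `generate` standing in for the actual pipeline call (hypothetical name):

```python
import time


def benchmark(fn, warmup: int = 3, iters: int = 10) -> float:
    """Return mean wall-clock seconds per call of `fn`, after warm-up.

    With a CUDA pipeline, call torch.cuda.synchronize() inside `fn`
    (or right before each perf_counter read) so queued kernels are
    included in the measurement.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters


# Example with a stand-in workload; swap in the real one-step call.
def generate():
    sum(range(10_000))


mean_s = benchmark(generate)
print(f"{mean_s:.4f} s/call")
```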