Tencent / HunyuanDiT

Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
https://dit.hunyuan.tencent.com/

The distill_v1.1 model has the same latency as the ema ckpt on A100 #120

Open wzds2015 opened 4 days ago

wzds2015 commented 4 days ago

I checked the log, and pytorch_model_distill.pt is the checkpoint picked up during processing. But the latency is the same as the ema ckpt: 51 s on A100. Is this normal? Is there an argument I haven't set correctly to unlock the faster model?

The quality is a little lower than the ema ckpt, which I would expect from a distilled model. But why does the latency stay the same?
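One way to think about this (a sketch of the arithmetic, not a confirmed diagnosis from this thread): total diffusion sampling time is roughly per-step latency times the number of inference steps, so a step-distilled checkpoint only runs faster if the step count is actually reduced when sampling. The per-step time and step counts below are illustrative assumptions; only the 51 s total on A100 comes from the report.

```python
# Hypothetical model: total sampling latency ~= per-step latency x step count
# (fixed overhead such as text encoding and VAE decode is ignored here).

def total_latency(per_step_s: float, steps: int) -> float:
    """Approximate end-to-end sampling time in seconds."""
    return per_step_s * steps

# Assumed numbers: ~0.51 s per step over 100 steps gives the reported ~51 s.
ema = total_latency(0.51, 100)

# If the distilled checkpoint is sampled with the SAME step count,
# the total latency is unchanged, even though the weights differ:
distill_same_steps = total_latency(0.51, 100)

# A step-distilled model only pays off when the sampler is configured
# with fewer steps (25 here is an arbitrary example):
distill_fewer_steps = total_latency(0.51, 25)
```

Under this reading, the thing to check would be whether the inference-step setting is actually lowered when the distill checkpoint is loaded, rather than only the checkpoint path.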

I downloaded the ckpt from here: https://hf-mirror.com/Tencent-Hunyuan/Distillation-v1.1/tree/main