This PR integrates LoRA optimization into the Stable Diffusion training example, building on the distillation support that is already implemented. Applying LoRA-enhanced distillation yields further improvements: significantly faster inference, lower memory overhead, and a notable 50% decrease in memory consumption prior to distillation.
Our analysis of the generated images confirms that LoRA-enhanced distillation preserves image quality and fidelity to the prompts. For a more detailed analysis, refer to the published paper: LoRA-Enhanced Distillation on Guided Diffusion Models.
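For illustration, here is a minimal sketch of how LoRA adapters could be attached to the student UNet and trained with an output-matching distillation loss. The checkpoint id, LoRA rank, target modules, and the `distill_step` helper are assumptions for illustration only and are not taken from this PR; the actual example code may differ.

```python
# Hypothetical sketch: LoRA adapters on the student UNet during distillation.
# Checkpoint, rank, and target modules are illustrative, not from this PR.
import torch
import torch.nn.functional as F
from diffusers import UNet2DConditionModel
from peft import LoraConfig

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.x checkpoint
teacher = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").eval()
student = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")

# Freeze the base student weights and attach low-rank adapters to the attention
# projections; only the LoRA parameters remain trainable, which is the source
# of the memory savings.
student.requires_grad_(False)
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    lora_dropout=0.0,
)
student.add_adapter(lora_config)

optimizer = torch.optim.AdamW(
    [p for p in student.parameters() if p.requires_grad], lr=1e-4
)

def distill_step(noisy_latents, timesteps, text_embeds):
    """One simplified distillation step: match the student's noise prediction
    to the teacher's on the same noisy latents and conditioning."""
    with torch.no_grad():
        target = teacher(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
    pred = student(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```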
Besides the code, the README file is updated accordingly.