PixArt-alpha / PixArt-sigma

PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation
https://pixart-alpha.github.io/PixArt-sigma-project/
GNU Affero General Public License v3.0

LoRA training from local models #100

GavChap opened this issue 1 month ago

GavChap commented 1 month ago

Hi, is there a way to do LoRA training without having to download T5 and the base model from Hugging Face again? I don't need 3-4 copies of T5 on my computer.

tomudo commented 2 weeks ago

Please add better support for training.

lawrence-cj commented 1 week ago

You can use the caching support built into the diffusers pipeline, i.e. the `cache_dir` argument of `from_pretrained`: https://huggingface.co/docs/diffusers/main/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.cache_dir
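In practice that means pointing the Hugging Face cache at the directory where the weights were first downloaded, so T5 and the base model are reused instead of fetched again. A minimal sketch (the cache path and the Sigma model id below are example values, not something from this thread; adjust them to your setup):

```python
import os

# Reuse one shared Hugging Face cache for all scripts, so T5 and the
# base model are downloaded at most once. Set this before importing
# diffusers/transformers. "/data/hf_cache" is an example path.
os.environ["HF_HOME"] = "/data/hf_cache"

# Equivalently, pass cache_dir explicitly when loading the pipeline
# (requires diffusers installed; commented out so this sketch stays
# runnable without downloading weights):
#
# from diffusers import PixArtSigmaPipeline
# pipe = PixArtSigmaPipeline.from_pretrained(
#     "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
#     cache_dir="/data/hf_cache",
# )
```

If the weights already sit in the default cache (`~/.cache/huggingface`), simply loading from the same model id will also skip the download; `cache_dir` is mainly for keeping a single cache on a different drive or sharing it between projects.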