LAION-AI / ldm-finetune

Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data.
MIT License
182 stars · 19 forks

--num_batches missing argument #20

Open wes-kay opened 2 years ago

wes-kay commented 2 years ago

The great thing about the new update is that I'm able to spin up a free Google Colab to run ongo and erlich with 2 batches, which is great. But having to iterate constantly, loading BERT, kl-f8, and the models each time, takes a while. Is there a memory or performance win if we add back `--num_batches`?
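For context, the benefit of such a flag is amortization: the heavy checkpoints (BERT, the kl-f8 autoencoder, the diffusion model) are loaded once, then sampling runs repeatedly. A minimal sketch of that pattern, using stub function names that are illustrative only and not the repo's actual API:

```python
import argparse

calls = {"load": 0, "sample": 0}

def load_models():
    # stands in for loading bert.pt, kl-f8.pt, and the diffusion checkpoint
    calls["load"] += 1
    return object()

def sample_batch(models, prompt):
    # stands in for one text2image sampling pass
    calls["sample"] += 1
    return f"images for {prompt!r}"

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--text", default="a painting")
    parser.add_argument("--num_batches", type=int, default=1)
    args = parser.parse_args(argv)

    models = load_models()                 # paid once per process
    for _ in range(args.num_batches):
        sample_batch(models, args.text)    # cheap relative to loading

main(["--num_batches", "4"])
```

With `--num_batches 4`, the model-loading cost is paid once instead of four times, which is exactly the per-Colab-session saving the comment is asking for.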