sail-sg / EditAnything

Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM)

Multi-GPU training/inference & training of `StableDiffusionControlNetInpaintMixingPipeline` #33

Open · wuyujack opened 1 year ago

wuyujack commented 1 year ago

Thanks for your great project!

May I know whether we can enable multi-GPU inference? And did you train your inpainting model on multiple GPUs?
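For context, the pattern I have in mind is something like diffusers' standard `accelerate`-based distributed inference sketched below; the model ID and prompts are just placeholders, not anything from this repo:

```python
# Sketch of the diffusers/accelerate distributed-inference pattern
# (placeholder model and prompts; launch with `accelerate launch --num_processes=2 infer.py`).
import torch
from accelerate import PartialState
from diffusers import StableDiffusionPipeline

distributed_state = PartialState()
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe.to(distributed_state.device)  # each process gets its own GPU

prompts = ["a photo of a dog", "a photo of a cat"]
# Each process handles its own slice of the prompt list.
with distributed_state.split_between_processes(prompts) as prompt:
    images = pipe(prompt).images
    for i, image in enumerate(images):
        image.save(f"result_{distributed_state.process_index}_{i}.png")
```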

Could you also share an example training file that works with your class, `StableDiffusionControlNetInpaintMixingPipeline`?

Is the training pipeline similar to the one in the diffusers tutorial for training ControlNet (https://huggingface.co/docs/diffusers/training/controlnet)?

Since I want to customize a model that runs `StableDiffusionControlNetInpaintMixingPipeline` on a specific dataset, it would be much appreciated if you could upload even a rough training file, without adapting the code away from the LAION dataset, so that I can quickly get the main idea of the overall structure of your training pipeline; the dataset preprocessing should be much easier to debug on my end :)

ikuinen commented 1 year ago

An original training script can be found in sam_train_sd21.py. Also, the training pipeline with diffusers should be similar to https://huggingface.co/docs/diffusers/training/controlnet.
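For a rough picture of the structure that the diffusers ControlNet tutorial uses, here is a minimal sketch of the standard training loop from that example; the model ID, the random placeholder batch, and the hyperparameters are illustrative only and are not taken from sam_train_sd21.py. Multi-GPU training would typically be launched with `accelerate launch --multi_gpu` on a script like this.

```python
# Minimal sketch of the standard diffusers ControlNet training loop
# (structure of examples/controlnet/train_controlnet.py; placeholders, not repo code).
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, ControlNetModel, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "stabilityai/stable-diffusion-2-1-base"  # placeholder base model
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# ControlNet is initialized from the UNet and is the only trainable component.
controlnet = ControlNetModel.from_unet(unet).to(device)
vae.requires_grad_(False)
unet.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(controlnet.parameters(), lr=1e-5)

# Placeholder batch: an image, its conditioning map (e.g. a segmentation map), and a caption.
pixel_values = torch.randn(1, 3, 512, 512, device=device)
conditioning = torch.rand(1, 3, 512, 512, device=device)
input_ids = tokenizer(
    ["a photo"], padding="max_length", max_length=tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
).input_ids.to(device)

controlnet.train()
for step in range(1):  # a real run loops over a dataloader for many steps
    # Encode the image to latents and add noise at a random timestep.
    latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=device
    ).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    encoder_hidden_states = text_encoder(input_ids)[0]

    # ControlNet consumes the conditioning image and produces residuals for the UNet.
    down_res, mid_res = controlnet(
        noisy_latents, timesteps,
        encoder_hidden_states=encoder_hidden_states,
        controlnet_cond=conditioning,
        return_dict=False,
    )

    # The frozen UNet predicts the noise, guided by the ControlNet residuals.
    model_pred = unet(
        noisy_latents, timesteps,
        encoder_hidden_states=encoder_hidden_states,
        down_block_additional_residuals=down_res,
        mid_block_additional_residual=mid_res,
    ).sample

    loss = F.mse_loss(model_pred.float(), noise.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```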