Thanks for your great project!
May I know whether multi-GPU inference can be enabled? And did you train your inpainting model on multiple GPUs?
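For reference, here is a minimal sketch of what I mean by multi-GPU inference: a data-parallel setup with accelerate, using the stock diffusers inpainting ControlNet pipeline as a stand-in for your class. The model IDs, prompts, and dummy images are just placeholders; I am assuming the same launch pattern would apply to `StableDiffusionControlNetInpaintMixingPipeline`.

```python
# Data-parallel inference sketch: launch with `accelerate launch --num_processes=<n_gpus> infer.py`
import torch
from PIL import Image
from accelerate import PartialState
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# Placeholder checkpoints -- substitute the ones used by your pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)

state = PartialState()   # one process per GPU
pipe.to(state.device)

# Dummy 512x512 inputs just to keep the sketch self-contained.
image = Image.new("RGB", (512, 512), "white")
mask_image = Image.new("L", (512, 512), 255)       # inpaint the full image
control_image = Image.new("RGB", (512, 512), "black")

prompts = ["a red sofa", "a wooden table", "a potted plant", "a floor lamp"]
# Each process receives its own slice of the prompt list.
with state.split_between_processes(prompts) as my_prompts:
    for i, prompt in enumerate(my_prompts):
        result = pipe(
            prompt,
            image=image,
            mask_image=mask_image,
            control_image=control_image,
            num_inference_steps=20,
        ).images[0]
        result.save(f"out_rank{state.process_index}_{i}.png")
```

Please correct me if your pipeline expects a different way of sharding the work across GPUs.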
Could you also share an example training file compatible with your `StableDiffusionControlNetInpaintMixingPipeline` class? Is the training pipeline similar to the one in the diffusers ControlNet training tutorial (https://huggingface.co/docs/diffusers/training/controlnet)?

Since I want to customize a model that runs `StableDiffusionControlNetInpaintMixingPipeline` on a specific dataset, it would be much appreciated if you could upload even a rough training file, without modifying the code beyond what you already have for the LAION dataset, so that I can quickly get the main idea of the overall structure of your training pipeline; the dataset preprocessing part should be much easier for me to debug on my end :)
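To make the request concrete, here is a rough sketch of the dataset-side preprocessing I expect to adapt, modelled on the `preprocess_train()` step from the diffusers ControlNet tutorial and extended with a mask column for inpainting. The column names (`image`, `conditioning_image`, `mask`) and the dataset path are placeholders for my own data, and I am only guessing that your training script consumes a similar sample layout.

```python
# Placeholder preprocessing pipeline; caption tokenization is omitted for brevity.
from datasets import load_dataset
from torchvision import transforms

resolution = 512
image_transforms = transforms.Compose([
    transforms.Resize(resolution),
    transforms.CenterCrop(resolution),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
conditioning_transforms = transforms.Compose([
    transforms.Resize(resolution),
    transforms.CenterCrop(resolution),
    transforms.ToTensor(),
])

def preprocess_train(examples):
    # Target images, normalized to [-1, 1] as in train_controlnet.py.
    examples["pixel_values"] = [
        image_transforms(img.convert("RGB")) for img in examples["image"]
    ]
    # ControlNet conditioning images, kept in [0, 1].
    examples["conditioning_pixel_values"] = [
        conditioning_transforms(img.convert("RGB")) for img in examples["conditioning_image"]
    ]
    # Extra binary mask channel for the inpainting variant -- this is the
    # part I would debug on my end for my own dataset.
    examples["masks"] = [
        conditioning_transforms(m.convert("L")) for m in examples["mask"]
    ]
    return examples

dataset = load_dataset("imagefolder", data_dir="path/to/my_dataset")  # placeholder path
train_dataset = dataset["train"].with_transform(preprocess_train)
```

Even a rough script that shows how these tensors feed into the mixing pipeline's training loop would be enough for me to take it from there. Thanks again!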