huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Fine tuning SDXL inpainting #4680

Closed by EnricoBeltramo 12 months ago

EnricoBeltramo commented 1 year ago

Is your feature request related to a problem? Please describe.
Training the SDXL refiner with LoRA for inpainting.

Describe the solution you'd like
Is it possible to modify the solution proposed in https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md in order to train the refiner model too (i.e., to fine-tune inpainting models)?
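
For context, one key difference relative to the DreamBooth SDXL script is that an inpainting UNet takes a 9-channel input (noisy latents, a downsampled mask, and masked-image latents) rather than 4, so a training step has to build that concatenation. The rough sketch below only illustrates this point; the model ID, tensor shapes, and random tensors are illustrative assumptions, not the official training code.

```python
# Sketch of the 9-channel inpainting UNet input (not the official training script).
import torch
import torch.nn.functional as F
from diffusers import UNet2DConditionModel

# Assumed checkpoint for illustration; the inpainting UNet reports 9 input channels,
# while the base SDXL UNet reports 4.
unet = UNet2DConditionModel.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", subfolder="unet"
)
print(unet.config.in_channels)

# Inside a training step (shapes assumed for a 1024x1024 image -> 128x128 latents):
noisy_latents = torch.randn(1, 4, 128, 128)          # noised VAE latents of the target image
masked_image_latents = torch.randn(1, 4, 128, 128)   # VAE latents of image * (1 - mask)
mask = torch.rand(1, 1, 1024, 1024)                  # binary mask at pixel resolution
mask_latent = F.interpolate(mask, size=(128, 128))   # downsample mask to latent resolution

# Concatenate along the channel dimension: 4 + 1 + 4 = 9 channels for the UNet.
unet_input = torch.cat([noisy_latents, mask_latent, masked_image_latents], dim=1)
```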

sayakpaul commented 1 year ago

How common is fine-tuning the refiner? I have yet to see promising results from it. Plus, I think it would introduce a two-stage training pipeline, which might be a complicated workflow.

patrickvonplaten commented 1 year ago

@williamberman is working on it I believe

williamberman commented 1 year ago

Yep! Inpainting for SDXL is a WIP here: https://github.com/huggingface/diffusers/pull/4746

abdellah-lamrani-alaoui commented 1 year ago

Thanks a lot for the PR, it will be super useful. Quick question: do you plan on training and releasing an SDXL model fine-tuned for the inpainting task?

williamberman commented 1 year ago

Yeah, we plan on releasing an inpainting fine-tuned SDXL :) The current objective is more to have a working checkpoint than a reproducible training script.

abdellah-lamrani-alaoui commented 1 year ago

That sounds great! @williamberman Do you have an ETA for releasing the model?

williamberman commented 1 year ago

@abdellah-lamrani-alaoui https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 !
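
For reference, a minimal inference sketch for loading this checkpoint with diffusers; the image and mask URLs, prompt, and generation settings below are placeholders rather than an official example.

```python
# Load the released SDXL inpainting checkpoint and run a single inpainting pass.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Hypothetical inputs; replace with your own 1024x1024 image and binary mask
# (white = region to regenerate).
image = load_image("https://example.com/image.png").resize((1024, 1024))
mask = load_image("https://example.com/mask.png").resize((1024, 1024))

result = pipe(
    prompt="a tiger sitting on a park bench",
    image=image,
    mask_image=mask,
    num_inference_steps=20,
    guidance_scale=8.0,
    strength=0.99,  # values below 1.0 preserve some of the original masked content
).images[0]
result.save("inpainted.png")
```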

GitHub1712 commented 1 year ago

Great to see this new inpainting model, could we train it ourselves?

GitHub1712 commented 1 year ago

@abdellah-lamrani-alaoui https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 !

How did you make it? Is there any training code or example?

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

lvZic commented 11 months ago

How did you make it? Is there any training code or example?

lawsonxwl commented 9 months ago

@abdellah-lamrani-alaoui https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1 !

Could you release the training code?

crapthings commented 7 months ago

Are there any scripts available for training today?

sayakpaul commented 7 months ago

https://github.com/huggingface/diffusers/pull/6922

crapthings commented 7 months ago

#6922

This looks really good! Can it be used to train for outpainting? For example, by inverting the mask, or masking out the entire background so that only certain content is kept.
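
A small sketch of the mask-inversion idea mentioned above; whether the training script in that PR consumes such masks directly is not confirmed here, and the file paths are hypothetical.

```python
# Invert an inpainting mask (white = regenerate) to get an outpainting-style mask,
# so the background is regenerated and the masked subject is kept.
from PIL import Image, ImageOps

def invert_mask(mask: Image.Image) -> Image.Image:
    return ImageOps.invert(mask.convert("L"))

mask = Image.open("subject_mask.png")          # hypothetical input path
invert_mask(mask).save("outpaint_mask.png")    # hypothetical output path
```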

tingxueronghua commented 7 months ago

#6922

Could this reproduce similar performance?

tingxueronghua commented 7 months ago

#6922

And are there any scripts for SDXL inpainting?

sayakpaul commented 7 months ago

Cc: @patil-suraj

hzphzp commented 4 months ago

Same question.