huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

diffusers version update to 0.27.0 from 0.20.0, training code seems not work #9575

Open huangjun12 opened 1 week ago

huangjun12 commented 1 week ago

I trained an inpainting model using diffusers 0.20.0, and the trained model works as expected. However, after updating diffusers to 0.27.0, with the training code and all other requirements unchanged, something goes wrong: the training code still runs successfully, but the inference outputs look like noise. Is there anything I should pay attention to in this case?
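
For context, inference is run roughly along these lines; the checkpoint path, prompt, and input images below are placeholders rather than my actual setup:

```python
import torch
from diffusers import StableDiffusionXLInpaintPipeline
from diffusers.utils import load_image

# Load the fine-tuned inpainting checkpoint (placeholder path).
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "path/to/trained-inpainting-checkpoint",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder inputs: the image to edit and its mask (white = region to repaint).
image = load_image("input.png")
mask = load_image("mask.png")

result = pipe(
    prompt="a photo of a cat",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    strength=0.99,
).images[0]
result.save("output.png")
```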

a-r-r-o-w commented 1 week ago

Which training script are you using? Is it a custom script, or is it one from the Diffusers examples/ folder? If it is in Diffusers without custom modifications, it will be easier to look through the commit history to find out what changed between those versions. Have you ensured your training parameters are the same?

Have you also ensured that the inference parameters are the same between both versions? That is, if the inpainting pipeline in question has not changed for inference, the parameters should be exactly the same; and if the pipeline has changed between these two versions, you should adjust the inference parameters accordingly.
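
As a quick sanity check, one option is to dump the pipeline's component configs under each diffusers install and diff them. This is only a sketch; the checkpoint path is a placeholder:

```python
from diffusers import StableDiffusionXLInpaintPipeline

# Load the checkpoint (placeholder path) and print the configs that most often
# affect output quality, so they can be compared across diffusers versions.
pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "path/to/your-inpainting-checkpoint"
)
print(pipe.scheduler.config)           # e.g. prediction_type, timestep_spacing
print(pipe.vae.config.scaling_factor)  # latent scaling factor
print(pipe.unet.config.in_channels)    # 9 for an inpainting UNet
```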

Can you provide more details about your environment?

huangjun12 commented 1 week ago

I am using a custom training script written with reference to the StableDiffusionXLInpaintPipeline implementation. All of the code and the environment remain the same except for the diffusers version.

```
# environment
torch==2.0.1+cu117
datasets==3.0.0
accelerate==0.30.1
transformers==4.44.2
diffusers==0.27.0  # origin: 0.20.0
bitsandbytes==0.43.3
```

a-r-r-o-w commented 1 week ago

Do you see noisy outputs if you try diffusers==0.26.0 or diffusers==0.25.0? Could you try to find the first release after which you get bad outputs? That way, it will be easier to see whether a change in the pipelines caused the issue.
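
Something along these lines could speed up that bisection. It assumes a hypothetical run_inference.py that fixes the seed and all inference parameters, and reruns it in a fresh process after installing each release:

```python
import subprocess
import sys

# Reinstall each candidate diffusers release and rerun the same inference
# script in a fresh process, saving one output image per version to compare.
for version in ["0.21.0", "0.22.0", "0.23.0", "0.24.0", "0.25.0", "0.26.0", "0.27.0"]:
    subprocess.run(
        [sys.executable, "-m", "pip", "install", f"diffusers=={version}"],
        check=True,
    )
    subprocess.run(
        [sys.executable, "run_inference.py", "--output", f"out_{version}.png"],
        check=True,
    )
```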