huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Diffusers model not working as well as repo ckpt model #9140

Open kunalkathare opened 3 months ago

kunalkathare commented 3 months ago

Hi, when I run the Stable Diffusion v1-5 or InstructPix2Pix models through the diffusers pipeline with .from_pretrained() (which downloads the models from Hugging Face) and use the inference code given on Hugging Face, the results are not good at all: there is still visible noise in the generated images.
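
For reference, the loading path I'm describing is roughly the standard example from the model pages; a minimal sketch (the model id, prompt, and input image below are placeholders rather than my exact setup):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Standard .from_pretrained() path: downloads the weights from the Hub.
# The model id, input URL, and prompt are placeholders for illustration.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://example.com/input.png")  # placeholder input image
result = pipe("make it look like winter", image=image).images[0]
result.save("output.png")
```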

But when I run these models with the code from their GitHub repos and the .ckpt checkpoints they provide, the outputs are very good.

Is there any solution to this, or any other way to use the diffusers library pipeline?

Also, diffusers.StableDiffusionInstructPix2PixPipeline does not have a .from_single_file() option.

Thank you

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

a-r-r-o-w commented 2 weeks ago

cc @sayakpaul

asomoza commented 2 weeks ago

Hi, we need a minimal code snippet that reproduces the issue and, ideally, some demo images showing the problem. There shouldn't be any difference between the original implementation and the diffusers one; I've used SD 1.5 a lot and never saw a difference between them, so this is probably an issue with the generation parameters.
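
For example, something along these lines is what I'd expect to give clean SD 1.5 results; just a sketch with typical default parameters (the prompt and values are generic starting points, not taken from your setup):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The parameters that most often explain quality gaps: too few inference
# steps or a mismatched guidance scale will leave visible noise.
generator = torch.Generator("cuda").manual_seed(0)
image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=50,  # original repos typically default to ~50 steps
    guidance_scale=7.5,      # typical classifier-free guidance for SD 1.5
    generator=generator,
).images[0]
image.save("sd15_test.png")
```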

sayakpaul commented 2 weeks ago

from_single_file() is a separate thing, and we really haven't had enough requests about it from the community, so I am not sure it's worth adding. Cc: @DN6
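
For pipelines that do support the loader, usage looks roughly like this; a sketch assuming a local .safetensors checkpoint (the path below is a placeholder):

```python
from diffusers import StableDiffusionPipeline

# Single-file loading of an original-format checkpoint; the path is a
# placeholder, not a real file shipped with diffusers.
pipe = StableDiffusionPipeline.from_single_file(
    "path/to/v1-5-pruned-emaonly.safetensors"
).to("cuda")
```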