huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

TypeError: OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder' #1335

Closed: kamalasubha closed this issue 1 year ago

kamalasubha commented 2 years ago

Describe the bug

Hi, I tried ONNX Runtime for inference. The code is:

from diffusers import StableDiffusionOnnxPipeline

pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    use_auth_token=True,
)
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

The error is:

Traceback (most recent call last):
  File "Python-work\stable_diffuser_onnx_compvix.py", line 3, in <module>
    pipe = StableDiffusionOnnxPipeline.from_pretrained(
  File "onnx-virtual\lib\site-packages\diffusers\pipeline_utils.py", line 647, in from_pretrained
    model = pipeline_class(**init_kwargs)
  File "onnx-virtual\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 272, in __init__
    super().__init__(
TypeError: OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder'

Kindly help me to fix this error

Reproduction

I tried this in a virtual environment with Python 3.10.

Logs

No response

System Info

Environment: Windows, Python 3.10

averad commented 2 years ago

I use the Onnx pipeline with an AMD card, following the steps here.

I replaced provider="DmlExecutionProvider" with provider="CPUExecutionProvider" and was able to generate an image using my CPU.
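For reference, a minimal sketch of that swap, using the same model and pipeline as in the steps above:

from diffusers import OnnxStableDiffusionPipeline

# Swap "DmlExecutionProvider" (DirectML / AMD) for "CPUExecutionProvider" to run on the CPU.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
)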

Note: You will need to go to https://huggingface.co/CompVis/stable-diffusion-v1-4 and accept the model use terms before attempting to use the model or download it.

MCRusher commented 2 years ago

Yeah, I had to provide it explicitly with:

encoder = OnnxRuntimeModel.from_pretrained(model / "vae_encoder", provider=provider, sess_options=so)

Not sure why it doesn't automatically pull it in like it does the other components.
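A rough sketch of the full workaround (assuming model is a local Path to the onnx model folder and so is an onnxruntime SessionOptions; handing the pre-built vae_encoder to from_pretrained is one way to wire it in, not necessarily the only one):

from pathlib import Path
import onnxruntime as ort
from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline

model = Path("./stable_diffusion_onnx")  # hypothetical local onnx export
provider = "CPUExecutionProvider"
so = ort.SessionOptions()

# Load the vae_encoder explicitly, since from_pretrained did not pull it in on its own.
encoder = OnnxRuntimeModel.from_pretrained(model / "vae_encoder", provider=provider, sess_options=so)

# Pass the pre-built component so the pipeline's __init__ receives its vae_encoder argument.
pipe = OnnxStableDiffusionPipeline.from_pretrained(model, provider=provider, sess_options=so, vae_encoder=encoder)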

averad commented 2 years ago

If you have time, can you run diffusers-cli env and post the output?

Example Output:

kamalasubha commented 2 years ago

Hi, please find the output:

averad commented 2 years ago

@kamalasubha glad to hear you have a workaround. I was unable to duplicate the reported issue. It's possible the model was an older copy, or it didn't correctly download the model_index.json.

Here are the steps I completed to attempt to replicate the issue (All commands entered in Windows CMD prompt):

  1. pip install virtualenv
  2. python -m venv sd_env
  3. sd_env\scripts\activate
  4. python -m pip install --upgrade pip
  5. pip install diffusers transformers onnxruntime onnx torch ftfy spacy scipy
  6. Accepted the Model License Agreement
  7. Ran the following code to generate an image:
from diffusers import OnnxStableDiffusionPipeline
height=512
width=512
num_inference_steps=50
guidance_scale=7.5
prompt = "a photo of an astronaut riding a horse on mars"
negative_prompt="bad hands, blurry"
pipe = OnnxStableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="onnx", provider="CPUExecutionProvider")
image = pipe(prompt, height, width, num_inference_steps, guidance_scale, negative_prompt).images[0] 
image.save("astronaut_rides_horse.png")

output:

System Info:

patrickvonplaten commented 2 years ago

cc @anton-l

Justanetizen commented 2 years ago

I have the same problem if I try to use the .ckpt file downloaded from wd-v1-3-full.ckpt and converted to onnx with the Convert Original Stable Diffusion to Diffusers script. HOWEVER, if I instead use the Convert Stable Diffusion Checkpoint to Onnx script and run e.g. python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="hakurei/waifu-diffusion" --output_path="./waifu_diffusion_onnx", then it works perfectly.

I noticed that the first script only creates a single "vae" folder, while the second script creates two separate folders, "vae_encoder" and "vae_decoder".

averad commented 2 years ago

@Justanetizen The Convert Original Stable Diffusion to Diffusers script doesn't convert ckpt files to Onnx.

To convert a ckpt file to onnx:

  1. Run Convert Original Stable Diffusion to Diffusers script on the model ckpt file.
  2. Run Convert Stable Diffusion Checkpoint to Onnx script on the resulting diffusers model folder.

Example (eldenring-v2-pruned.ckpt):

python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./eldenring-v2-pruned.ckpt" --dump_path="./eldenring_v2_pruned_diffusers"
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./eldenring_v2_pruned_diffusers" --output_path="./eldenring_v2_pruned_onnx"
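Once converted, the onnx folder can be loaded straight from disk. A quick sketch using the output path from the example above (no revision argument is needed for a local copy):

from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained("./eldenring_v2_pruned_onnx", provider="CPUExecutionProvider")
image = pipe("your prompt here").images[0]
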
GreenLandisaLie commented 2 years ago

StableDiffusionOnnxPipeline is deprecated; you should use OnnxStableDiffusionPipeline instead. Also, I'm unable to replicate this issue using an updated version of diffusers, so maybe it has already been solved? It might also be worth deleting the model cache stored in ...{USER}\.cache\huggingface\diffusers\models--CompVis--stable-diffusion-v1-4 and re-running the script to grab the whole thing again. vae_encoder was not an argument in earlier versions of the onnx pipeline (I think), so it's possible the pipe doesn't try to download a missing vae_encoder, which would indeed be a compatibility bug.
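As an alternative to deleting the cache by hand, forcing a fresh download might also work; just a sketch, since force_download is a standard from_pretrained download option rather than something tested against this particular bug:

from diffusers import OnnxStableDiffusionPipeline

# force_download should re-fetch every file, ignoring a possibly stale cached copy.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",
    provider="CPUExecutionProvider",
    force_download=True,
)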

kamalasubha commented 2 years ago

@averad I was able to follow the same steps and generate the image, thanks. But the model gets downloaded every time. To avoid that, I downloaded the model and converted it to onnx as per your comments:

python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./eldenring-v2-pruned.ckpt" --dump_path="./eldenring_v2_pruned_diffusers"
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./eldenring_v2_pruned_diffusers" --output_path="./eldenring_v2_pruned_onnx"

When I then load the converted model with, pipe = OnnxStableDiffusionPipeline.from_pretrained("./model_onnx", revision="onnx", provider="CPUExecutionProvider"), it throws an error like:

return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int64)) , expected: (tensor(float))

@averad @GreenLandisaLie @Justanetizen Can you please help on the same?

GreenLandisaLie commented 2 years ago

@kamalasubha Don't use revision="onnx" - you are already calling an onnx pipeline! You might also want to install ort nightly directml and use DmlExecutionProvider instead of CPUExecutionProvider. It works on my RX 560 4G, although it uses RAM as shared memory (which slows things down quite a bit) to compensate for the lack of VRAM - it's still 5 times faster than my CPU. If your GPU has 4G or more, you definitely should use DmlExecutionProvider. Take a look at this issue I opened, as it contains the link for it as well as a fix to a possible problem you might encounter (so far I seem to be the only one, though) - don't forget you must grab the version specific to your Python version.

EDIT: Forgot to ask - what scheduler are you using? I find that the default one doesn't work well for custom models; DDIM works great for them. For reference, here is how I'm doing it:

from diffusers import DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler

DDIM = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000, clip_sample=False, set_alpha_to_one=False)
LMSD = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
EULER_A = EulerAncestralDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)

Then you assign them like so: pipe.scheduler = DDIM

EDIT2: Somehow noticed this right before I was about to leave: you are trying to load the model 'model_onnx', even though the converted output was 'eldenring_v2_pruned_onnx' - is that a mistake?

anton-l commented 2 years ago

@kamalasubha this issue should be fixed in diffusers>=0.8. Try installing the latest version, then this will work:

from diffusers import StableDiffusionOnnxPipeline 

pipe = StableDiffusionOnnxPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="onnx", provider="CPUExecutionProvider")
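Generation then works as in the original snippet, e.g.:

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")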