lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: When working with Onnx/Olive -> TypeError: 'OnnxRawPipeline' object is not callable #453

Closed: Gonzalo1987 closed this issue 1 month ago

Gonzalo1987 commented 2 months ago

What happened?

I can't generate images using Olive since the February refactoring. I've been waiting for it to be fixed in a commit, but that doesn't seem to have happened.

Other people have reported the same bug in: https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/149 https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/149#discussioncomment-8642882

I haven't been able to use Olive since the February refactor. I was generating images with Olive at ~8 it/s; without Olive I'm now at ~1.15 it/s.

I can run any tests needed to help fix the problem.
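For reference, Python raises this exact TypeError whenever an instance is called but its class defines no `__call__` method. A minimal, self-contained sketch; the class below is a stand-in to illustrate the error, not the webui's actual OnnxRawPipeline:

```python
# Stand-in class: like the pipeline object in the traceback, it defines
# no __call__ method, so its instances cannot be called like a function.
class OnnxRawPipeline:
    def __init__(self, model_path):
        self.model_path = model_path

pipe = OnnxRawPipeline("model.onnx")
try:
    pipe(prompt="a cat")  # generation code calls the pipeline like this
except TypeError as err:
    print(err)  # 'OnnxRawPipeline' object is not callable
```

This suggests the raw (unconverted) pipeline object is being handed to generation code that expects a callable pipeline.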

Steps to reproduce the problem

  1. git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml
  2. cd ./stable-diffusion-webui-directml
  3. Start webui (with --use-directml).
  4. Go to the ONNX Runtime tab in Settings.
  5. Check "Use ONNX Runtime instead of PyTorch implementation".
  6. Change Execution Provider to "DmlExecutionProvider".
  7. Check everything under "Olive models to process".
  8. Check "Enable Olive".
  9. .\venv\Scripts\activate
  10. pip uninstall torch torchvision torch-directml -y
  11. pip install onnxruntime-directml
  12. Start webui (with --use-directml).
  13. Check DmlExecutionProvider.
  14. Go to the txt2img tab and generate.

What should have happened?

It should generate the images using Olive.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-04-24-18-57.json

Console logs

https://pastebin.com/MNQzzD3Y  -- first run
https://pastebin.com/Y2c23Dfw  -- reload ui
https://pastebin.com/VysnbEFS  -- with extra instructions

Additional information

It worked with the Olive UI some months ago.

FauxPrada commented 2 months ago

I don't think you need --use-directml when using ONNX/Olive; here is what I use:

--skip-torch-cuda-test --use-cpu-torch --opt-sdp-attention --disable-nan-check

You might also need to remove any caches, like the venv cache and ~/.cache.

Gonzalo1987 commented 2 months ago

> I don't think you need --use-directml when using ONNX/Olive; here is what I use:
>
> --skip-torch-cuda-test --use-cpu-torch --opt-sdp-attention --disable-nan-check
>
> You might also need to remove any caches, like the venv cache and ~/.cache.

Sadly, I got the same error (OnnxRawPipeline is not callable). :(

"--use-cpu-torch" is for using the CPU instead of the GPU, isn't it?

Thanks!

lshqqytiger commented 1 month ago

Please pull the latest commits. You should have the latest stable release of PyTorch:

.\venv\Scripts\activate
pip uninstall torch torchvision torch-directml onnxruntime onnxruntime-directml -y
pip install torch torchvision onnxruntime
pip install onnxruntime-directml

Then, run webui with --use-cpu-torch.
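After reinstalling, a quick sanity check can confirm that the DirectML execution provider is visible to ONNX Runtime. This is a sketch assuming onnxruntime-directml was installed per the commands above; `get_available_providers` is a real onnxruntime API:

```python
import importlib.util

# Guarded import so the snippet also runs where onnxruntime is absent.
if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime

    providers = onnxruntime.get_available_providers()
    # With onnxruntime-directml installed on Windows, this list should
    # include "DmlExecutionProvider" alongside "CPUExecutionProvider".
    print("DmlExecutionProvider available:",
          "DmlExecutionProvider" in providers)
else:
    print("onnxruntime is not installed in this environment")
```

If "DmlExecutionProvider" is missing, the onnxruntime-directml wheel likely did not install cleanly, and repeating the uninstall/install steps above is worth a try.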

Gonzalo1987 commented 1 month ago

It's working like a charm, thank you very much for your efforts!!