Closed Gonzalo1987 closed 1 month ago
I don't think you need --use-directml
when using ONNX / OLive, here is what I use:
--skip-torch-cuda-test --use-cpu-torch --opt-sdp-attention --disable-nan-check
You might also need to remove any caches, like the venv cache and ~/.cache
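Putting the steps above together, a minimal sketch might look like the following. The flags are the ones listed above; the paths (webui.sh, ~/.cache subdirectories) are assumptions for a default Linux-style install and may differ on your setup:

```shell
# Hedged sketch — adjust paths to your install.
# 1. Remove cached environments so stale DirectML/ONNX packages don't linger
rm -rf venv ~/.cache/pip ~/.cache/huggingface

# 2. Relaunch with the suggested flags (venv is rebuilt on first start)
./webui.sh --skip-torch-cuda-test --use-cpu-torch --opt-sdp-attention --disable-nan-check
```

Deleting the venv forces a clean dependency install on the next launch, which is usually the point of this step.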
Sadly, I got the same error (OnnxRawPipeline is not callable) :(
--use-cpu-torch makes it use the CPU instead of the GPU, doesn't it?
Thanks!
Please pull the latest commits. You should have the latest stable release of PyTorch.
.\venv\Scripts\activate
pip uninstall torch torchvision torch-directml onnxruntime onnxruntime-directml -y
pip install torch torchvision onnxruntime
pip install onnxruntime-directml
Then, run the webui with --use-cpu-torch
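On Windows installs of this webui, the usual place to put launch flags is webui-user.bat rather than typing them each time. A minimal sketch, assuming the default install layout (the file and variable name follow the stock webui-user.bat template):

```shell
REM webui-user.bat fragment — hedged sketch, assumes default layout
set COMMANDLINE_ARGS=--use-cpu-torch
call webui.bat
```

Double-clicking webui-user.bat then launches the UI with the flag applied every time.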
It's working like a charm, thank you very much for your efforts!!
Checklist
What happened?
I haven't been able to generate images using Olive since the February refactoring. I've been waiting for it to be fixed in a commit, but that doesn't seem to have happened.
Other people report the same bug in: https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/149 https://github.com/lshqqytiger/stable-diffusion-webui-directml/discussions/149#discussioncomment-8642882
Before the refactor I was generating images with Olive at ~8 it/s; now, without Olive, I'm at ~1.15 it/s.
I can run any tests that are necessary to help track down the problem.
Steps to reproduce the problem
What should have happened?
It should generate the images using Olive.
What browsers do you use to access the UI?
Mozilla Firefox
Sysinfo
sysinfo-2024-04-24-18-57.json
Console logs
Additional information
It worked with the Olive UI some months ago.