Closed: HackdaNorth closed this issue 7 months ago
I believe the fix has been applied in the dev branch. I am reattempting with the dev branch.
I recently found a Discord post in support-forum outlining the fix; my apologies.
The dev branch solved the issue. I also had to untick "Full quality" in the Advanced tab; other than that, it is fully functional.
Issue Description
On a fresh install of the latest main branch, I am encountering issues with the ONNX Runtime & Olive implementation.
I have attempted many different things; I have used dreamshaper_8 and a few other models, all with the same result. This was working about a week and a half ago. It stopped recently after I attempted to upgrade torchvision using
pip install torch torchvision --upgrade
which failed, so I reinstalled and get the same error each time.
Clone a fresh install from the main branch, https://github.com/vladmandic/automatic.git:
git clone https://github.com/vladmandic/automatic.git
Start with
webui.bat --debug
Wait for it to boot, then follow #first-time-setup, then shut down the server,
go to the console and run:
.\venv\Scripts\activate
pip uninstall torch-directml
pip install torch torchvision --upgrade
pip install onnxruntime-directml
.\webui.bat
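As an aside, a quick way to verify from inside the activated venv that the DirectML build of ONNX Runtime is the one actually installed. This is just a minimal check script of my own, not part of SD.Next:

# Minimal sanity check, run inside the activated venv: confirms which
# onnxruntime build is installed and whether the DirectML execution
# provider is actually exposed.
import onnxruntime as ort

print("onnxruntime version:", ort.__version__)

providers = ort.get_available_providers()
print("available providers:", providers)

# If this assertion fails, the plain 'onnxruntime' package is probably
# shadowing 'onnxruntime-directml' and should be uninstalled first.
assert "DmlExecutionProvider" in providers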
On first boot I followed the ONNX-Runtime-&-Olive wiki guide:
Change Execution backend to diffusers and Diffusers pipeline to ONNX Stable Diffusion on the System tab.
Then I set Execution Provider to DmlExecutionProvider (see the sketch after these steps for what that setting maps to at the ONNX Runtime level) and ticked the following:
Go to System tab → Compute Settings.
Select Model, Text Encoder and VAE in Compile Model.
Set Model compile backend to olive-ai.
Apply and Shutdown SDNext.
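As an aside (my own understanding, not from the wiki): the Execution Provider setting above corresponds, at the ONNX Runtime level, to the provider list passed when an inference session is created. A minimal stand-alone sketch with a placeholder model path, independent of SD.Next:

# Hypothetical stand-alone example: "model.onnx" is a placeholder path,
# not a file SD.Next produces under that name.
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)
# The first provider in the list that is actually available gets used.
print("active providers:", session.get_providers())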
Copy dreamshaper_631vaebaked.safetensors into the stable diffusion directory.
Attempt Txt2Img image generation.
ERROR Failed to load diffusers model
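To help isolate whether the failure is in SD.Next's loader or in the ONNX diffusers pipeline itself, here is a minimal stand-alone repro sketch using Hugging Face Optimum. This is an assumption on my part: SD.Next's ONNX path is built on similar pieces, but its actual loader differs, and the model ID below is just an example SD 1.5 repo.

# Stand-alone check: load an SD 1.5 pipeline through ONNX Runtime on
# DirectML, outside of SD.Next. Requires the 'optimum' package plus the
# onnxruntime-directml already installed above.
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # example repo; any SD 1.5 diffusers model should work
pipe = ORTStableDiffusionPipeline.from_pretrained(
    model_id,
    export=True,                      # convert the PyTorch weights to ONNX on the fly
    provider="DmlExecutionProvider",  # same provider selected in the SD.Next UI
)
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("onnx_dml_test.png")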
Version Platform Description
Relevant log output
FULL LOG
Continuation of the log during image generation.
Backend: Diffusers
Branch: Master
Model: SD 1.5
Acknowledgements