Closed: Lialothedestroyer closed this issue 10 months ago
Did you run pip with virtual environment activated?
I actually got it to work on the latest master commit by changing requirements_onnx.txt to the list below and deleting the venv folder:
transformers
accelerate>=0.20.3
diffusers
onnx
invisible-watermark
optimum
But now I'm facing another issue with --olive --backend directml --autolaunch.
You can find a solution in the FAQ: #149
If you want to run Olive: pip install torch==1.13.1 torchvision==0.14.1 torch-directml==0.1.13.1.dev230413
Updating requirements_onnx.txt with "accelerate>=0.20.3" seems to have fixed it. I had already run pip install torch==1.13.1 torchvision==0.14.1 torch-directml==0.1.13.1.dev230413 beforehand, and now it launches every time without fail.
Fixed in 45d7cc11987f266b45072616e981c9c79423f5e8
Is there an existing issue for this?
What happened?
Attempted to run with the --onnx argument in order to use an Olive-optimized model; instead, I get an import error for accelerate.
Steps to reproduce the problem
What should have happened?
The program should have launched with the ability to use ONNX models.
Version or Commit where the problem happens
1.5.1
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
AMD GPUs
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
No
Console logs
Additional information
Still occurs even after running pip install -U transformers and pip install -U accelerate in an attempt to update the versions; everything shows accelerate at version 0.21.0.
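For anyone hitting the same mismatch, a quick way to confirm what the active environment actually resolves is a stdlib-only version check. This is a minimal sketch (the package name and minimum version are the ones from this thread; the helper names are made up for illustration):

```python
# Minimal sketch: confirm the active environment satisfies the
# "accelerate>=0.20.3" pin from this thread. Stdlib only (Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

def vparts(v: str) -> list:
    """Split a dotted version string into comparable integers: '0.20.3' -> [0, 20, 3]."""
    out = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        out.append(int(digits) if digits else 0)
    return out

def check_min_version(package: str, minimum: str) -> bool:
    """True if `package` is installed in this environment at or above `minimum`."""
    try:
        return vparts(version(package)) >= vparts(minimum)
    except PackageNotFoundError:
        # Not installed at all in whatever interpreter is running this.
        return False

# e.g. check_min_version("accelerate", "0.20.3")
```

Note that what matters is running this inside the webui's venv rather than the system Python: an ImportError at launch usually means the venv's copy is missing or stale even when a system-wide install reports 0.21.0.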