Closed: TimmekHW closed this issue 10 months ago
I deleted the venv folder several times; it did not help. I also changed it through ui-config, but it still uses InvokeAI (slow).
All existing optimization methods are meaningless with --olive.
The log will say you are using InvokeAI, but you are actually using the default attention processor of OnnxStableDiffusionPipeline.
I will add an attention processor option in settings.
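The mismatch described above (the log reports the selected optimization, but the ONNX pipeline keeps its own default) can be sketched as a simple dispatch. This is an illustrative sketch only, not the actual webui code; the function and registry names are hypothetical, and the optimization names are taken from the issue text.

```python
# Hypothetical sketch of how a webui-style setting maps to the attention
# processor that is actually used at runtime. Names are illustrative,
# not the real stable-diffusion-webui-directml API.

ATTENTION_OPTIMIZATIONS = {
    "Automatic": "auto",
    "InvokeAI": "invokeai",
    "spd-mem": "sub_quadratic",
}

def select_attention_processor(setting: str, is_onnx_pipeline: bool) -> str:
    """Return the attention processor effectively used at runtime.

    With an ONNX/Olive pipeline the PyTorch-level optimizations cannot be
    injected, so the pipeline falls back to its built-in default even
    though the log still prints the selected setting.
    """
    if is_onnx_pipeline:
        # OnnxStableDiffusionPipeline ignores the setting entirely
        return "onnx_default"
    return ATTENTION_OPTIMIZATIONS.get(setting, "auto")

# The log reports "InvokeAI", but the effective processor differs:
print(select_attention_processor("InvokeAI", is_onnx_pipeline=True))   # onnx_default
print(select_attention_processor("spd-mem", is_onnx_pipeline=False))   # sub_quadratic
```

This is why changing the Cross attention optimization dropdown appears to have no effect with --onnx or --olive: the selection is recorded and logged, but never reaches the ONNX pipeline.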
Is there an existing issue for this?
What happened?
I can't change the Cross attention optimization. In A1111 and the DirectML branch, without Olive, "spd-mem" is the fastest. With --onnx or --olive I can't change this value.
Steps to reproduce the problem
What should have happened?
The selected Cross attention optimization (e.g. "spd-mem") should take effect even when launching with --onnx or --olive.
Version or Commit where the problem happens
Version: 1.5.2
What Python version are you running on ?
Python 3.10.x
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
NVIDIA GPUs, AMD GPUs
Cross attention optimization
Automatic
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
No
Console logs
Additional information
No response