openvinotoolkit / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Gives lots of errors when I try and generate an image #62

Closed (pjsg closed this issue 8 months ago)

pjsg commented 8 months ago

Is there an existing issue for this?

What happened?

I followed the installation instructions to install the OpenVINO flavor of the webui on a clean Ubuntu 22.04 LTS system (I used a clean server install to make sure it wasn't an incompatibility with something I had installed before). This is an 8-core Intel box with 32 GB of RAM.

The only change that I made was to add --listen to the command line options so I could access the server remotely.
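For anyone reproducing this, one way to pass that flag (a sketch, assuming the stock A1111 webui-user.sh launcher is used rather than a custom wrapper script like the runui.sh below):

# in stable-diffusion-webui/webui-user.sh
export COMMANDLINE_ARGS="--listen"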

When I try to generate an image, I get many copies of the following error reported on the terminal where I started the webui process:

[2023-10-19 01:44:39,199] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments

This looks like some kind of version skew between something and something else, but I don't really know where to even start looking. My generation webpage is shown below.

(two screenshots of the generation page attached)

Steps to reproduce the problem

  1. Install a clean Ubuntu 22.04 server
  2. Install following https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon
  3. Enable the 'Accelerate with OpenVINO' script and pick v1-inference.yaml
  4. Try to generate a 'beach sunset' image.

What should have happened?

I should have got a plausible image.

Sysinfo

sysinfo-2023-10-19-01-55.txt

What browsers do you use to access the UI?

Google Chrome

Console logs

(sd_env) philip@jupiter:~$ ./runui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on philip user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/philip/sd_env
################################################################

################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Version: 1.6.0
Commit hash: 5d2f2d566a59cea66415d5819cf81e3a41d899bb
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half --listen
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from /mnt/philip/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 21.1s (prepare environment: 0.3s, import torch: 9.8s, import gradio: 1.6s, setup paths: 1.3s, initialize shared: 0.1s, other imports: 2.1s, setup codeformer: 0.3s, load scripts: 3.8s, create ui: 1.5s, gradio launch: 0.5s).
Creating model from config: /mnt/philip/stable-diffusion-webui/configs/v1-inference.yaml
Applying attention optimization: InvokeAI... done.
Model loaded in 10.7s (load weights from disk: 2.9s, create model: 1.6s, apply weights to model: 5.8s, calculate empty prompt: 0.2s).
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
Loading weights [6ce0161689] from /mnt/philip/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
OpenVINO Script:  created model from config : /mnt/philip/stable-diffusion-webui/configs/v1-inference.yaml
/home/philip/sd_env/lib/python3.10/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
  0%|                                                                                                                                                             | 0/20 [00:00<?, ?it/s][2023-10-19 01:44:39,199] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:39,572] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/conv.py <function Conv2d.forward at 0x7fc0c9fbca60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:40,042] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:40,306] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:40,902] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:41,070] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:45,023] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:45,288] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:45,365] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:48,715] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:48,939] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:49,017] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:50,683] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:50,982] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:51,055] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:52,667] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:52,955] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-19 01:44:53,029] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7fc0c9fa3490> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments

Additional information

No response

qiacheng commented 8 months ago

Hi @pjsg, could you check the torch, torchvision, and openvino versions in sd_env with pip list?
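For reference, one quick way to pull out just those versions (a minimal sketch, assuming the sd_env virtual environment is activated):

# filter pip show output down to the package names and versions
python -m pip show torch torchvision openvino | grep -E "^(Name|Version)"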

pjsg commented 8 months ago

@qiacheng Here you go:

openvino==2023.1.0.dev20230811
torch==2.0.1+cu118
torchvision==0.15.2+cu118

And all of them....

absl-py==2.0.0
accelerate==0.21.0
addict==2.4.0
aenum==3.1.15
aiofiles==23.2.1
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
antlr4-python3-runtime==4.9.3
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
basicsr==1.4.2
beautifulsoup4==4.12.2
blendmodes==2022
boltons==23.0.0
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.3.0
clean-fid==0.1.35
click==8.1.7
clip==1.0
cmake==3.27.7
contourpy==1.1.1
cycler==0.12.1
deprecation==2.1.0
diffusers==0.21.1
einops==0.4.1
exceptiongroup==1.1.3
facexlib==0.3.0
fastapi==0.94.0
ffmpy==0.3.1
filelock==3.12.4
filterpy==1.4.5
fonttools==4.43.1
frozenlist==1.4.0
fsspec==2023.9.2
ftfy==6.1.1
future==0.18.3
gdown==4.7.1
gfpgan==1.3.8
gitdb==4.0.10
GitPython==3.1.34
google-auth==2.23.3
google-auth-oauthlib==1.1.0
gradio==3.41.2
gradio_client==0.5.0
grpcio==1.59.0
h11==0.12.0
httpcore==0.15.0
httpx==0.24.1
huggingface-hub==0.18.0
idna==3.4
imageio==2.31.5
importlib-metadata==6.8.0
importlib-resources==6.1.0
inflection==0.5.1
invisible-watermark==0.2.0
Jinja2==3.1.2
jsonmerge==1.8.0
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
kiwisolver==1.4.5
kornia==0.6.7
lark==1.1.2
lazy_loader==0.3
lightning-utilities==0.9.0
lit==17.0.2
llvmlite==0.41.0
lmdb==1.4.1
lpips==0.1.4
Markdown==3.5
MarkupSafe==2.1.3
matplotlib==3.8.0
mpmath==1.3.0
multidict==6.0.4
networkx==3.1
numba==0.58.0
numpy==1.23.5
oauthlib==3.2.2
omegaconf==2.2.3
open-clip-torch==2.20.0
opencv-python==4.8.1.78
openvino==2023.1.0.dev20230811
openvino-telemetry==2023.2.0
orjson==3.9.9
packaging==23.2
pandas==2.1.1
piexif==1.1.3
Pillow==9.5.0
platformdirs==3.11.0
protobuf==3.20.0
psutil==5.9.5
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.13
pydub==0.25.1
pyparsing==3.1.1
PySocks==1.7.1
python-dateutil==2.8.2
python-multipart==0.0.6
pytorch-lightning==1.9.4
pytz==2023.3.post1
PyWavelets==1.4.1
PyYAML==6.0.1
realesrgan==0.3.0
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
resize-right==0.0.2
rpds-py==0.10.6
rsa==4.9
safetensors==0.3.1
scikit-image==0.21.0
scipy==1.11.3
semantic-version==2.10.0
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
soupsieve==2.5
starlette==0.26.1
sympy==1.12
tb-nightly==2.15.0a20231016
tensorboard-data-server==0.7.1
tifffile==2023.9.26
timm==0.9.2
tokenizers==0.13.3
tomesd==0.1.3
tomli==2.0.1
toolz==0.12.0
torch==2.0.1+cu118
torchdiffeq==0.2.3
torchmetrics==1.2.0
torchsde==0.2.5
torchvision==0.15.2+cu118
tqdm==4.66.1
trampoline==0.1.2
transformers==4.30.2
triton==2.0.0
typing_extensions==4.8.0
tzdata==2023.3
urllib3==2.0.6
uvicorn==0.23.2
wcwidth==0.2.8
websockets==11.0.3
Werkzeug==3.0.0
yapf==0.40.2
yarl==1.9.2
zipp==3.17.0

qiacheng commented 8 months ago

@pjsg Looks like torch and torchvision are not installed correctly; you have the cu118 (CUDA) builds. It needs the CPU version of torch. Could you run sd_env\scripts\python.exe -m pip install torch==2.0.1 and try again?
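On a Linux setup like the one in this thread, the equivalent would look roughly like this (a sketch, assuming the sd_env virtual environment is active; the torchvision pin is the version that pairs with torch 2.0.1, and the extra index serves the CPU-only wheels):

# remove the CUDA (cu118) builds first
python -m pip uninstall -y torch torchvision
# reinstall the CPU-only builds from the PyTorch CPU wheel index
python -m pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cpu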

pjsg commented 8 months ago

@qiacheng Thanks for that advice. I did force install torch==2.0.1 (and I also needed to install torchvision==0.15.2 to get the compatible version) and I now have:

open-clip-torch==2.20.0
pytorch-lightning==1.9.4
torch==2.0.1
torchdiffeq==0.2.3
torchmetrics==1.2.0
torchsde==0.2.5
torchvision==0.15.2

But I still get:

(sd_env) philip@jupiter:~$ ./runui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on philip user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/philip/sd_env
################################################################

################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
fatal: No names found, cannot describe anything.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Version: 1.6.0
Commit hash: 5d2f2d566a59cea66415d5819cf81e3a41d899bb
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half --listen
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from /mnt/philip/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 39.3s (prepare environment: 21.2s, import torch: 5.7s, import gradio: 2.0s, setup paths: 2.2s, initialize shared: 0.2s, other imports: 2.0s, setup codeformer: 0.3s, load scripts: 4.3s, create ui: 0.9s, gradio launch: 0.3s).
Creating model from config: /mnt/philip/stable-diffusion-webui/configs/v1-inference.yaml
Applying attention optimization: InvokeAI... done.
Model loaded in 8.9s (load weights from disk: 2.0s, create model: 2.0s, apply weights to model: 4.7s, calculate empty prompt: 0.1s).
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
Loading weights [6ce0161689] from /mnt/philip/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
OpenVINO Script:  created model from config : /mnt/philip/stable-diffusion-webui/configs/v1-inference.yaml
/home/philip/sd_env/lib/python3.10/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
  0%|                                                                                                                                                           | 0/20 [00:00<?, ?it/s][2023-10-20 14:25:20,457] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:20,835] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/conv.py <function Conv2d.forward at 0x7f13a5fe5c60> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:21,080] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:21,276] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:21,965] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:22,147] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:26,705] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:26,980] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:27,060] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:30,433] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:30,693] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:30,890] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:33,269] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2023-10-20 14:25:33,575] torch._dynamo.symbolic_convert: [WARNING] /home/philip/sd_env/lib/python3.10/site-packages/torch/nn/modules/linear.py <function Linear.forward at 0x7f13a5fe4700> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments

and the output is still a random blobby image.

Complete pip freeze:

open-clip-torch==2.20.0
pytorch-lightning==1.9.4
torch==2.0.1
torchdiffeq==0.2.3
torchmetrics==1.2.0
torchsde==0.2.5
torchvision==0.15.2
(sd_env) philip@jupiter:~$ pip freeze
absl-py==2.0.0
accelerate==0.21.0
addict==2.4.0
aenum==3.1.15
aiofiles==23.2.1
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
antlr4-python3-runtime==4.9.3
anyio==3.7.1
async-timeout==4.0.3
attrs==23.1.0
basicsr==1.4.2
beautifulsoup4==4.12.2
blendmodes==2022
boltons==23.0.0
cachetools==5.3.1
certifi==2023.7.22
charset-normalizer==3.3.0
clean-fid==0.1.35
click==8.1.7
clip==1.0
cmake==3.27.7
contourpy==1.1.1
cycler==0.12.1
deprecation==2.1.0
diffusers==0.21.1
einops==0.4.1
exceptiongroup==1.1.3
facexlib==0.3.0
fastapi==0.94.0
ffmpy==0.3.1
filelock==3.12.4
filterpy==1.4.5
fonttools==4.43.1
frozenlist==1.4.0
fsspec==2023.9.2
ftfy==6.1.1
future==0.18.3
gdown==4.7.1
gfpgan==1.3.8
gitdb==4.0.10
GitPython==3.1.34
google-auth==2.23.3
google-auth-oauthlib==1.1.0
gradio==3.41.2
gradio_client==0.5.0
grpcio==1.59.0
h11==0.12.0
httpcore==0.15.0
httpx==0.24.1
huggingface-hub==0.18.0
idna==3.4
imageio==2.31.5
importlib-metadata==6.8.0
importlib-resources==6.1.0
inflection==0.5.1
invisible-watermark==0.2.0
Jinja2==3.1.2
jsonmerge==1.8.0
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
kiwisolver==1.4.5
kornia==0.6.7
lark==1.1.2
lazy_loader==0.3
lightning-utilities==0.9.0
lit==17.0.3
llvmlite==0.41.0
lmdb==1.4.1
lpips==0.1.4
Markdown==3.5
MarkupSafe==2.1.3
matplotlib==3.8.0
mpmath==1.3.0
multidict==6.0.4
networkx==3.2
numba==0.58.0
numpy==1.23.5
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
nvidia-cufft-cu11==10.9.0.58
nvidia-curand-cu11==10.2.10.91
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusparse-cu11==11.7.4.91
nvidia-nccl-cu11==2.14.3
nvidia-nvtx-cu11==11.7.91
oauthlib==3.2.2
omegaconf==2.2.3
open-clip-torch==2.20.0
opencv-python==4.8.1.78
openvino==2023.1.0.dev20230811
openvino-telemetry==2023.2.0
orjson==3.9.9
packaging==23.2
pandas==2.1.1
piexif==1.1.3
Pillow==9.5.0
platformdirs==3.11.0
protobuf==3.20.0
psutil==5.9.5
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.13
pydub==0.25.1
pyparsing==3.1.1
PySocks==1.7.1
python-dateutil==2.8.2
python-multipart==0.0.6
pytorch-lightning==1.9.4
pytz==2023.3.post1
PyWavelets==1.4.1
PyYAML==6.0.1
realesrgan==0.3.0
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
requests-oauthlib==1.3.1
resize-right==0.0.2
rpds-py==0.10.6
rsa==4.9
safetensors==0.3.1
scikit-image==0.21.0
scipy==1.11.3
semantic-version==2.10.0
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
soupsieve==2.5
starlette==0.26.1
sympy==1.12
tb-nightly==2.15.0a20231016
tensorboard-data-server==0.7.1
tifffile==2023.9.26
timm==0.9.2
tokenizers==0.13.3
tomesd==0.1.3
tomli==2.0.1
toolz==0.12.0
torch==2.0.1
torchdiffeq==0.2.3
torchmetrics==1.2.0
torchsde==0.2.5
torchvision==0.15.2
tqdm==4.66.1
trampoline==0.1.2
transformers==4.30.2
triton==2.0.0
typing_extensions==4.8.0
tzdata==2023.3
urllib3==2.0.7
uvicorn==0.23.2
wcwidth==0.2.8
websockets==11.0.3
Werkzeug==3.0.0
yapf==0.40.2
yarl==1.9.2
zipp==3.17.0

pjsg commented 8 months ago

@qiacheng Can you post a complete pip freeze that works?

oscarbg commented 8 months ago

same issue!

pjsg commented 8 months ago

Actually, just a note to say that my installation works if I don't use OpenVINO and just use the regular CPU path.

oscarbg commented 8 months ago

same for me..

J4ckTh3R1pper commented 8 months ago

Same for me on the latest master, on Gentoo Linux (6.5.5 kernel) with A750 graphics. I have to use an earlier commit to make the GPU work. My sysinfo: sysinfo-2023-10-24-12-07.txt My pip list: pip_list.txt

My advice is to go and try commit 10190ac from September. It worked great for me; sometimes it throws a bunch of errors while warming up, but after that it runs great.
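If anyone wants to try that, a rough sketch of checking out that commit (assuming a standard clone of this repo; 10190ac is the short hash mentioned above):

cd stable-diffusion-webui
git fetch origin
git checkout 10190ac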

qiacheng commented 8 months ago

@cavusmustafa could you help? My environments are on Windows, and this seems related to older torch packages being incompatible with OpenVINO 2023.1 on Linux.

ynimmaga commented 8 months ago

Hi, below is a configuration that worked for me on Linux:

Torch == 2.0.1
OpenVINO == 2023.1.0

However, it seems that the workflow breaks with the Lora extension enabled. If Lora is not important to you, you can temporarily disable the built-in extension to generate images successfully. This can be done by navigating to 'Extensions' in the WebUI and unchecking the 'Lora' extension. The WebUI needs to be restarted once for the change to take effect.

We are working on fixes to get this working with Torch 2.1.0, which will resolve this issue. Please stay tuned.
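For reference, pinning that combination on Linux could look roughly like this (a sketch, not an official install procedure; it assumes the virtual environment is active and reuses the torchvision 0.15.2 pin and the CPU wheel index mentioned earlier in this thread):

python -m pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cpu
python -m pip install openvino==2023.1.0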

qiacheng commented 8 months ago

The openvino nightly package with fixes is out. Please do the following:

\venv\scripts\python -m pip uninstall openvino
\venv\scripts\python -m pip install torch==2.1.0 torchvision==0.16.0 openvino-nightly
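For the Linux setups in this thread, the equivalent would be roughly (a sketch, assuming the virtual environment is active; the CPU wheel index is only needed if you want CPU-only torch builds):

# drop the old openvino package before switching to the nightly build
python -m pip uninstall -y openvino
python -m pip install torch==2.1.0 torchvision==0.16.0 --index-url https://download.pytorch.org/whl/cpu
python -m pip install openvino-nightly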

oscarbg commented 8 months ago

I'm out of luck with these instructions. Note that I'm using Python 3.11 and forced CPU-only wheels: pip3 install torch==2.1.0 torchvision==0.16.0 --index-url https://download.pytorch.org/whl/cpu

Now it fails while building the model. pip list shows:

openvino-nightly 2023.2.0.dev20231102
torch 2.1.0+cpu
torchvision 0.16.0+cpu

log:

** Error completing request
*** Arguments: ('task(cx47kghlluo8b5s)', 'cat', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x7f4236c469d0>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler a', True, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/home/osqui/stable/intel/stable-diffusion-webui/modules/call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
                   ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/modules/call_queue.py", line 36, in f
        res = func(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/modules/scripts.py", line 601, in run
        processed = script.run(p, *script_args)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/scripts/openvino_accelerate.py", line 1212, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/scripts/openvino_accelerate.py", line 963, in process_images_openvino
        output = shared.sd_diffusers_model(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 680, in __call__
        noise_pred = self.unet(
                     ^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 328, in _fn
        return fn(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors
        return callback(frame, cache_entry, hooks, frame_state)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 641, in _convert_frame
        result = inner_convert(frame, cache_size, hooks, frame_state)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 133, in _fn
        return fn(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 389, in _convert_frame_assert
        return _compile(
               ^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 569, in _compile
        guarded_code = compile_inner(code, one_graph, hooks, transform)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 491, in compile_inner
        out_code = transform_code_object(code, transform)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/bytecode_transformation.py", line 1028, in transform_code_object
        transformations(instructions, code_options)
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 458, in transform
        tracer.run()
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run
        super().run()
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 724, in run
        and self.step()
            ^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 688, in step
        getattr(self, inst.opname)(inst)
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2162, in RETURN_VALUE
        self.output.compile_subgraph(
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 857, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "/usr/lib/python3.11/contextlib.py", line 81, in inner
        return func(*args, **kwds)
               ^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 957, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1024, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 1009, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.example_inputs())
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/__init__.py", line 1607, in __call__
        return self.compiler_fn(model_, inputs_, **self.kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 95, in wrapper
        return fn(model, inputs, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/scripts/openvino_accelerate.py", line 217, in openvino_fx
        return compile_fx(subgraph, example_inputs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1150, in compile_fx
        return aot_autograd(
               ^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/backends/common.py", line 55, in compiler_fn
        cg = aot_module_simplified(gm, example_inputs, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 3891, in aot_module_simplified
        compiled_fn = create_aot_dispatcher_function(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 189, in time_wrapper
        r = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 3379, in create_aot_dispatcher_function
        fw_metadata = run_functionalized_fw_and_collect_metadata(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 757, in inner
        flat_f_outs = f(*flat_f_args)
                      ^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 3496, in functional_call
        out = Interpreter(mod).run(*args[params_len:], **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/fx/interpreter.py", line 138, in run
        self.env[node] = self.run_node(node)
                         ^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/fx/interpreter.py", line 195, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/fx/interpreter.py", line 312, in call_module
        return submod(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 444, in network_Conv2d_forward
        return originals.Conv2d_forward(self, input)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
        return self._conv_forward(input, self.weight, self.bias)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
        return F.conv2d(input, weight, bias, self.stride,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/utils/_stats.py", line 20, in wrapper
        return fn(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1250, in __torch_dispatch__
        return self.dispatch(func, types, args, kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 1487, in dispatch
        op_impl_out = op_impl(self, func, *args, **kwargs)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/torch/_subclasses/fake_tensor.py", line 677, in conv
        conv_backend = torch._C._select_conv_backend(**kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    torch._dynamo.exc.BackendCompilerFailed: backend='openvino_fx' raised:
    RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 2, 77, 768] to have 4 channels, but got 2 channels instead

    While executing %l__self___conv_in : [num_users=3] = call_module[target=L__self___conv_in](args = (%l_sample_,), kwargs = {})
    Original traceback:
      File "/home/osqui/stable/intel/sd_env/lib/python3.11/site-packages/diffusers/models/unet_2d_condition.py", line 934, in forward
        sample = self.conv_in(sample)

    Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information

    You can suppress this exception and fall back to eager by setting:
        import torch._dynamo
        torch._dynamo.config.suppress_errors = True

qiacheng commented 8 months ago

Please use Python 3.10, as A1111 SD WebUI requires Python 3.10. Also delete the cache folder in the stable-diffusion-webui directory if it exists.
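Concretely, that could look like this (a sketch; the paths are examples, adjust them to your own checkout and venv locations, and it assumes Python 3.10 is already installed on the system):

# create a fresh virtual environment with Python 3.10
python3.10 -m venv ~/sd_env
# remove the stale model cache inside the webui checkout, if present
rm -rf /path/to/stable-diffusion-webui/cache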

oscarbg commented 8 months ago

Switched to Python 3.10 and was seeing the same error, but deleting the cache folder fixed everything and now it all works! Thanks!! Can close now. Wondering whether Python 3.11 would also have worked if I had just deleted the cache folder.

qiacheng commented 8 months ago

No problem. This is an ongoing project and there are a lot of things that can be improved. Thanks for being a part of this journey.