Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
After clicking "Generate", the progress stayed at 0%. In Task Manager I found that "python.exe" was using about 30% CPU and a large amount of RAM, but even after waiting for about an hour the generation was still at 0%.
Steps to reproduce the problem
What should have happened?
Generation should begin correctly.
Sysinfo
sysinfo-2023-11-16-09-52.txt
What browsers do you use to access the UI ?
Microsoft Edge
Console logs
venv "D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\venv\Scripts\Python.exe"
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half --skip-prepare-environment
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [39d6af08b2] from D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\models\Stable-diffusion\Qtea\qteamixQ_omegaFp16.safetensors
fatal: No names found, cannot describe anything.
Creating model from config: D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 18.6s (import torch: 7.6s, import gradio: 1.7s, setup paths: 2.0s, initialize shared: 0.3s, other imports: 1.8s, setup codeformer: 0.2s, list SD models: 0.3s, load scripts: 2.6s, create ui: 1.8s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 15.3s (load weights from disk: 2.0s, create model: 0.5s, apply weights to model: 11.8s, apply float(): 0.7s, calculate empty prompt: 0.2s).
{}
Loading weights [39d6af08b2] from D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\models\Stable-diffusion\Qtea\qteamixQ_omegaFp16.safetensors
OpenVINO Script: created model from config : D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\configs\v1-inference.yaml
D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-test\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
0%| | 0/20 [00:00<?, ?it/s]
Additional information
I had edited "torch-install.bat" to bump the openvino-nightly version to the newest build (2023.3.0.dev20231115). After the failure, I first assumed this fork was incompatible with that newer version; but after switching to the develop branch, again updating openvino-nightly to the newest build (by copying torch-install.bat from the master branch directly into develop), and trying once more, it surprisingly worked. Sysinfo from that run: sysinfo-2023-11-16-09-36.txt; console logs are below.
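For reference, the change to torch-install.bat was only a version bump of the pinned openvino-nightly package. A minimal, hypothetical sketch of that kind of edit (the exact install line and any surrounding variables in the real script may differ):
rem hypothetical excerpt of torch-install.bat; only the pinned nightly version was changed
rem before: pip install openvino-nightly==<older dev build>
rem after bumping to the newest nightly build:
pip install openvino-nightly==2023.3.0.dev20231115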
venv "D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\venv\Scripts\Python.exe"
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half --skip-prepare-environment
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [39d6af08b2] from D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\models\Stable-diffusion\Qtea\qteamixQ_omegaFp16.safetensors
Creating model from config: D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\configs\v1-inference.yaml
fatal: No names found, cannot describe anything.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 18.5s (import torch: 7.5s, import gradio: 1.8s, setup paths: 1.6s, initialize shared: 0.3s, other imports: 1.7s, setup codeformer: 0.2s, list SD models: 0.3s, load scripts: 2.9s, create ui: 1.6s, gradio launch: 0.2s).
Applying attention optimization: InvokeAI... done.
Model loaded in 13.3s (load weights from disk: 1.6s, create model: 0.6s, apply weights to model: 10.4s, apply float(): 0.5s, calculate empty prompt: 0.2s).
{}
Loading weights [39d6af08b2] from D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\models\Stable-diffusion\Qtea\qteamixQ_omegaFp16.safetensors
OpenVINO Script: created model from config : D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\configs\v1-inference.yaml
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000023A3BFA90F0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: f3d8bc39-790d-49a2-a827-eb9e325a8aae)')' thrown while requesting HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/config.json
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14/resolve/main/vocab.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000023A3BFAB5E0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: f304bdc3-e9b1-47b3-91ba-c79feaec7cee)')' thrown while requesting HEAD https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/vocab.json
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /CompVis/stable-diffusion-safety-checker/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000023A3BFA95D0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: c03022f9-b7a0-4c49-84e9-44fe8b57736f)')' thrown while requesting HEAD https://huggingface.co/CompVis/stable-diffusion-safety-checker/resolve/main/config.json
'(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x0000023A3C83A920>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: e4a6ad3a-4dd5-4825-916f-610b3cf43d9c)')' thrown while requesting HEAD https://huggingface.co/CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json
D:\AI\sd-webui\openVINO\stable-diffusion-webui-ov-dev\venv\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [08:05<00:00, 24.30s/it]
{}
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:30<00:00, 7.54s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:57<00:00, 8.87s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [03:28<00:00, 10.42s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [02:25<00:00, 7.29s/it]