lllyasviel / stable-diffusion-webui-forge


[Bug]: Cannot install WebUI-Forge: RuntimeError: Torch is not able to use GPU #521

Open · Aleh2 opened 8 months ago

Aleh2 commented 8 months ago

Checklist

What happened?

I tried to install WebUI-Forge (intending to create a dual set-up with the main WebUI for 1.5 models and WebUI-Forge for XL). To this end, I cloned the repo into a new directory and moved all of my XL checkpoints (and the XL VAE) into the relevant folders. Then I ran webui-user.bat.

I got... well, this (the full install log is reproduced under Console logs below):

> RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Attempting to run webui-user.bat again yields the same RuntimeError (see "On subsequent runs" under Console logs below).

Adding --skip-torch-cuda-test to the commandline args simply results in the script hanging. Attempting to manually set the GPU via device id does nothing to help. Attempting to reinstall WebUI-Forge does nothing to help.

I am genuinely puzzled as to what's going on. Again, this is the same system I have main Webui on, and it runs perfectly well.

Steps to reproduce the problem

Clone the repo onto my system.

Run webui-user.bat.

What should have happened?

WebUI-Forge should have run.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

Attempting to use the --dump-sysinfo commandline argument produces the following:

venv "C:\SD\stable-diffusion-webui-forge\venv\Scripts\Python.exe" Traceback (most recent call last): File "C:\SD\stable-diffusion-webui-forge\launch.py", line 51, in main() File "C:\SD\stable-diffusion-webui-forge\launch.py", line 29, in main filename = launch_utils.dump_sysinfo() File "C:\SD\stable-diffusion-webui-forge\modules\launch_utils.py", line 554, in dump_sysinfo from modules import sysinfo File "C:\SD\stable-diffusion-webui-forge\modules\sysinfo.py", line 8, in import psutil ModuleNotFoundError: No module named 'psutil'

Console logs

On the initial/install runs (I tried twice):
> Creating venv in directory C:\SD\stable-diffusion-webui-forge\venv using python "C:\Users\[Me]\AppData\Local\Programs\Python\Python310\python.exe"
> venv "C:\SD\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
> Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
> Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
> Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
> Installing torch and torchvision
> Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
> WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))': /whl/cu121/torch/
> Collecting torch==2.1.2
>   Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-win_amd64.whl (2473.9 MB)
> Collecting torchvision==0.16.2
>   Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl (5.6 MB)
> Collecting filelock
>   Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
> Collecting networkx
>   Using cached https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
> Collecting sympy
>   Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
> Collecting jinja2
>   Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
> Collecting typing-extensions
>   Using cached typing_extensions-4.10.0-py3-none-any.whl (33 kB)
> Collecting fsspec
>   Using cached fsspec-2024.2.0-py3-none-any.whl (170 kB)
> Collecting numpy
>   Using cached numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
> Collecting pillow!=8.3.*,>=5.3.0
>   Using cached https://download.pytorch.org/whl/pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
> Collecting requests
>   Using cached requests-2.31.0-py3-none-any.whl (62 kB)
> Collecting MarkupSafe>=2.0
>   Using cached MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
> Collecting idna<4,>=2.5
>   Using cached idna-3.6-py3-none-any.whl (61 kB)
> Collecting charset-normalizer<4,>=2
>   Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
> Collecting certifi>=2017.4.17
>   Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
> Collecting urllib3<3,>=1.21.1
>   Using cached urllib3-2.2.1-py3-none-any.whl (121 kB)
> Collecting mpmath>=0.19
>   Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
> Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
> Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2024.2.0 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.1.2+cu121 torchvision-0.16.2+cu121 typing-extensions-4.10.0 urllib3-2.2.1
>  
> [notice] A new release of pip available: 22.3.1 -> 24.0
> [notice] To update, run: C:\SD\stable-diffusion-webui-forge\venv\Scripts\python.exe -m pip install --upgrade pip
> Traceback (most recent call last):
>   File "C:\SD\stable-diffusion-webui-forge\launch.py", line 51, in <module>
>     main()
>   File "C:\SD\stable-diffusion-webui-forge\launch.py", line 39, in main
>     prepare_environment()
>   File "C:\SD\stable-diffusion-webui-forge\modules\launch_utils.py", line 431, in prepare_environment
>     raise RuntimeError(
> RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

On subsequent runs:

> venv "C:\SD\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
> Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
> Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
> Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
> Traceback (most recent call last):
>   File "C:\SD\stable-diffusion-webui-forge\launch.py", line 51, in <module>
>     main()
>   File "C:\SD\stable-diffusion-webui-forge\launch.py", line 39, in main
>     prepare_environment()
>   File "C:\SD\stable-diffusion-webui-forge\modules\launch_utils.py", line 431, in prepare_environment
>     raise RuntimeError(
> RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Additional information

No response

Aleh2 commented 8 months ago

Attempted to install again (in a new/third directory) via the zip. Ran update.bat. Ran run.bat. Got roughly the same error message:

> Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
> Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
> Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
> Traceback (most recent call last):
>   File "C:\SD\webui_forge_cu121_torch21\webui\launch.py", line 51, in <module>
>     main()
>   File "C:\SD\webui_forge_cu121_torch21\webui\launch.py", line 39, in main
>     prepare_environment()
>   File "C:\SD\webui_forge_cu121_torch21\webui\modules\launch_utils.py", line 431, in prepare_environment
>     raise RuntimeError(
> RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

DA-Charlie commented 8 months ago

Run C:\SD\stable-diffusion-webui-forge\venv\Scripts\python.exe -m pip list to check the torch version.

When you scroll through the list, you should see something like torch 2.1.2+cu121.

Aleh2 commented 8 months ago

torch is listed as 2.2.1+cu121 in said list.

DA-Charlie commented 8 months ago

that's strange.. you have cuda installed.. it should work

in launch_utils there is this part:

> if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"):
>     raise RuntimeError(
>         'Torch is not able to use GPU; '
>         'add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'
>     )
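
For context, roughly what that check amounts to: a sketch only, assuming check_run_python simply runs the given snippet with the venv's own interpreter in a subprocess (not Forge's exact code); the path is the one from this thread.

> # Sketch: run the CUDA assertion with the venv's python.exe and treat a
> # non-zero exit code as "Torch is not able to use GPU".
> import subprocess
>
> venv_python = r"C:\SD\stable-diffusion-webui-forge\venv\Scripts\python.exe"  # path from this thread
> result = subprocess.run(
>     [venv_python, "-c", "import torch; assert torch.cuda.is_available()"],
>     capture_output=True, text=True,
> )
> if result.returncode != 0:
>     raise RuntimeError(
>         "Torch is not able to use GPU; "
>         "add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check"
>     )

The point is that the check runs inside the venv, so a CUDA-capable torch elsewhere on the system doesn't help.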

try to check it yourself: run C:\SD\stable-diffusion-webui-forge\venv\Scripts\python.exe and, in the interpreter:

> import torch
> print(torch.cuda.is_available())
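
A slightly more verbose version of the same check, as a sketch to paste into the interpreter started from that venv python.exe; it also prints which CUDA runtime the wheel was built against, which helps spot a driver mismatch:

> import torch
>
> print(torch.__version__)          # e.g. 2.1.2+cu121: wheel built for CUDA 12.1
> print(torch.version.cuda)         # CUDA runtime bundled with the wheel
> print(torch.cuda.is_available())  # False usually means no visible GPU or a too-old driver
> if torch.cuda.is_available():
>     print(torch.cuda.get_device_name(0))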

DA-Charlie commented 8 months ago

make sure no env is activated, otherwise the Python that gets invoked will be the one from that env rather than this one: C:\SD\stable-diffusion-webui-forge\venv\Scripts\python.exe

an example: with env activated

(base) PS C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python

without:

PS C:\Users\Charl\Desktop\python_charlie\A1111\webui_forge_cu121_torch21\system\python

DA-Charlie commented 8 months ago

just noticed our installation structures are different.. i used the one-click method.. is it possible that each method uses a different structure? 🤔🤔

Aleh2 commented 8 months ago

I tried the one-click method as well. I also seem to have found the issue.

> PS C:\SD\stable-diffusion-webui-forge\venv\Scripts> python.exe
> Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import torch
> >>> print(torch.cuda.is.available())
>   File "<stdin>", line 1
>     print(torch.cuda.is.available())
>                       ^^
> SyntaxError: invalid syntax
> >>> print(torch.cuda.is_available())
> C:\Users\[me]\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py:141: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 11070). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:108.)
>   return torch._C._cuda_getDeviceCount() > 0
> False

I find it interesting that it's trying to call CUDA from my general environment rather than the local one, but this certainly explains why Forge won't load. Also shows me that I need to manually update my drivers ('cause my updating software clearly isn't doing it for me).
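
One quick way to confirm which installation an interpreter is actually importing torch from (a small sketch, nothing Forge-specific):

> import sys
> import torch
>
> print(sys.executable)   # the python.exe that is running
> print(torch.__file__)   # the site-packages folder the torch import resolved to

If torch.__file__ points at the global Python310 site-packages (as in the warning above) rather than the venv, the import is coming from the system-wide install.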

DA-Charlie commented 8 months ago

let's see if the update will solve it

DA-Charlie commented 8 months ago

by the way, you can also do venv\Scripts\activate to activate this environment and then check torch.cuda.is_available() there

Aleh2 commented 8 months ago

Just went to NVIDIA's site. Updated my drivers. It immediately started installing requirements... so it looks like it might have just been a driver issue manifesting in a weird way.

venv "C:\SD\stable-diffusion-webui-forge\venv\Scripts\Python.exe" Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Version: f0.0.17v1.8.0rc-latest-276-g29be1da7 Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7 Installing clip Installing open_clip Cloning assets into C:\SD\stable-diffusion-webui-forge\repositories\stable-diffusion-webui-assets... Cloning into 'C:\SD\stable-diffusion-webui-forge\repositories\stable-diffusion-webui-assets'... remote: Enumerating objects: 20, done. remote: Counting objects: 100% (20/20), done. remote: Compressing objects: 100% (18/18), done. Receiving objects: 100% (20/20), 132.70 KiB | 1.52 MiB/s, done

Cloning Stable Diffusion into C:\SD\stable-diffusion-webui-forge\repositories\stable-diffusion-stability-ai... Cloning into 'C:\SD\stable-diffusion-webui-forge\repositories\stable-diffusion-stability-ai'... remote: Enumerating objects: 580, done. remote: Counting objects: 100% (357/357), done. remote: Compressing objects: 100% (128/128), done. Receiving objects: 32% (186/580), 50.45 MiB | 33.56 MiB/sremote: Total 580 (delta 260), reused 229 (delta 229), pack-reReceivin (580/580), 73.44 MiB | 33.57 MiB/s, done. Resolving deltas: 0% (0/279) Resolving deltas: 100% (279/279), done. Cloning Stable Diffusion XL into C:\SD\stable-diffusion-webui-forge\repositories\generative-models... Cloning into 'C:\SD\stable-diffusion-webui-forge\repositories\generative-models'... remote: Enumerating objects: 882, done. remote: Counting objects: 100% (511/511), done. remote: Compressing objects: 100% (234/234), done. remote: Total 882 (delta 382), reused 283 (delta 276), pack-reused 371 Receiving objects: 100% (882/882), 42.68 MiB | 25.35 MiB/s, done. Resolving deltas: 100% (459/459), done. Cloning K-diffusion into C:\SD\stable-diffusion-webui-forge\repositories\k-diffusion... Cloning into 'C:\SD\stable-diffusion-webui-forge\repositories\k-diffusion'... remote: Enumerating objects: 1340, done. remote: Counting objects: 100% (622/622), done. remote: Compressing objects: 100% (86/86), done. remote: Total 1340 (delta 576), reused 547 (delta 536), pack-reused 718 Receiving objects: 100% (1340/1340), 242.04 KiB | 3.36 MiB/s, done. Resolving deltas: 100% (939/939), done. Cloning BLIP into C:\SD\stable-diffusion-webui-forge\repositories\BLIP... Cloning into 'C:\SD\stable-diffusion-webui-forge\repositories\BLIP'... remote: Enumerating objects: 277, done. remote: Counting objects: 100% (165/165), done. remote: Compressing objects: 100% (30/30), done. remote: Total 277 (delta 137), reused 136 (delta 135), pack-reused 112Receiving objects: 99% (275/277), 6.88 MiB | 13.4

Resolving deltas: 100% (152/152), done. Installing requirements Installing forge_legacy_preprocessor requirement: fvcore Installing forge_legacy_preprocessor requirement: mediapipe Installing forge_legacy_preprocessor requirement: onnxruntime Installing forge_legacy_preprocessor requirement: svglib Installing forge_legacy_preprocessor requirement: insightface Installing forge_legacy_preprocessor requirement: handrefinerportable Installing forge_legacy_preprocessor requirement: depth_anything Launching Web UI with arguments: Total VRAM 16384 MB, total RAM 32451 MB Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 3080 Ti Laptop GPU : native Hint: your device supports --pin-shared-memory for potential speed improvements. Hint: your device supports --cuda-malloc for potential speed improvements. Hint: your device supports --cuda-stream for potential speed improvements. VAE dtype: torch.bfloat16 CUDA Stream Activated: False Using pytorch cross attention Downloading: "https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/realisticVisionV51_v51VAE.safetensors" to C:\SD\stable-diffusion-webui-forge\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors

100%|█████████████████████████████████████████████████████████████████████████████| 1.99G/1.99G [00:53<00:00, 39.6MB/s] ControlNet preprocessor location: C:\SD\stable-diffusion-webui-forge\models\ControlNetPreprocessor Calculating sha256 for C:\SD\stable-diffusion-webui-forge\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors: 2024-03-10 08:21:41,759 - ControlNet - INFO - ControlNet UI callback registered. Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch(). Startup time: 353.9s (prepare environment: 282.2s, import torch: 7.0s, import gradio: 1.7s, setup paths: 2.4s, initialize shared: 0.4s, other imports: 2.4s, list SD models: 54.2s, load scripts: 2.4s, create ui: 0.6s, gradio launch: 0.3s). 15012c538f503ce2ebfc2c8547b268c75ccdaff7a281db55399940ff1d70e21d Loading weights [15012c538f] from C:\SD\stable-diffusion-webui-forge\models\Stable-diffusion\realisticVisionV51_v51VAE.safetensors model_type EPS UNet ADM Dimension 0 Using pytorch attention in VAE Working with z of shape (1, 4, 32, 32) = 4096 dimensions. Using pytorch attention in VAE extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'} To load target model SD1ClipModel Begin to load 1 model [Memory Management] Current Free GPU Memory (MB) = 15222.9990234375 [Memory Management] Model Memory (MB) = 454.2076225280762 [Memory Management] Minimal Inference Memory (MB) = 1024.0 [Memory Management] Estimated Remaining GPU Memory (MB) = 13744.791400909424 Moving model(s) has taken 0.30 seconds Model loaded in 7.3s (calculate hash: 3.6s, forge load real models: 3.1s, calculate empty prompt: 0.6s).

Wondering why RealisticVision, though. Moved my models back into place, and...

venv "C:\SD\stable-diffusion-webui-forge\venv\Scripts\Python.exe" Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Version: f0.0.17v1.8.0rc-latest-276-g29be1da7 Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7 Launching Web UI with arguments: Total VRAM 16384 MB, total RAM 32451 MB Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 3080 Ti Laptop GPU : native Hint: your device supports --pin-shared-memory for potential speed improvements. Hint: your device supports --cuda-malloc for potential speed improvements. Hint: your device supports --cuda-stream for potential speed improvements. VAE dtype: torch.bfloat16 CUDA Stream Activated: False Using pytorch cross attention ControlNet preprocessor location: C:\SD\stable-diffusion-webui-forge\models\ControlNetPreprocessor Calculating sha256 for C:\SD\stable-diffusion-webui-forge\models\Stable-diffusion\albedobaseXL_v13.safetensors: 2024-03-10 08:26:07,106 - ControlNet - INFO - ControlNet UI callback registered. Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch(). Startup time: 14.9s (prepare environment: 4.3s, import torch: 4.2s, import gradio: 0.8s, setup paths: 0.8s, initialize shared: 0.1s, other imports: 0.6s, list SD models: 2.0s, load scripts: 1.4s, create ui: 0.5s, gradio launch: 0.2s).

So, yeah. Looks like it was (mostly) just a driver issue. Odd.

DA-Charlie commented 8 months ago

ah great.. yeah it downloads RVision during the first installation.. just takes time for nothing 😵