Porkechebure opened 6 months ago
"Torch is not able to use GPU" error

If you got this after running pip install torch or pip install --upgrade torch, then that means you installed the wrong version of torch. When uninstalling or upgrading torch, you'll notice that the output contains the following lines:

Found existing installation: torch 2.0.1+<XXX>
Uninstalling torch-2.0.1+<XXX>:

where <XXX> is +cu118 or +rocm5.4.2 depending on your computer. This suffix is the key. pip pulls from the official package index by default, and you'll only get torch-2.0.1 from there. If you dig around the source to see how webui installs torch, you'll find that the install command appends --index-url/--extra-index-url to the pip command, with the URL being https://download.pytorch.org/whl/XXX, where XXX is cu118 or rocm5.4.2 depending on your computer. And how in the world does webui know these URLs?

Solution

From the first Google result for "torch pip install": click the buttons that apply to your computer and it'll generate the install command. So all you need to do now is append the given --index-url <URL> part to your pip command when installing/upgrading torch, and you'll have the correct version once more. Note that all of the above also applies to torchvision.
Bonus

If you understood all of the above, then you can safely upgrade your torch, torchvision, and dependent packages to the latest versions. For Nvidia GPUs, you can even upgrade them to use CUDA 12.1 (provided your GPU supports it) instead of the default CUDA 11.8.
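The suffix-to-index-URL mapping described above can be sketched as a small Python helper (the function name is mine, not part of webui; only the cu118/rocm5.4.2 suffixes come from this thread):

```python
def wheel_index_url(version: str) -> str:
    """Map a torch version string's local suffix to the matching wheel index URL."""
    base = "https://download.pytorch.org/whl/"
    if "+" not in version:
        return base  # plain PyPI build, e.g. "2.0.1" — no hardware-specific suffix
    # "2.0.1+cu118" -> "cu118", "2.0.1+rocm5.4.2" -> "rocm5.4.2"
    suffix = version.split("+", 1)[1]
    return base + suffix

print(wheel_index_url("2.0.1+cu118"))  # https://download.pytorch.org/whl/cu118
```

Comparing torch.__version__ against this is a quick way to confirm which build pip actually installed.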
Erasing all lines containing "socket_options" from \venv\lib\site-packages\httpx\_transports\default.py will get it to work.
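That file edit can be sketched as a short script (the function name is mine, and the path varies per install; back up the file before running anything like this):

```python
from pathlib import Path

def strip_socket_options(path: str) -> int:
    """Remove every line mentioning 'socket_options' from the given file.

    Sketch of the manual workaround above; returns the number of lines removed.
    """
    p = Path(path)
    lines = p.read_text(encoding="utf-8").splitlines(keepends=True)
    kept = [line for line in lines if "socket_options" not in line]
    p.write_text("".join(kept), encoding="utf-8")
    return len(lines) - len(kept)
```

Note this only papers over an httpx/httpcore version mismatch; pinning compatible versions of those two packages is the cleaner fix.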
I'm not a programmer; I'm trying to do the best I can with help from ChatGPT and Gemini, but it's really complicated. Can you help me?
On an AWS EC2 g5.xlarge instance with Ubuntu 22.04, even after running

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

I've got:
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
Is there an existing issue for this?
What happened?
Tried to run SDXL. Downloaded the models. First it didn't like SDXL and said it didn't find some params and shit. Got a lot of errors like "sd_xl_base_1.0.safetensors Failed to load checkpoint, restoring previous" or:

    original(module, state_dict, strict=strict)
  File "D:\sdauto\stable-diffusion-webui-master\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AutoencoderKLInferenceWrapper: Missing key(s) in state_dict: "encoder.conv_in.weight",

and stuff. Then the VAE also didn't load.
changing setting sd_vae to sdxl_vae.safetensors: AttributeError
Traceback (most recent call last):
  File "D:\sdauto\stable-diffusion-webui-master\stable-diffusion-webui\modules\options.py", line 140, in set
    option.onchange()
  File "D:\sdauto\stable-diffusion-webui-master\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "D:\sdauto\stable-diffusion-webui-master\stable-diffusion-webui\modules\initialize_util.py", line 171, in
    shared.opts.onchange("sd_vae", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
  File "D:\sdauto\stable-diffusion-webui-master\stable-diffusion-webui\modules\sd_vae.py", line 255, in reload_vae_weights
    checkpoint_info = sd_model.sd_checkpoint_info
AttributeError: 'NoneType' object has no attribute 'sd_checkpoint_info'
I reinstalled torch 2.0 and now: "Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check".
Why is this cancer so complicated to run? Make a fkn install procedure.
Also tried to run the """"SPECIAL"""" NVIDIA SETUP.
Result:

[notice] To update, run: D:\sdauto\sd.webui\webui\venv\Scripts\python.exe -m pip install --upgrade pip
Installing gfpgan
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
Traceback (most recent call last):
  File "D:\sdauto\sd.webui\webui\launch.py", line 325, in
start()
File "D:\sdauto\sd.webui\webui\launch.py", line 316, in start
import webui
File "D:\sdauto\sd.webui\webui\webui.py", line 16, in
from modules import extra_networks_hypernet, ui_extra_networks_hypernets, ui_extra_networks_textual_inversion
File "D:\sdauto\sd.webui\webui\modules\extra_networks_hypernet.py", line 2, in
from modules.hypernetworks import hypernetwork
File "D:\sdauto\sd.webui\webui\modules\hypernetworks\hypernetwork.py", line 10, in
import modules.textual_inversion.dataset
File "D:\sdauto\sd.webui\webui\modules\textual_inversion\dataset.py", line 13, in
from modules import devices, shared
File "D:\sdauto\sd.webui\webui\modules\shared.py", line 9, in
import gradio as gr
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\gradio\__init__.py", line 3, in
import gradio.components as components
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\gradio\components.py", line 34, in
from gradio import media_data, processing_utils, utils
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\gradio\processing_utils.py", line 23, in
from gradio import encryptor, utils
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\gradio\utils.py", line 416, in
class AsyncRequest:
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\gradio\utils.py", line 436, in AsyncRequest
client = httpx.AsyncClient()
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\httpx\_client.py", line 1397, in __init__
self._transport = self._init_transport(
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\httpx\_client.py", line 1445, in _init_transport
return AsyncHTTPTransport(
File "D:\sdauto\sd.webui\webui\venv\lib\site-packages\httpx\_transports\default.py", line 275, in __init__
self._pool = httpcore.AsyncConnectionPool(
TypeError: AsyncConnectionPool.__init__() got an unexpected keyword argument 'socket_options'
Press any key to continue . . .
Steps to reproduce the problem
install the cancer
What should have happened?
You know... just work
Sysinfo
Win 11, latest
What browsers do you use to access the UI ?
Google Chrome
Console logs
Additional information