AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check #1742

Open · Giro06 opened this issue 1 year ago

Giro06 commented 1 year ago

When I try to run webui-user.bat, this error is shown:

venv "C:\Users\giray\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: 67d011b02eddc20202b654dfea56528de3d5edf7 Traceback (most recent call last): File "C:\Users\giray\stable-diffusion-webui\launch.py", line 110, in run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'") File "C:\Users\giray\stable-diffusion-webui\launch.py", line 60, in run_python return run(f'"{python}" -c "{code}"', desc, errdesc) File "C:\Users\giray\stable-diffusion-webui\launch.py", line 54, in run raise RuntimeError(message) RuntimeError: Error running command. Command: "C:\Users\giray\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'" Error code: 1 stdout: stderr: Traceback (most recent call last): File "", line 1, in AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

DudeShift commented 1 year ago

In launch.py, line 15, change the line to commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test"), thus adding --skip-torch-cuda-test to COMMANDLINE_ARGS as stated in the error message.

I also had to add --precision full --no-half. HOWEVER, I am unable to run this on an AMD 5700 XT GPU; it defaults to using the CPU only. It seems like a lot of others have this same issue.

lechu1985 commented 1 year ago

For some reason setting the command line arguments in launch.py did not work for me. However setting them in the webui-user.sh script did the trick.

maikelsz commented 1 year ago

Or, in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without needing to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/
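
For what it's worth, a minimal sketch of that kind of automatic fallback in plain PyTorch might look like this (illustrative only; not necessarily how that project actually does it):

import torch

# Use the GPU when CUDA reports one, otherwise silently fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Half precision is generally only worthwhile on the GPU; stay in float32 on the CPU
# (roughly what --precision full --no-half asks the webui to do).
dtype = torch.float16 if device.type == "cuda" else torch.float32

print(f"Running on {device} with {dtype}")
# A model would then be moved the same way, e.g. model.to(device=device, dtype=dtype)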

zippy-zebu commented 1 year ago

For some reason setting the command line arguments in launch.py did not work for me. However setting them in the webui-user.sh script did the trick.

@lechu1985 How did you do that? In webui-user.sh there was no such variable.

If I add it in launch.py, then I get this error:

launch.py: error: unrecognized arguments: --skip-torch-cuda-test

maniac-0s commented 1 year ago

in webui-user.sh line 8:

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

maikelsz commented 1 year ago

in webui-user.sh line 8:

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--skip-torch-cuda-test"

or "webui-user.bat", if you are in Windows. like this: set COMMANDLINE_ARGS= --lowvram --precision full --no-half --skip-torch-cuda-test

atomboy1653 commented 1 year ago

Or, in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without needing to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

Thank you

tpiatan commented 1 year ago

But this still doesn't solve an ensuing issue: with --skip-torch-cuda-test (plus --precision full --no-half), your SD will use the CPU instead of the GPU, which reduces performance drastically and defeats the entire purpose.

So the root issue that needs to be addressed is: why is PyTorch not detecting the GPU in the first place?
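
One quick way to see what PyTorch itself reports is to run something like this inside the webui's venv (a rough diagnostic sketch; the exact output depends on which torch wheel got installed):

import torch

print("torch version:", torch.__version__)        # a "+cpu" suffix means a CPU-only wheel was installed
print("built against CUDA:", torch.version.cuda)  # None on CPU-only (and ROCm) builds
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device count:", torch.cuda.device_count())
    print("device 0:", torch.cuda.get_device_name(0))

If torch.version.cuda comes back None, the venv ended up with a CPU-only build, and reinstalling torch (or deleting the venv so the launcher reinstalls it, as suggested further down the thread) is the real fix rather than skipping the test.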

Lan-megumi commented 1 year ago

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?

TRIBVTES commented 1 year ago
  1. Same problem here: Ryzen 7 5800X with RX 6800.
  2. Just trying to install; is there a fix yet, or is my PC not compatible or something?
  3. Thanks for the help.

TRIBVTES commented 1 year ago

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?

Is it only going to work with NVIDIA cards, not Radeon?

y1052895290 commented 1 year ago

Same problem here, but my setup is a 12700KF with a GTX 1080 Ti, which should be compatible with the default Torch version and CUDA 11.8, right? Or maybe the Torch version is not compatible with Windows 11 and CUDA 11?

Rymegu commented 1 year ago

Same problem here; I have a Ryzen 5.

Lan-megumi commented 1 year ago

I had the same problem. I tried to solve it by googling; maybe my graphics card is too old (GTX 950M, roughly equivalent to a GTX 750) and uses CUDA 10.2. I guess the Torch version doesn't match my CUDA version?

Is it only going to work with NVIDIA cards, not Radeon?

Maybe...

There is another user with a Radeon RX 5700 for whom it does not work:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2191

HA-JD commented 1 year ago

Thanks. COMMANDLINE_ARGS=--skip-torch-cuda-test. Very helpful.

aphix commented 1 year ago

Same issue; the last commit confirmed working with xformers was https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/9d33baba587637815d818e5e641d8f8b74c4900d

To those in this thread: confirm by trying git checkout 9d33baba587637815d818e5e641d8f8b74c4900d, then rerun webui-user.bat.

Don't use the full-precision or low-vram stuff unless you don't want to use your GPU or you have reduced memory. (2080 Super 8GB / Windows 10)

Update: Try deleting the venv folder and running the webui-user.bat again. That seemed to get it working again for me.

rothej commented 1 year ago

CUDA is NVIDIA-proprietary software and only works with NVIDIA GPUs. So that's the answer for everyone with AMD who is wondering why their GPU isn't recognized...

ajalberd commented 1 year ago

But I thought it would work on Windows even with this ROCm PyTorch? Guess I'll have to switch to Linux...

pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1

DudeShift commented 1 year ago

Guess I'll have to switch to Linux.

Can confirm that on Linux, ROCm PyTorch works with AMD GPUs. I dual-booted into EndeavourOS (Arch) and followed the "Stable Diffusion Native Isekai Too" guide using the arch4edu ROCm PyTorch.

Getting 2.95~3 it/s on an RX 5700 XT.
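
If anyone wants to double-check that the ROCm build is really the one being used, a small sanity check like this should work (just a sketch; on ROCm wheels torch.version.hip is set and the GPU is still reported through the torch.cuda API):

import torch

print(torch.__version__)                  # ROCm wheels usually carry a "+rocm..." suffix
print("HIP runtime:", torch.version.hip)  # set on ROCm builds, None on CUDA/CPU builds
print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should name the Radeon card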

fesolla commented 1 year ago

Or, in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without needing to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

This totally worked! Thanks!

MetaphysicsNecrosis commented 1 year ago

If I comment this out, will everything be okay?:

if not skip_torch_cuda_test:

#    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
I mean, my notebook will not burn.

skerit commented 1 year ago

Shouldn't this be added to the https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs wiki page at least?

omni002 commented 1 year ago

Could someone explain how to fix this error to me in layman's terms?

rothej commented 1 year ago

@omni002 CUDA is NVIDIA-proprietary software for parallel processing of machine learning/deep learning models that is meant to run on NVIDIA GPUs, and it is a dependency for Stable Diffusion running on GPUs. If you have an AMD GPU, when you start up the webui it will test for CUDA and fail, preventing you from running Stable Diffusion. The workaround of adding --skip-torch-cuda-test skips that test, so the CUDA startup check is bypassed and Stable Diffusion will still run. Because you still can't run CUDA on your AMD GPU, it will default to using the CPU for processing, which will take much longer than parallel processing on a GPU would.

It looks like some people have been able to get their AMD cards to run Stable Diffusion by using ROCm PyTorch on Linux, but it doesn't appear to work on Windows from what people are commenting here. I have no idea how to set that up, and I am sure it is a pain in the ass, so maybe they can chime in on the specifics. @DudeShift

omni002 commented 1 year ago

Thanks, but I meant: could someone explain, in simple step-by-step layman's terms, how I add the line?

rothej commented 1 year ago

@omni002 Edit webui-user.bat; where it says set COMMANDLINE_ARGS= change it to: set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

Edit: The above assumes Windows. If Linux, add the line to webui-user.sh instead and use quotes (you may also need to delete the /venv folder based on others' comments): export COMMANDLINE_ARGS="--lowvram --precision full --no-half --skip-torch-cuda-test"

omni002 commented 1 year ago

Thanks that seems to have fixed it.

pypeaday commented 1 year ago

I cannot help with the Radeon folks, but this happens to me when my computer wakes up from sleep/being suspended. I found my issue on this pytorch forum: https://discuss.pytorch.org/t/cuda-fails-to-reinitialize-after-system-suspend/158108

TL;DR

sudo rmmod nvidia_uvm
sudo modprobe nvidia_uvm

This has worked for me with my RTX 3090, CUDA 11.7, and NVIDIA drivers 515.65.01

As others have said, if you use --skip-torch-cuda-test then you'll be running SD on your CPU, which defeats the purpose of having the card in the first place.

EDIT: I recognize now that the original poster is on a Windows machine and I proposed a Linux-based solution. I hope it helps others who come here, but I should've noticed that sooner!

JonJoeYT commented 1 year ago

I'm having the same error, but I am using NVIDIA?

JonJoeYT commented 1 year ago

I've managed to use the solution above, but I would much prefer to use the GPU if there's a possible solution for me. I am using an NVIDIA GeForce GTX.

pypeaday commented 1 year ago

@JonJoeYT did you try the commands I posted right above your comment? They essentially restart CUDA which should allow your Nvidia card to move along as normal

JonJoeYT commented 1 year ago

@JonJoeYT did you try the commands I posted right above your comment? They essentially restart CUDA which should allow your Nvidia card to move along as normal

I've looked at it, and the link, but I don't really understand what the solution is telling me to do.

- Open Task Manager and end the Python processes, and maybe Spotify (if I had that), then it should work?

pypeaday commented 1 year ago

So it sounds like you are on Windows (instead of Linux)... If you are, then my advice for going forward with anything ML/DL-related would be to install Docker Desktop for Windows and utilize NVIDIA's Docker runtime to take advantage of your GPU.

Since this is all built on PyTorch, you should take a look at NVIDIA's NGC containers... There's good documentation here.

Your other option is to install a Linux OS (either on a partition or a new disk) and pick a distro that makes this stuff easy. I use Pop!_OS 22.04 right now and it comes essentially pre-configured with all the NVIDIA stuff. You could install an Arch Linux distro as well; I've seen CUDA setup on Arch look as simple as one or two terminal commands.

JonJoeYT commented 1 year ago

Thank you for your help, I'll see what I can do. :)

JonJoeYT commented 1 year ago

You're right, I was on Windows. I've been looking through that and, well, I think I'll have to stick with the long waiting times. I don't really understand what I need to do with it all; I downloaded them both, and the link seems to need me to do some coding. Even Docker states "failed to start".

Mikolaj7777 commented 1 year ago

Same problem here, but my setup is a 12700KF with a GTX 1080 Ti, which should be compatible with the default Torch version and CUDA 11.8, right? Or maybe the Torch version is not compatible with Windows 11 and CUDA 11?

Did you manage to run it?

AsmKawsar83 commented 1 year ago

Or, in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without needing to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

Can't believe it just solved the issue!!! set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

Thanks a lot :)

JonJoeYT commented 1 year ago

I may have accidentally found my problem, but I don't know (and don't know how to solve it).

Just in case, here's my problem:

I have NVIDIA, but Stable Diffusion wasn't working with it (I'm managing to use it with the bypass method "--lowvram --precision full --no-half --skip-torch-cuda-test").

Here's what I think the problem is: my laptop has 2 graphics cards, and NVIDIA isn't the one my computer mainly uses; Intel is. I'm currently trying to switch it to mainly use the NVIDIA one, but I'm not sure; trying it in the NVIDIA Control Panel didn't seem to have an effect so far.

JonJoeYT commented 1 year ago

That solved my issue after a restart! :)

So, a solution for those with NVIDIA, just in case: search in the taskbar for "NVIDIA Control Panel", go to 3D Settings, set the drop-down box to NVIDIA, apply, then restart the computer. I didn't realise I had 2 graphics cards :)

Thanks for the help everyone :D

pypeaday commented 1 year ago

I think it should be pointed out that @JonJoeYT's solution may have worked for him, but 1. it would only apply to Windows users specifically, and 2. I don't think it addresses the root issue. PyTorch can use any GPU it detects; it doesn't have to use the GPU driving the display, so I think there's something else at the heart of it.
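
For the two-GPU laptop case, here is a rough sketch of how you can see and target a specific card from PyTorch's side (assuming the NVIDIA GPU is exposed by the driver at all; the integrated Intel GPU is not a CUDA device, so it never shows up here):

import torch

# List every CUDA-capable device PyTorch can see, regardless of which GPU drives the display.
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))

# Work can be sent to a specific card explicitly...
if torch.cuda.device_count() > 0:
    x = torch.randn(4, 4, device="cuda:0")
    print(x.device)

# ...or the process can be limited to one card before torch is imported,
# for example by setting the CUDA_VISIBLE_DEVICES environment variable.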

Mikolaj7777 commented 1 year ago

The issue might indeed be that the dGPU is not detected, but it can also be caused by new NVIDIA drivers, which was my case. Try downgrading to 516.69, as anything above was causing Torch to not detect the dGPU (Surface Book 2, 1060). I spent days on this issue here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/4622

sampanes commented 1 year ago

Update: Try deleting the venv folder and running the webui-user.bat again. That seemed to get it working again for me.

Thank you, this worked for me. My issue was that I tried to add Dreambooth via the webui and it broke, then I tried to fix it and broke it more. The caveat is that I already had SD working at some point and have the appropriate hardware.

KenAir commented 1 year ago

Or, in the file "webui-user.bat", change the line to set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

In this other project, if there is no NVIDIA GPU, the work is done on the CPU without needing to specify any startup parameters. It would be nice to see how they do it: https://github.com/cmdr2/stable-diffusion-ui/

Thank you, this is very useful, and I think I should go buy an NVIDIA card, haha.

ankdecision commented 1 year ago

I encountered this problem during installation today. I am using an RX 590. Is the only solution an NVIDIA GPU?

nonsonwune commented 1 year ago

In launch.py, line 133, where it says commandline_args = os.environ.get('COMMANDLINE_ARGS', ""), add --skip-torch-cuda-test so it reads commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test"). That's what worked for me.

bbecausereasonss commented 1 year ago

I just started getting this error after upgrading my NVIDIA drivers to the latest version. I've got a 4080, obviously with CUDA. Very confused about why this is happening now :/

NMWave commented 1 year ago

In case this helps someone, I fixed this issue by downgrading my Python. You probably have the latest version installed; install Python 3.9 and, in stable-diffusion-webui/venv/pyvenv.cfg, point SD to the correct version.

3.9.13 works for me
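
To confirm the venv is actually resolving to the downgraded interpreter after editing pyvenv.cfg, a quick check from inside the venv (illustrative only) is:

import sys

# Should report the 3.9.x interpreter the venv now points at, not the system default.
print(sys.version)
print(sys.executable)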

hemangjoshi37a commented 1 year ago

I have one GT 710 card lying in my basement. Could anyone please let me know if it supports SD? It has a few hundred CUDA cores. https://hjlabs.in

pypeaday commented 1 year ago

@hemangjoshi37a The GT 710 went end-of-life back in October 2021. I can't say SD definitely will not work, but assuming it does, it will not be that much better than just using a modern CPU at that scale (I think... I don't have data to back that up, but the GT 710 is an ancient and very low-spec card).

Tekime commented 1 year ago

SD was previously working on Python 3.9; I upgraded to 3.10.5 and got this error.

Fixed it by rebuilding the venv: renamed the venv folder to venv.old and restarted webui.bat.

On Windows w/ RTX 3080