AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check #1742

Open Giro06 opened 1 year ago

Giro06 commented 1 year ago

When I try to run webui-user.bat, this error is shown:

venv "C:\Users\giray\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 67d011b02eddc20202b654dfea56528de3d5edf7
Traceback (most recent call last):
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 110, in <module>
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 60, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\Users\giray\stable-diffusion-webui\launch.py", line 54, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\Users\giray\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

littespace1 commented 1 year ago

This stuff is a mess. I'm using CUDA 10.2 (driver version 442.80) on an MSI GS66 Stealth 10SF (CPU: i7-10750H), and it doesn't recognize that I have CUDA. I have updated the drivers and manually downloaded and installed the CUDA toolkit. I ran the code suggested here and I still get error messages. Anyone have an idea where to go from here?

(screenshot of error message)

pypeaday commented 1 year ago

@littespace1 - When pip throws an error like Could not find a version that satisfies the requirement <package details> (from versions: none), it indicates that the package is not supported on the version of Python you are using. I don't see which version of Python you are using here, but this is a pip issue, not a CUDA issue.

I suggest building an environment with Python 3.9.x, or making it the default interpreter that the repo's webui script will use; it is well supported, a bit old but not EOL, and very stable.

edit: I use python 3.10.6 without issue but I see that some people do have 3.10 woes for some reason

edit: It also looks like you passed --skip-torch-cuda-test to the pip install command. I'm not sure how you did that, but it will also cause pip to throw errors (which it did at the top of your screenshot).
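As a quick sanity check of the point above (a minimal sketch, not part of the webui scripts; `interpreter_info` is a hypothetical helper), you can confirm which interpreter and version the webui actually uses by running this with the venv's python, e.g. venv\Scripts\python.exe:

```python
import sys

def interpreter_info():
    """Return the (major, minor) version and path of the running interpreter."""
    return sys.version_info[:2], sys.executable

version, path = interpreter_info()
print(f"Python {version[0]}.{version[1]} at {path}")
```

If this prints a different version than the one you installed, the webui script is picking up another interpreter.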

littespace1 commented 1 year ago

@littespace1 - When pip throws an error like Could not find a version that satisfies the requirement <package details> (from versions: none), it indicates that the package is not supported on the version of Python you are using. I don't see which version of Python you are using here, but this is a pip issue, not a CUDA issue.

I suggest building an environment with Python 3.9.x, or making it the default interpreter that the repo's webui script will use; it is well supported, a bit old but not EOL, and very stable.

edit: I use python 3.10.6 without issue but I see that some people do have 3.10 woes for some reason

edit: It also looks like you passed --skip-torch-cuda-test to the pip install command. I'm not sure how you did that, but it will also cause pip to throw errors (which it did at the top of your screenshot).

As you note, I'm using 3.10, as recommended by a tutorial I watched, but I'll try 3.9.x. Also, I only added --skip-torch-cuda-test (in the webui-user batch file, after set COMMANDLINE_ARGS=) because when I don't add it, I get an entirely different error:

AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

pypeaday commented 1 year ago

Just to clarify: --skip-torch-cuda-test means SD will run on your CPU only, which will be very slow compared to the 2070 in your laptop (if I read the specs online right). The error about Torch not finding your GPU is the problem you really want to solve, I think. You've mentioned CUDA 10.2 and that you have the latest drivers. Can you check the torch requirement in the version of this repo you have checked out? It's possible that the pytorch version pinned in requirements.txt requires a different CUDA version, but you'd have to check that against the PyTorch docs.
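To see at a glance whether the installed torch build was compiled with CUDA and can reach the GPU (a sketch; `torch_gpu_report` is a hypothetical helper, and it must be run with the venv's own interpreter to be meaningful):

```python
def torch_gpu_report():
    """Summarize whether the installed torch build can see a CUDA GPU."""
    try:
        import torch
    except ImportError:
        return {"installed": False}
    return {
        "installed": True,
        "version": torch.__version__,       # e.g. "1.13.1+cu117" or "1.13.1+cpu"
        "cuda_build": torch.version.cuda,   # None on CPU-only builds
        "gpu_available": torch.cuda.is_available(),
    }

print(torch_gpu_report())
```

A version string ending in +cpu, or cuda_build of None, means the wheel itself has no CUDA support, regardless of drivers.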

alpacaccp commented 1 year ago

In my case, it seemed to be a Windows 11 problem.

One of my 3060 Ti cards worked fine for a few weeks. When I set up a new Windows 11 machine with another 3060 Ti, Torch wasn't happy; I tried different drivers with the same problem, so I wiped the SSD and installed Windows 10.

On Windows 10, Torch still wasn't happy, but I noticed it was complaining about VC_redist.x64.exe. After installing VC_redist.x64.exe, everything was good.

I can't remember whether the VC_redist.x64.exe error appeared on Windows 11, but probably not. I don't want to wipe the drive again, so I'll leave that for someone else to try.

dannYv3s commented 1 year ago

Did anyone find a solution for this? Generating with Stable Diffusion is beyond slow for me :-( I set: COMMANDLINE_ARGS = --lowvram --precision full --no-half --skip-torch-cuda-test. I have an XFX Radeon RX 470 graphics card, if that helps...

dannYv3s commented 1 year ago

So, I did a bit of research on the issue above, i.e., running Stable Diffusion with an AMD graphics card, and I came across this reddit discussion: https://www.reddit.com/r/Amd/comments/yhh6qi/all_of_a_sudden_i_care_about_compute_when_will/ Apparently, those of us with an AMD graphics card are out of luck, because CUDA only runs on NVIDIA hardware and AMD's stack is different. I found this reddit comment interesting and thought I'd put it out there, because if there's anyone like me who built their computer around gaming specifications yet also uses it for graphic design and AI work, it's important to keep this in mind:

..."Right now Nvidia handily beats AMD outside of the gaming space from top to bottom, and if that's the market you're in, it makes no sense to buy anything but Nvidia"...

sigh, hopefully there'll be a workaround soon; otherwise those of us with AMD cards have no choice but to use the CPU with Stable Diffusion (which takes ages to process, by the way).

EDIT: At the time I built my computer, AI rendering wasn't a thing, so I had absolutely NO IDEA that I should have gone with a different graphics card in the long run... Just something to note for anyone in the same boat as me.

daniandtheweb commented 1 year ago

I have an AMD RX 5700 XT, and with the right configuration the program runs flawlessly (GPU training still isn't completely supported, but that's because it requires xformers, which is CUDA-only). The wiki should be updated, since the info I gathered was spread across other issues all over the repo.

In launch.py, the torch_command variable at line 161 needs to be changed to: torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2") (this makes the script download torch and torchvision builds compatible with AMD cards).

After that I created a small bash script (this will only work on linux):

#!/bin/bash

source venv/bin/activate

HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py --precision full --no-half --listen --medvram --disable-safe-unpickle --enable-insecure-extension-access

I saved this script as executor.sh inside the main folder. I hope that this can help some people.
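Since launch.py reads the install command via os.environ.get('TORCH_COMMAND', ...), the same override can usually be supplied as an environment variable instead of editing the file. A sketch of executor.sh written that way (flags and paths as in the script above; this is a config fragment, adjust to your checkout):

```shell
#!/bin/bash
# Alternative to editing launch.py: override the torch install command via
# the TORCH_COMMAND environment variable that launch.py already reads.
source venv/bin/activate
export TORCH_COMMAND="pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2"
HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py --precision full --no-half --medvram
```

This keeps launch.py unmodified, so future git pulls won't conflict.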

dannYv3s commented 1 year ago

In launch.py, the torch_command variable at line 161 needs to be changed to: torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2") (this makes the script download torch and torchvision builds compatible with AMD cards).

HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py --precision full --no-half --listen --medvram --disable-safe-unpickle --enable-insecure-extension-access

Thank you for responding, and I hope this helps! But I apologize, I don't understand; I'm not a coder, so unfortunately I have no idea how to do this unless the instructions are given in layman's terms.

I also read earlier that you mentioned parts of this only working on Linux; I have Windows 10.

daniandtheweb commented 1 year ago

For Windows 10 this script won't work; it would have to be written as a batch file, and the torch and torchvision part won't work either, since those ROCm builds aren't compatible with Windows. I did a quick search: Microsoft released something called pytorch-directml that could replace the default torch, but I've never tried it. I have a second Windows partition on my PC, so I'll try that torch version later to see if it works.

dannYv3s commented 1 year ago

For Windows 10 this script won't work; it would have to be written as a batch file, and the torch and torchvision part won't work either, since those ROCm builds aren't compatible with Windows. I did a quick search: Microsoft released something called pytorch-directml that could replace the default torch, but I've never tried it. I have a second Windows partition on my PC, so I'll try that torch version later to see if it works.

I went onto Reddit trying to find answers for this, because it is an issue for MANY PEOPLE, including myself, and someone recommended "Stable Diffusion Optimized for AMD RDNA2/RDNA3 GPUs":

https://github.com/nod-ai/SHARK/blob/main/shark/examples/shark_inference/stable_diffusion/stable_diffusion_amd.md

https://nod.ai/shark-rdna3-sd/

Here's a reddit thread talking about those who have used the app/engine: https://www.reddit.com/r/Amd/comments/zkvkbh/stable_diffusion_optimized_for_amd_rdna2rdna3_gpus/

And on Discord I was told that a lot of people use Colab notebooks, but I don't have those links, so I need to do some more research on this.

hendsuuu commented 1 year ago

Installing gfpgan
Installing clip
Installing open_clip
Cloning Stable Diffusion into repositories\stable-diffusion-stability-ai...
Cloning Taming Transformers into repositories\taming-transformers...
Cloning K-diffusion into repositories\k-diffusion...
Cloning CodeFormer into repositories\CodeFormer...
Traceback (most recent call last):
  File "D:\ai2\stable-diffusion-webui\launch.py", line 294, in <module>
    prepare_environment()
  File "D:\ai2\stable-diffusion-webui\launch.py", line 240, in prepare_environment
    git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
  File "D:\ai2\stable-diffusion-webui\launch.py", line 100, in git_clone
    run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
  File "D:\ai2\stable-diffusion-webui\launch.py", line 49, in run
    raise RuntimeError(message)
RuntimeError: Couldn't clone CodeFormer.
Command: "git" clone "https://github.com/sczhou/CodeFormer.git" "repositories\CodeFormer"
Error code: 128
stdout:
stderr: Cloning into 'repositories\CodeFormer'...
error: 4576 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

I found this error; what should I do?

winnie334 commented 1 year ago

Updating my Nvidia driver for my 1050ti solved this problem for me.

sinanisler commented 1 year ago

The script was working fine before I formatted Windows 10. I literally followed the same install path, and now this problem happened :)

My question is: why did my first install work fine, but this one didn't? 🤣🤣

IlhamMHamdi commented 1 year ago

--precision full --no-half

Where do I put this line? I'm sorry, I'm a noob at coding.

sinanisler commented 1 year ago

launch.py

change this: commandline_args = os.environ.get('COMMANDLINE_ARGS', "")

to this: commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test")
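Rather than editing launch.py, the place the repo intends for this flag is the webui-user launcher script. A sketch of the Linux variant (on Windows the equivalent line in webui-user.bat is set COMMANDLINE_ARGS=--skip-torch-cuda-test):

```shell
# webui-user.sh - arguments passed through to launch.py on startup
export COMMANDLINE_ARGS="--skip-torch-cuda-test"
```

Editing the launcher rather than launch.py means the setting survives updates to the repo.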

zazalael commented 1 year ago

Greetings, I hope you can understand me, since I am using a translator. What worked for me I found by chance while trying (without success) to install xformers: I had upgraded Python to 3.10 and it started giving me this error; I went back to 3.8.15 and it worked again without problems, and without the arguments. I hope this is helpful. Greetings.

millennialboomer commented 1 year ago

I have an AMD RX 5700 XT, and with the right configuration the program runs flawlessly (GPU training still isn't completely supported, but that's because it requires xformers, which is CUDA-only). The wiki should be updated, since the info I gathered was spread across other issues all over the repo.

In launch.py, the torch_command variable at line 161 needs to be changed to: torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2") (this makes the script download torch and torchvision builds compatible with AMD cards).

After that I created a small bash script (this will only work on linux):

#!/bin/bash

source venv/bin/activate

HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py --precision full --no-half --listen --medvram --disable-safe-unpickle --enable-insecure-extension-access

I saved this script as executor.sh inside the main folder. I hope that this can help some people.

I still had to add --skip-torch-cuda-test at the end of the executor script. Doesn't this defeat the purpose of editing launch.py? I'm just wondering if it will still be using my CPU instead of my GPU.

paulxinzhou commented 1 year ago

One of the problems I found with NVIDIA cards is that I had CUDA 10.2, and it's incompatible with the install command (the torch_command variable at line 161 of launch.py):

torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")

I needed to change it to:

torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.0+cu102 torchvision==0.13.0+cu102 --extra-index-url https://download.pytorch.org/whl/cu102")

after removing the mismatched packages in venv.

You can use nvcc --version to check your CUDA version.

Some different cuda and pytorch version combinations are available here: https://pytorch.org/get-started/previous-versions/
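The version comparison above boils down to two commands (assuming an NVIDIA driver and, optionally, the CUDA toolkit are installed; both are standard NVIDIA tools):

```shell
# Driver side: the top line of the nvidia-smi header shows the highest
# CUDA version the installed *driver* supports.
nvidia-smi
# Toolkit side: version of the locally installed CUDA *toolkit*, if any.
# The pip wheels bundle their own CUDA runtime, so the driver version is
# usually the one that matters.
nvcc --version
```

Pick the torch wheel whose cuXXX tag is less than or equal to the driver's supported CUDA version.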

crisu-art commented 1 year ago

Hi community, I get this error message. It is slightly different from the one at the top, and I cannot solve it by adding "--skip-torch-cuda-test". I would be grateful for any advice on how to solve it. Thank you.

Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Commit hash:
Traceback (most recent call last):
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 98, in <module>
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'")
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 50, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 44, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\Stable Diffusion\stable-diffusion-webui-master\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU

Nolasaurus commented 1 year ago

Hi community, I get this error message. It is slightly different from the one at the top, and I cannot solve it by adding "--skip-torch-cuda-test". I would be grateful for any advice on how to solve it. Thank you.

Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Commit hash:
Traceback (most recent call last):
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 98, in <module>
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'")
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 50, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\Stable Diffusion\stable-diffusion-webui-master\launch.py", line 44, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\Stable Diffusion\stable-diffusion-webui-master\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU

In which file did you add that line?

sinanisler commented 1 year ago

check my message up there :)

dayndarksecure commented 1 year ago

venv "C:\StableDiffusion\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: c361b89026442f3412162657f330d500b803e052
Traceback (most recent call last):
  File "C:\StableDiffusion\stable-diffusion-webui\launch.py", line 316, in <module>
    prepare_environment()
  File "C:\StableDiffusion\stable-diffusion-webui\launch.py", line 228, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "C:\StableDiffusion\stable-diffusion-webui\launch.py", line 89, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\StableDiffusion\stable-diffusion-webui\launch.py", line 65, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\StableDiffusion\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Press any key to continue . . .

Please guide me.

dspinellis commented 1 year ago

Upgrading the NVIDIA driver from 517.00 to 528.24 solved the problem for me.

ralphmccloud commented 1 year ago

I updated the NVIDIA driver to 528.24 and still have the same problem.

mhgfdsjtfd commented 1 year ago

Any help here? I added --skip-torch-cuda-test to COMMANDLINE_ARGS, but it's still not working.

venv "C:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 226d840e84c5f306350b0681945989b86760e616
Traceback (most recent call last):
  File "C:\AI\stable-diffusion-webui\launch.py", line 360, in <module>
    prepare_environment()
  File "C:\AI\stable-diffusion-webui\launch.py", line 272, in prepare_environment
    run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'")
  File "C:\AI\stable-diffusion-webui\launch.py", line 129, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "C:\AI\stable-diffusion-webui\launch.py", line 105, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\AI\stable-diffusion-webui\venv\Scripts\python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

sevenrats commented 1 year ago

Install CUDA 11 or greater and reboot.

Vazno commented 1 year ago

Here is the fix:

  1. Create your own virtual environment for stable-diffusion-webui (not strictly necessary).

  2. Install everything from requirements.txt except torch, which you need to get from https://download.pytorch.org/whl/torch/ with the matching CUDA version.

In my case (CUDA 11.7 + Windows + 64-bit), I installed torch-1.13.1+cu117-cp310-cp310-win_amd64.whl (cu117 in the name stands for CUDA 11.7).

If you want to quickly find out whether a version is compatible with your system, run:

python -m pip install link_to_whl_file

Make sure you are installing 1.13.1; older versions will not work correctly!

If you just run pip install torch (the default), it will install 1.13.1+cpu, without CUDA support.


To test, run this code:

import torch
print(torch.__version__)
print(torch.cuda.is_available())

It should print out:

1.13.1+cu117 # Your CUDA version
True

plasmaflower commented 1 year ago

The launch.py code extracts the value of the --skip-torch-cuda-test flag at line 250:

sys.argv, skip_torch_cuda_test = extract_arg(sys.argv, '--skip-torch-cuda-test')

After that line I added a new one, skip_torch_cuda_test = True,

and only that let me go on.

Mich-666 commented 1 year ago

Keep in mind that people who still use Windows 7 64-bit can't install anything higher than CUDA 11.4, because NVIDIA drivers are deliberately no longer updated for that OS. So updating to torch 1.13+cu117 will cause this error for everyone on that system, basically locking Win7 users out.

The solution is to uninstall torch and go back to previous version with

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

If, in the future, Automatic1111 drops support for 1.12.1 completely (instead of just warning, as it does right now), Windows 7 users will no longer be able to use this repo at all.

ryuiiji commented 1 year ago

But this still doesn't solve an ensuing issue: by adding --precision full --no-half, your SD will use the CPU instead of the GPU, which reduces performance drastically and defeats the entire purpose.

So the root issue that needs to be addressed is - why is pytorch not detecting the GPU in the first place?

It won't detect AMD GPUs because PyTorch doesn't support them on Windows; AMD GPU support is only tested on Linux, via ROCm (just like CUDA for NVIDIA).

(screenshot: AMD ROCm blog)

So as of now there is no support for AMD GPUs on Windows; it only works on Linux.

https://discuss.pytorch.org/t/how-to-run-torch-with-amd-gpu/157069

https://github.com/pytorch/pytorch/issues/10670#issuecomment-415067548

These might help

So unless ROCm launches on Windows, AMD GPUs are unusable there.

(screenshot: ROCm not available)

I'm trying to use WSL (Windows Subsystem for Linux) [Debian] as a workaround. If it works, I'll update this.
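To tell which GPU backend, if any, a given torch build targets (a minimal sketch; `torch_backend` is a hypothetical helper, and torch.version.hip is only populated on ROCm builds):

```python
def torch_backend():
    """Report which GPU backend, if any, this torch build was compiled for."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if getattr(torch.version, "cuda", None):
        return f"CUDA {torch.version.cuda}"
    if getattr(torch.version, "hip", None):  # set on ROCm builds only
        return f"ROCm/HIP {torch.version.hip}"
    return "CPU-only build"

print(torch_backend())
```

On a working ROCm install under Linux (or WSL, if supported), this should report a ROCm/HIP version rather than "CPU-only build".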

Rabcor commented 1 year ago

Here is the fix:

  1. Create your own virtual environment for stable-diffusion-webui (not strictly necessary).

  2. Install everything from requirements.txt except torch, which you need to get from https://download.pytorch.org/whl/torch/ with the matching CUDA version.

In my case (CUDA 11.7 + Windows + 64-bit), I installed torch-1.13.1+cu117-cp310-cp310-win_amd64.whl (cu117 in the name stands for CUDA 11.7).

If you want to quickly find out whether a version is compatible with your system, run:

python -m pip install link_to_whl_file

Make sure you are installing 1.13.1; older versions will not work correctly!

If you just run pip install torch (the default), it will install 1.13.1+cpu, without CUDA support.

To test, run this code:

import torch
print(torch.__version__)
print(torch.cuda.is_available())

It should print out:

1.13.1+cu117 # Your CUDA version
True

This was really it. Just go to pytorch.org to see the latest version of pytorch and the latest supported CUDA version. Rebuild your native CUDA installation for that version (probably the hardest part, and OS-dependent; on Arch it's fairly easy to just modify the PKGBUILD for the official package), then reinstall pytorch. You can go a bit further than that:

source venv/bin/activate && python -m ensurepip --upgrade && python -m pip install --upgrade pip && python -m pip install -r requirements.txt

to rebuild related packages, and

pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

And that should secure your install.

paulxinzhou commented 1 year ago

@Rabcor

Do you have more details on using the virtual environment? venv or conda? And do you need to modify the launch script, since it seems to create a venv anyway?

VectorZero0 commented 1 year ago

Hello, I'm asking for help running Stable Diffusion on the CPU. I've read the whole conversation, but I don't understand which file to edit so I can use Stable Diffusion without a GPU. I hope for your help, thank you very much!

pettdev commented 1 year ago

Hello, I'm asking for help running Stable Diffusion on the CPU. I've read the whole conversation, but I don't understand which file to edit so I can use Stable Diffusion without a GPU. I hope for your help, thank you very much!

Hi! Of course:

  1. Open the stable-diffusion-webui folder.

  2. Inside the folder, open the file called webui-user.sh with a code editor, such as VSCode, Atom, Brackets, Sublime Text, or even Notepad if you're on Windows.

  3. Once it's open in your editor of choice, go to the following line: # Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"

  4. Below that line you'll find this one: # export COMMANDLINE_ARGS="" and inside the quotes you'll add --precision full --no-half --skip-torch-cuda-test. Don't forget to remove the hash sign (#), leaving no leading space at the start of that line. It should end up like this: export COMMANDLINE_ARGS="--precision full --no-half --skip-torch-cuda-test"

  5. Save and close the file, and continue in the console.

Alternatively, you can add --lowvram or --medvram if you need it to consume less (or more) VRAM, and you can also make these changes in webui-user.bat (if you're on Windows) if editing webui-user.sh doesn't work.

Regards.

pettdev commented 1 year ago

I'm trying to use WSL(windows Subsystem for Linux)[Debian] as a workaround. If it works will update it.

Did using WSL work? Any updates? Thank you.

UoUoio commented 1 year ago

I solved it by updating the graphics driver and setting the default GPU to the dedicated graphics card. Here is the download page for the drivers: http://www.nvidia.com/Download/index.aspx

dezenhando commented 1 year ago

I have an AMD RX 5700 XT, and with the right configuration the program runs flawlessly (GPU training still isn't completely supported, but that's because it requires xformers, which is CUDA-only). The wiki should be updated, since the info I gathered was spread across other issues all over the repo. In launch.py, the torch_command variable at line 161 needs to be changed to: torch_command = os.environ.get('TORCH_COMMAND', "pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.2") (this makes the script download torch and torchvision builds compatible with AMD cards). After that I created a small bash script (this will only work on Linux):

#!/bin/bash

source venv/bin/activate
HSA_OVERRIDE_GFX_VERSION=10.3.0 python3 launch.py --precision full --no-half --listen --medvram --disable-safe-unpickle --enable-insecure-extension-access

I saved this script as executor.sh inside the main folder. I hope this can help some people.

I still had to add --skip-torch-cuda-test at the end of the executor script. Doesn't this defeat the purpose of editing launch.py? I'm just wondering if it will still be using my CPU instead of my GPU.

Hi millennialboomer, I also have an AMD RX 5700 XT and I'm having problems getting SD to use this graphics card. Have you succeeded in using it?

Vladimir0000000000 commented 1 year ago

In launch.py, line 15, change it to commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test"), thereby adding --skip-torch-cuda-test to COMMANDLINE_ARGS as instructed by the error message.

I also had to add --precision full --no-half. HOWEVER, I can't run this on an AMD 5700 XT GPU; it defaults to the CPU only. It looks like many others have the same problem.

And where do I find the file in which to make that replacement? And how do I do it? Please explain.

File "<string>", line 1, in <module> AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to the COMMANDLINE_ARGS variable to disable this check

Here is the fix:

Create your own virtual environment for stable-diffusion-webui (not strictly necessary). Install everything from requirements.txt except torch, which you need to get from https://download.pytorch.org/whl/torch/ with the matching CUDA version. In my case (CUDA 11.7 + Windows + 64-bit) I installed torch-1.13.1+cu117-cp310-cp310-win_amd64.whl (cu117 in the name stands for CUDA 11.7).

If you want to quickly find out whether a version is compatible with your system, run:

python -m pip install link_to_whl_file

Make sure you are installing 1.13.1; older versions will not work correctly!

If you just run pip install torch (the default), it will install 1.13.1+cpu without CUDA support. To test, run this code:

import torch
print(torch.__version__)
print(torch.cuda.is_available())

It should print out:

1.13.1+cu117 # Your CUDA version
True

This was really it. Just go to pytorch.org to see the latest version of pytorch and the latest supported CUDA version. Rebuild your native CUDA installation for that version (probably the hardest part, and OS-dependent; on Arch it's fairly easy to just modify the PKGBUILD for the official package), then reinstall pytorch, though you can go a bit further:

source venv/bin/activate && python -m ensurepip --upgrade && python -m pip install --upgrade pip && python -m pip install -r requirements.txt

to rebuild related packages, and

pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

And that should secure your install.

And where do I find the file in which to make that replacement? And how do I do it? Please explain.

File "<string>", line 1, in <module> AssertionError: Torch is not able to use GPU; add --skip-torch-cuda-test to the COMMANDLINE_ARGS variable to disable this check

micky2be commented 1 year ago

As many have explained, torch needs to be reinstalled with the proper version. In fact, it probably broke after a bad update (which needs to be fixed). What you want to do is make sure you are on the latest update. Restore your launch.py and webui-user.sh to the originals, then delete the venv folder where all the dependencies get installed. When you start the webui again, it will reinstall all dependencies at the right versions. Not ideal, but it should get everything working again.

CHollman82 commented 1 year ago

As many have explained, torch needs to be reinstalled with the proper version. In fact, it probably broke after a bad update (which needs to be fixed). What you want to do is make sure you are on the latest update. Restore your launch.py and webui-user.sh to the originals, then delete the venv folder where all the dependencies get installed. When you start the webui again, it will reinstall all dependencies at the right versions. Not ideal, but it should get everything working again.

Thank you. I updated and suddenly nothing was working; I was getting tons of errors trying to do anything. This fixed it for me.

chaewai commented 1 year ago

I just started getting this error even though I was running SD fine this morning. All I can guess is that updating some extensions broke it...? Or maybe a Windows Update I didn't notice changed my CUDA version? I haven't updated the webui in weeks; the issue is that I have to run on the old torch version, because the new one breaks training for me and constantly runs out of memory, so just updating everything isn't a great fix.

thebreaker42 commented 1 year ago

launch.py

change this: commandline_args = os.environ.get('COMMANDLINE_ARGS', "")

to this: commandline_args = os.environ.get('COMMANDLINE_ARGS', "--skip-torch-cuda-test")

Thank you so much, it worked! Now it's downloading models. It's going to use my CPU instead of my GPU now, right? So what's the difference; will the generating process be slower? I have a Ryzen 5 5600G.

edit: I get RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'; probably it's because I have an AMD GPU. How can I get this to run on my RX 6600 GPU, or force my CPU?

jieming1113 commented 1 year ago

Actually, skipping the torch test will only use the CPU; I don't think that's a good idea. My solution: delete the venv folder and run webui-user.bat from cmd. Then it worked on my computer: Windows 10, RTX 2080 Ti.

GramThanos commented 1 year ago

Actually, skipping the torch test will only use the CPU; I don't think that's a good idea. My solution: delete the venv folder and run webui-user.bat from cmd. Then it worked on my computer: Windows 10, RTX 2080 Ti.

The same approach worked for me. If it was working for you in the past but stopped after an update, it is probably a dependency error, so you can redownload all the Python libraries by:

  1. rename venv folder to venv_old
  2. run again webui-user.bat and wait for all the downloads to complete
  3. check if the problem was solved
  4. delete the venv_old folder

One more thing: make sure you have the CUDA Toolkit installed.
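The four steps above can be sketched as follows (assuming a Linux checkout; on Windows rename the folder in Explorer and run webui-user.bat instead):

```shell
cd stable-diffusion-webui
mv venv venv_old     # keep the old environment until the new one works
./webui.sh           # recreates venv and redownloads all dependencies
# once the UI starts and the GPU is detected:
rm -rf venv_old
```

Renaming rather than deleting first means you can roll back by renaming venv_old back if the rebuild fails.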

UjjwalD77 commented 1 year ago

I'm trying to use WSL(windows Subsystem for Linux)[Debian] as a workaround. If it works will update it.

Any updates?

asrul10 commented 1 year ago

I tried following this guide, and it's running with my AMD GPU: How to Fully Setup Linux To Run AUTOMATIC1111 Stable Diffusion Locally On An AMD GPU

In my case, my machine didn't have AMD ROCm installed yet, so I just installed it by following this guide

leitingx762 commented 1 year ago

So... it can't work on an AMD GPU on Windows? It's only supported on Linux...

TheWingAg commented 1 year ago

Me too. I use an RX 6800 and I followed this guide: "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs" and got it working. You can try it!

antnesswcm commented 1 year ago

I made a mistake like this too. I was initially able to run with the GPU (NVIDIA), but after modifying the parameters to run on the CPU and then resetting them, I was unable to use the GPU again and got the error above (I may also have manually reinstalled a different torch version along the way).

Here is my solution:

  1. Delete torchvision, pytorch-lightning, and torch (and, by the same logic, any related libraries).

  2. Run webui-user.bat from the terminal; it will automatically download the missing libraries with matching versions and source repositories, which is hard to achieve with manual pip installs.

  3. If there are no errors, the model will run normally and the web service will start. Generate an image and check the GPU memory usage.

If you still cannot run on the GPU, try:

  1. Delete the venv directory.

  2. Run webui-user.bat.

Please note that I was initially able to run on the GPU and only accidentally broke the environment; make sure you know what you are doing when applying this solution. Good luck!