d8ahazard / sd_dreambooth_extension


[Bug]: ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU #1362

Closed OsmondFan closed 9 months ago

OsmondFan commented 9 months ago

Is there an existing issue for this?

What happened?

Dreambooth asks for xformers to be added to the command-line configuration at program start (in webui-user.sh). However, the program continuously returns either `No module 'xformers'` (when the flag is omitted from the command line) or `ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU` (when the flag is added).

Steps to reproduce the problem

  1. With Xformers: go to webui-user.sh, change the command line args to export COMMANDLINE_ARGS="--xformers --skip-torch-cuda-test --precision full --no-half --medvram --opt-sub-quad-attention". Then reboot Automatic.
  2. Without Xformers: go to webui-user.sh, change the command line args to export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half --medvram --opt-sub-quad-attention". Then reboot Automatic.

Commit and libraries

[+] xformers version 0.0.22 installed.
[+] torch version 2.0.1 installed.
[+] torchvision version 0.15.2 installed.
[+] accelerate version 0.23.0 installed.
[+] diffusers version 0.21.4 installed.
[+] transformers version 4.32.1 installed.
[+] bitsandbytes version 0.41.1 installed.

Launching Web UI with arguments: --xformers --skip-torch-cuda-test --precision full --no-half --medvram
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled

Command Line Arguments

export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half --medvram --opt-sub-quad-attention"

or

export COMMANDLINE_ARGS="--xformers --skip-torch-cuda-test --precision full --no-half --medvram --opt-sub-quad-attention"

Console logs

Loading pipeline components...: 100%|█████████████| 7/7 [00:02<00:00,  3.09it/s]
Traceback (most recent call last):
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 730, in start_training
    result = main(class_gen_method=class_gen_method)
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1809, in main
    return inner_loop()
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 279, in inner_loop
    count, instance_prompts, class_prompts = generate_classifiers(
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/utils/gen_utils.py", line 168, in generate_classifiers
    builder = ImageBuilder(
  File "/Users/osmond/Desktop/stable-diffusion-webui/extensions/sd_dreambooth_extension/helpers/image_builder.py", line 110, in __init__
    self.image_pipe.enable_xformers_memory_efficient_attention()
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1752, in enable_xformers_memory_efficient_attention
    self.set_use_memory_efficient_attention_xformers(True, attention_op)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1778, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1768, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 251, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 244, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "/Users/osmond/anaconda3/envs/sd/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 203, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU 
Generating class images 0/10::   0%|                     | 0/10 [00:02<?, ?it/s]
Duration: 00:00:03
[2023-10-03 18:55:59,697][DEBUG][dreambooth.utils.model_utils] - Restored system models.
Duration: 00:00:04

Additional information

This problem seems to occur on any macOS device. Right now I am using Mac Studio M1 Ultra 48 Cores and 64Gb RAM.
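The failure mode in the traceback amounts to a missing guard: `ImageBuilder.__init__` calls `enable_xformers_memory_efficient_attention()` unconditionally, but diffusers raises a `ValueError` whenever `torch.cuda.is_available()` is False, which is always the case on Apple Silicon. A minimal sketch of the check the extension would need before enabling xformers (the helper name `can_use_xformers` is hypothetical, and the inputs are plain booleans so the sketch does not depend on a particular torch build):

```python
def can_use_xformers(cuda_available: bool, xformers_installed: bool) -> bool:
    """xformers' memory-efficient attention is CUDA-only, so both conditions
    must hold before calling pipe.enable_xformers_memory_efficient_attention()."""
    return cuda_available and xformers_installed


# On Apple Silicon, torch.cuda.is_available() is False (only the MPS backend
# exists), so the guard skips xformers instead of raising the ValueError:
print(can_use_xformers(cuda_available=False, xformers_installed=True))  # False
```

With a guard like this, a macOS run would fall back to the pipeline's default attention processor during class-image generation rather than crashing, since MPS has no xformers support.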

github-actions[bot] commented 9 months ago

This issue is stale because it has been open 5 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.

eleijonmarck commented 8 months ago

@OsmondFan did you ever find a fix for this?