bmaltais / kohya_ss

Apache License 2.0

ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU #1610

Closed abozahran closed 12 months ago

abozahran commented 12 months ago

```
[Dataset 0]
loading image sizes.
100%|██████████████████████████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2046.67it/s]
prepare dataset
prepare accelerator
loading model for process 0/1
load StableDiffusion checkpoint: F:\DB\SDXL.safetensors
building U-Net
loading U-Net from checkpoint
U-Net:
building text encoders
loading text encoders from checkpoint
text encoder 1:
text encoder 2:
building VAE
loading VAE from checkpoint
VAE:
Disable Diffusers' xformers
Enable xformers for U-Net
Traceback (most recent call last):
  File "F:\KOHYA\22.0.1\sdxl_train.py", line 753, in <module>
    train(args)
  File "F:\KOHYA\22.0.1\sdxl_train.py", line 257, in train
    vae.set_use_memory_efficient_attention_xformers(args.xformers)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 251, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 247, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 244, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\diffusers\models\attention_processor.py", line 203, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU

Traceback (most recent call last):
  File "C:\Users\zahran\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\zahran\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "F:\KOHYA\22.0.1\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
    args.func(args)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
    simple_launcher(args)
  File "F:\KOHYA\22.0.1\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['F:\KOHYA\22.0.1\venv\Scripts\python.exe', './sdxl_train.py', '--pretrained_model_name_or_path=F:\DB\SDXL.safetensors', '--train_data_dir=F:\DB\SDXL\SALOMAV1\IMG', '--reg_data_dir=E:\iloveimgconverted\reg\1_PERSON', '--resolution=1024,1024', '--output_dir=F:\DB\SDXL\SALOMAV1', '--logging_dir=F:\DB\SDXL\SALOMAV1\LOG', '--save_model_as=safetensors', '--full_bf16', '--output_name=SALOMAV1-DB-SDXL-V1', '--lr_scheduler_num_cycles=8', '--max_data_loader_n_workers=0', '--learning_rate=1e-05', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=11520', '--save_every_n_epochs=1', '--mixed_precision=bf16', '--save_precision=bf16', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Adafactor', '--optimizer_args', 'scale_parameter=False', 'relative_step=False', 'warmup_init=False', 'weight_decay=0.01', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--gradient_checkpointing', '--xformers', '--bucket_no_upscale', '--noise_offset=0.0']' returned non-zero exit status 1.
```
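Before touching any training settings, it can help to confirm what the error message claims: that the PyTorch installed inside the venv cannot see a CUDA device. A minimal diagnostic sketch (run it with the venv's `python.exe`, not the system Python):

```python
# Diagnostic sketch: check whether the installed torch is a CUDA build
# and whether it can currently see a GPU. If is_available() is False,
# xformers' memory-efficient attention cannot be enabled and sdxl_train.py
# will fail exactly as in the traceback above.
import torch

def cuda_status():
    """Return (is_available, build_cuda_version) for the installed torch.

    build_cuda_version is None when a CPU-only wheel is installed, in which
    case reinstalling a CUDA-enabled torch wheel is the fix rather than any
    accelerate setting.
    """
    return torch.cuda.is_available(), torch.version.cuda

if __name__ == "__main__":
    available, build_cuda = cuda_status()
    print(f"torch.cuda.is_available(): {available}")
    print(f"torch built against CUDA:  {build_cuda}")
    if available:
        print(f"device 0: {torch.cuda.get_device_name(0)}")
```

If this prints `False` with a non-None CUDA build version, the problem is usually the driver or the process configuration (as in the fix below in this thread); if the build version is `None`, the venv simply has a CPU-only torch wheel.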

abozahran commented 12 months ago

I fixed the problem: it turned out to be the GPU setting. I re-ran `accelerate config`, and at the "what GPU(s) should be used" prompt I typed `all` (in lower-case).
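For reference, the answer given to `accelerate config` is persisted to accelerate's YAML config file, so the same fix can be made by editing that file directly. This is a sketch of what the relevant entries typically look like; the exact path and set of keys vary by accelerate version:

```yaml
# Typically ~/.cache/huggingface/accelerate/default_config.yaml
# (on Windows, under %USERPROFILE%\.cache\huggingface\accelerate\)
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
gpu_ids: all            # the answer to the "what GPU(s) to use" prompt
mixed_precision: bf16   # should match the training flags (--mixed_precision=bf16)
num_processes: 1
use_cpu: false          # if this is true, training is forced onto the CPU
```

A stale config with `use_cpu: true` (or an invalid `gpu_ids` value) is a common way to hit the `torch.cuda.is_available() should be True but is False` error even on a machine with a working GPU.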

sarojkumarss commented 11 months ago

Thanks @abozahran, I used this method and it works for me.