derrian-distro / LoRA_Easy_Training_Scripts

A UI made in PySide6 to make training LoRA/LoCon and other LoRA-type models in sd-scripts easy
GNU General Public License v3.0

Failed to train because of error #161

Closed · Poliwhirl0 closed this 7 months ago

Poliwhirl0 commented 10 months ago

```
[Dataset 0]
loading image sizes.
100%|████████████████████████████████████████████████████████████████████████████████| 45/45 [00:00<00:00, 3759.91it/s]
prepare dataset
preparing accelerator
loading model for process 0/1
load StableDiffusion checkpoint: E:/AI related/stable diffusion/sd-webui-aki/sd-webui-aki-v4.2/sd-webui-aki-v4.2/models/Stable-diffusion/Stable Diffusionv1-5-pruned.ckpt
UNet2DConditionModel: 64, 8, 768, False, False
loading u-net:
loading vae:
loading text encoder:
Enable xformers for U-Net
Traceback (most recent call last):
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\train_network.py", line 990, in
    trainer.train(args)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\train_network.py", line 222, in train
    vae.set_use_memory_efficient_attention_xformers(args.xformers)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 227, in set_use_memory_efficient_attention_xformers
    fn_recursive_set_mem_eff(module)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 223, in fn_recursive_set_mem_eff
    fn_recursive_set_mem_eff(child)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\modeling_utils.py", line 220, in fn_recursive_set_mem_eff
    module.set_use_memory_efficient_attention_xformers(valid, attention_op)
  File "E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\lib\site-packages\diffusers\models\attention_processor.py", line 200, in set_use_memory_efficient_attention_xformers
    raise ValueError(
ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU
Failed to train because of error: Command '['E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\Scripts\python.exe', 'sd_scripts\train_network.py', '--config_file=runtime_store\config.toml', '--dataset_config=runtime_store\dataset.toml']' returned non-zero exit status 1.
```
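
The decisive line is the final ValueError: the torch build inside the trainer's own venv reports no CUDA device, so diffusers refuses to enable xformers' memory-efficient attention. A minimal check, as a sketch that reuses the venv interpreter path from the traceback, is to ask that torch directly:

```
:: Sketch: ask the venv's own torch whether it is a CUDA build and can actually see the GPU.
:: "None"/"False" (or a version ending in "+cpu") means a CPU-only wheel is installed in this venv.
"E:\AI related\LoRA_Easy_Training_Scripts-SDXL\sd_scripts\venv\Scripts\python.exe" -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```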

Jelosus2 commented 10 months ago

You may want to check this Stack Overflow article; it may help.
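
The linked article is not preserved in this copy of the thread; the usual fix in this situation is to replace a CPU-only torch wheel with a CUDA build inside the trainer's venv. A rough sketch only: the cu118 index URL and the unpinned package versions below are assumptions, not something taken from this thread, so match them to whatever the repo's installer expects.

```
:: Run from the sd_scripts folder with the venv activated.
:: cu118 and the unpinned versions are assumptions; pin them to what the installer expects.
venv\Scripts\activate
pip uninstall -y torch torchvision xformers
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install xformers
```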

Poliwhirl0 commented 10 months ago

> You may want to check this Stack Overflow article; it may help.

I tried Kohya_ss and it worked fine, so I assume it's not a version problem, since this script is based on kohya. Thanks though.

Jelosus2 commented 10 months ago

> > You may want to check this Stack Overflow article; it may help.
>
> I tried Kohya_ss and it worked fine, so I assume it's not a version problem, since this script is based on kohya. Thanks though.

Np. Also, to clarify: kohya's repo holds the training scripts, and bmaltais's repo is the GUI.

Woisek commented 10 months ago

> Np. Also, to clarify: kohya's repo holds the training scripts, and bmaltais's repo is the GUI.

Bernard Maltais is the user who developed the kohya_ss GUI ...

Jelosus2 commented 10 months ago

> > Np. Also, to clarify: kohya's repo holds the training scripts, and bmaltais's repo is the GUI.
>
> Bernard Maltais is the user who developed the kohya_ss GUI ...

Yeah, but people usually say "bmaltais' trainer", like they say "derrian's trainer" for this one xD