bmaltais / kohya_ss


did you forget to build xformers #761

Closed — jxzhang789 closed this 8 months ago

jxzhang789 commented 1 year ago

When starting to train a LoRA, I got the following error:

```
Traceback (most recent call last):
  File "/home/ubuntu/lora/kohya_ss/train_network.py", line 773, in <module>
    train(args)
  File "/home/ubuntu/lora/kohya_ss/train_network.py", line 605, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/accelerate/utils/operations.py", line 495, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 381, in forward
    sample, res_samples = downsample_block(
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 612, in forward
    hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/diffusers/models/attention.py", line 216, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/diffusers/models/attention.py", line 484, in forward
    hidden_states = self.attn1(norm_hidden_states) + hidden_states
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/lora/kohya_ss/library/train_util.py", line 1845, in forward_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)  # 最適なのを選んでくれる (picks the best available op)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/xformers/ops.py", line 865, in memory_efficient_attention
    return op.apply(query, key, value, attn_bias, p).reshape(output_shape)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/xformers/ops.py", line 319, in forward
    out, lse = cls.FORWARD_OPERATOR(
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/xformers/ops.py", line 46, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with python setup.py develop?

steps:   0%|          | 0/2900 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubuntu/lora/kohya_ss/venv/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 923, in launch_command
    simple_launcher(args)
  File "/home/ubuntu/lora/kohya_ss/venv/lib/python3.10/site-packages/accelerate/commands/launch.py", line 579, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ubuntu/lora/kohya_ss/venv/bin/python', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=/home/ubuntu/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors', '--train_data_dir=/home/ubuntu/lora/PFM_0505/image', '--resolution=512,512', '--output_dir=/home/ubuntu/lora/PFM_0505/model', '--logging_dir=/home/ubuntu/lora/PFM_0505/log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-05', '--unet_lr=0.0001', '--network_dim=8', '--output_name=last', '--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=290', '--train_batch_size=1', '--max_train_steps=2900', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--optimizer_type=AdamW8bit', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
```
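A quick way to tell whether the venv's xformers wheel actually ships its compiled CUDA operators (the missing xformers::efficient_attention_forward_cutlass above) is a check along these lines. This is a minimal sketch assuming the venv path from the traceback; `python -m xformers.info` is available in recent xformers releases.

```bash
# Minimal diagnostic sketch (assumes the venv path shown in the traceback).
source /home/ubuntu/lora/kohya_ss/venv/bin/activate

# Print the torch / CUDA / xformers versions actually installed in the venv.
python -c "import torch, xformers; print(torch.__version__, torch.version.cuda, xformers.__version__)"

# List which memory-efficient attention operators this xformers build provides
# (module available in recent xformers releases).
python -m xformers.info
```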

bmaltais commented 1 year ago

Strange. Is this from the latest release? Try re-running setup.bat again
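The paths in the traceback are under /home/ubuntu, so this looks like a Linux install; setup.bat is the Windows entry point. A hedged sketch of the equivalent re-setup on Linux, assuming the repo's setup.sh script:

```bash
# Sketch for a Linux checkout (setup.bat is Windows-only); assumes the repo's
# setup.sh rebuilds the venv dependencies, including torch and xformers.
cd /home/ubuntu/lora/kohya_ss
git pull
./setup.sh
```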

Changesong commented 1 year ago

I have the same problem. I ran it on a Linux server.

Beyond5229 commented 1 year ago

I had the same error. I solved it with the following steps:

```bash
cd ./venv/bin
./pip install xformers==0.0.19   # this command will update torch to 2.0.0

# For CUDA 11.7:
./pip install torch==2.0.0+cu117 torchvision==0.15.1+cu117 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu117
```

See https://pytorch.org/get-started/previous-versions/ for the matching torch versions.

lihuihui-bj commented 1 year ago

Re-ran setup.bat and still got the error. It seems there is a version conflict: xformers==0.0.19 needs torch==2.0.0, but torchvision 0.15.2+cu118 requires torch==2.0.1.
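If staying on torch 2.0.1 / torchvision 0.15.2+cu118 is preferred over pinning torch back to 2.0.0, one way around the conflict is to install an xformers build that targets torch 2.0.1 instead. The sketch below assumes xformers 0.0.20 is that build; check the xformers release notes before pinning.

```bash
# Sketch: keep torch 2.0.1 and move xformers forward instead of torch backward.
# Assumes xformers 0.0.20 pins torch==2.0.1 (verify against the release notes).
source ./venv/bin/activate
pip install xformers==0.0.20
pip check   # confirm no remaining torch/torchvision/xformers version conflicts
```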