PaddlePaddle / PaddleMIX

Paddle Multimodal Integration and eXploration: a toolkit for mainstream multimodal tasks, including end-to-end large-scale multimodal pretrained models and a diffusion model toolbox, built for high performance and flexibility.
Apache License 2.0

[Training error] RuntimeError: (NotFound) The kernel `memory_efficient_attention` is not registered. #313

Closed by Olive-2019 9 months ago

Olive-2019 commented 10 months ago

Environment

AI Studio, paddlepaddle-gpu==2.5.2.post102, Python 3.8, cuDNN 7.6

Code executed

Create a train.sh in the directory mentioned above with the following content:

export FLAG_FUSED_LINEAR=0
export FLAGS_conv_workspace_size_limit=4096
# whether to enable EMA
export FLAG_USE_EMA=0
# whether to enable recompute
export FLAG_RECOMPUTE=1
# whether to enable xformers
export FLAG_XFORMERS=1

# if using a custom dataset
# FILE_LIST=./processed_data/filelist/custom_dataset.filelist.list
# if using the laion400m_demo dataset, uncomment the line below
FILE_LIST=./data/filelist/train.filelist.list

python -u train_txt2img_laion400m_trainer.py \
    --do_train \
    --output_dir ./laion400m_pretrain_output_trainer \
    --per_device_train_batch_size 32 \
    --gradient_accumulation_steps 1 \
    --learning_rate 1e-4 \
    --weight_decay 0.01 \
    --max_steps 200000 \
    --lr_scheduler_type "constant" \
    --warmup_steps 0 \
    --image_logging_steps 1000 \
    --logging_steps 10 \
    --resolution 256 \
    --save_steps 10000 \
    --save_total_limit 20 \
    --seed 23 \
    --dataloader_num_workers 4 \
    --vae_name_or_path CompVis/stable-diffusion-v1-4/vae \
    --text_encoder_name_or_path CompVis/stable-diffusion-v1-4/text_encoder \
    --unet_name_or_path ./sd/unet_config.json \
    --file_list ${FILE_LIST} \
    --model_max_length 77 \
    --max_grad_norm -1 \
    --disable_tqdm True \
    --bf16 False

Run it with sh train.sh

Error message

Traceback (most recent call last):
  File "train_txt2img_laion400m_trainer.py", line 123, in <module>
    main()
  File "train_txt2img_laion400m_trainer.py", line 116, in main
    trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddlenlp/trainer/trainer.py", line 795, in train
    tr_loss_step = self.training_step(model, inputs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddlenlp/trainer/trainer.py", line 1719, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/aistudio/PaddleMIX/ppdiffusers/examples/stable_diffusion/sd/sd_trainer.py", line 250, in compute_loss
    loss = model(**inputs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1254, in __call__
    return self.forward(*inputs, **kwargs)
  File "/home/aistudio/PaddleMIX/ppdiffusers/examples/stable_diffusion/sd/model.py", line 143, in forward
    latents = self.vae.encode(pixel_values).latent_dist.sample()
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/models/autoencoder_kl.py", line 249, in encode
    h = self.encoder(x)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1254, in __call__
    return self.forward(*inputs, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/models/vae.py", line 144, in forward
    sample = self.mid_block(sample)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1254, in __call__
    return self.forward(*inputs, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/models/unet_2d_blocks.py", line 550, in forward
    hidden_states = attn(hidden_states, temb=temb)
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/nn/layer/layers.py", line 1254, in __call__
    return self.forward(*inputs, **kwargs)
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/models/attention_processor.py", line 282, in forward
    return self.processor(
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/models/attention_processor.py", line 956, in __call__
    hidden_states = F.scaled_dot_product_attention_(
  File "/home/aistudio/.local/lib/python3.8/site-packages/ppdiffusers/patches/ppnlp_patch_utils.py", line 418, in scaled_dot_product_attention_
    output = memory_efficient_attention(
  File "/home/aistudio/.local/lib/python3.8/site-packages/paddle/incubate/nn/memory_efficient_attention.py", line 103, in memory_efficient_attention
    output, logsumexp, seed_and_offset = _C_ops.memory_efficient_attention(
RuntimeError: (NotFound) The kernel `memory_efficient_attention` is not registered.
  [Hint: Expected iter != kernels_.end(), but received iter == kernels_.end().] (at ../paddle/phi/core/kernel_factory.cc:174)
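
For reference, the failing call can be reproduced outside the trainer with a minimal probe of the same API shown in the traceback. This is a sketch only: it assumes the public memory_efficient_attention(query, key, value) entry point from paddle.incubate.nn.memory_efficient_attention and a [batch, seq_len, num_heads, head_dim] tensor layout, which should be verified against the installed Paddle version.

python - <<'PY'
# Minimal probe of the API that fails in the traceback above (a sketch; the
# [batch, seq_len, num_heads, head_dim] layout is an assumption to verify).
import paddle
from paddle.incubate.nn.memory_efficient_attention import memory_efficient_attention

q = paddle.rand([1, 16, 8, 64]).astype("float16")
k = paddle.rand([1, 16, 8, 64]).astype("float16")
v = paddle.rand([1, 16, 8, 64]).astype("float16")
try:
    out = memory_efficient_attention(q, k, v)
    print("memory_efficient_attention is available, output shape:", out.shape)
except RuntimeError as err:
    # Wheels compiled without this kernel raise the same
    # "(NotFound) The kernel `memory_efficient_attention` is not registered."
    print("kernel not registered:", err)
PY

If this probe fails, the installed wheel itself was built without the kernel, which points at the Paddle CUDA build rather than anything in the training script.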
JunnYu commented 9 months ago

Hi, please use paddlepaddle-gpu==2.5.2.post117 (the CUDA 11.7 build) of Paddle; upgrading to CUDA 11.2 or above is recommended.
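
A sketch of switching to the CUDA 11.7 build; the wheel tag (post117) and the find-links URL below follow the pattern from the PaddlePaddle install docs and should be double-checked against the current docs for your platform.

python -m pip uninstall -y paddlepaddle-gpu
python -m pip install paddlepaddle-gpu==2.5.2.post117 \
    -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html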

Olive-2019 commented 9 months ago

Hi, please use paddlepaddle-gpu==2.5.2.post117 (the CUDA 11.7 build) of Paddle; upgrading to CUDA 11.2 or above is recommended.

Thanks for the reply, but the CUDA 11.7 build raises a different error: the cuDNN version is incompatible. The cuDNN on AI Studio is 7.x, while the CUDA 11.2+ builds require cuDNN 8+, and installing cuDNN 8+ on AI Studio requires sudo privileges.
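
A quick way to compare what the installed wheel was built against with what the machine provides, as a sketch: paddle.version.cuda() and paddle.version.cudnn() report the toolkit versions the wheel was compiled with, while the cuDNN header locations below are typical paths that may differ on AI Studio.

# CUDA/cuDNN versions the installed Paddle wheel was built against
python -c "import paddle; print(paddle.version.full_version, paddle.version.cuda(), paddle.version.cudnn())"
# driver and GPU visible to the environment
nvidia-smi | head -n 5
# cuDNN headers on disk (cuDNN 7 uses cudnn.h, cuDNN 8+ adds cudnn_version.h)
grep "CUDNN_MAJOR" /usr/include/cudnn*.h /usr/local/cuda/include/cudnn*.h 2>/dev/null | head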

JunnYu commented 9 months ago

Hi, the current environments on AI Studio should all be Python 3.10 with CUDA 11.8 or above, I think? You could try asking how to switch to one of those.

Olive-2019 commented 9 months ago

Someone experienced in the user group provided the solution: select the CUDA version when creating the project.
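
After recreating the project with a CUDA version that matches the wheel, a quick sanity check of the new environment might look like the following; paddle.utils.run_check() is part of the public API and verifies that the GPU build runs a small program end to end.

python -c "import paddle; paddle.utils.run_check(); print(paddle.version.cuda(), paddle.version.cudnn())"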