echonoshy / cgft-llm

Practice to LLM.
MIT License

Encountered an error message in a multi-GPU environment #3

Closed schzyf closed 4 months ago

schzyf commented 5 months ago

[WARNING|logging.py:329] 2024-06-14 18:45:29,004 >> Not an error, but Unsloth cannot patch MLP layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
[WARNING|logging.py:329] 2024-06-14 18:45:29,004 >> Not an error, but Unsloth cannot patch Attention layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
[WARNING|logging.py:329] 2024-06-14 18:45:29,004 >> Not an error, but Unsloth cannot patch O projection layer with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
[WARNING|logging.py:329] 2024-06-14 18:45:29,005 >> Unsloth 2024.6 patched 32 layers with 0 QKV layers, 0 O layers and 0 MLP layers.
Not an error, but Unsloth cannot patch MLP layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
Not an error, but Unsloth cannot patch Attention layers with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
Not an error, but Unsloth cannot patch O projection layer with our manual autograd engine since either LoRA adapters are not enabled or a bias term (like in Qwen) is used.
Unsloth 2024.6 patched 32 layers with 0 QKV layers, 0 O layers and 0 MLP layers.
06/14/2024 18:45:29 - INFO - llamafactory.model.loader - trainable params: 3407872 || all params: 8033669120 || trainable%: 0.0424
[INFO|trainer.py:641] 2024-06-14 18:45:29,957 >> Using auto half precision backend
[WARNING|logging.py:329] 2024-06-14 18:45:30,297 >> * Our OSS was designed for people with few GPU resources to level the playing field.

echonoshy commented 5 months ago

This looks like it is caused by an incompatibility with unsloth. Don't use unsloth as the optimization when you train.

schzyf commented 4 months ago

In the multi-GPU environment, I changed use_unsloth to false and training works normally now.
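
For reference, here is a minimal sketch of where that flag lives in a LLaMA-Factory LoRA training config. The model, dataset, template, and output paths below are illustrative placeholders, not values taken from this thread; only the `use_unsloth: false` line is the fix discussed above.

```yaml
### minimal LoRA SFT config sketch for `llamafactory-cli train <config>.yaml`
### (model/dataset/output values are placeholders; adapt to your setup)
model_name_or_path: path/to/your/base-model   # placeholder
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: your_dataset_name                    # placeholder
template: qwen                                # match your base model's template
output_dir: saves/lora-sft                    # placeholder
per_device_train_batch_size: 1
num_train_epochs: 3.0
bf16: true

### the key line for this issue: disable Unsloth when training on multiple GPUs
use_unsloth: false
```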