hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

qwen1.5-7b-chat: using LongLoRA and, following the paper, adding norm and emb to the fine-tuning parameters raises ValueError: Target module Qwen2RMSNorm() is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`. #3720

Closed: cat-knight closed this issue 4 months ago

cat-knight commented 4 months ago

Reminder

Reproduction

As stated in the title. The command is as follows:

deepspeed --num_gpus 2 src/train_bash.py \
    --deepspeed ./examples/deepspeed/ds_z3_config.json \
    --stage sft \
    --do_train \
    --model_name_or_path /root/autodl-fs/qwen/Qwen1___5-7B-Chat \
    --dataset chat_bi \
    --dataset_dir data \
    --template default \
    --finetuning_type lora \
    --lora_target q_proj,v_proj,norm,emb \
    --output_dir saves/qwen1_5_7B/lora/sft \
    --overwrite_cache \
    --overwrite_output_dir \
    --cutoff_len 8192 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 1 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --warmup_steps 20 \
    --save_steps 500 \
    --eval_steps 500 \
    --evaluation_strategy steps \
    --learning_rate 1e-5 \
    --shift_attn

Expected behavior

No response

System Info

No response

Others

No response

hiyouga commented 4 months ago

LongLoRA only supports LLaMA models. Also, norm should go into additional_target rather than lora_target.
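
For reference, a minimal sketch of a corrected invocation based on the reply above. It assumes the --additional_target option of the LLaMA-Factory release current at the time, which hands the named modules to PEFT as modules_to_save so they are fully trained and saved alongside the LoRA weights (this matches the LongLoRA recipe, which fully trains embeddings and norms instead of applying LoRA to them). The module names norm and embed_tokens are assumptions for Qwen2 and should be verified against the model's named_modules; --shift_attn is dropped because shift short attention (LongLoRA) is implemented only for LLaMA models:

# Only the changed flags are shown; keep the remaining flags
# from the original command, minus --shift_attn.
deepspeed --num_gpus 2 src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path /root/autodl-fs/qwen/Qwen1___5-7B-Chat \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --additional_target norm,embed_tokens \
    --output_dir saves/qwen1_5_7B/lora/sft

The original error arises because PEFT can attach LoRA adapters only to the module types listed in the message (Linear, Embedding, Conv2d, Conv1D), so a Qwen2RMSNorm layer named in --lora_target is rejected; routing it through --additional_target sidesteps the restriction by training the layer in full instead.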