ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

LoRA fine-tuning: train loss decreases but eval loss stays constant #856

Closed ZoeyChen-lab closed 11 months ago

ZoeyChen-lab commented 11 months ago

Pre-submission checklist

Issue type

Model training and fine-tuning

Base model

LLaMA-13B

Operating system

Linux

Detailed description of the problem

When I first started the run, the loss had no gradient, so training would only run after I added loss = loss.requires_grad_() in the model's forward pass. During training, however, the train loss decreases while the eval loss never changes. Has anyone run into a similar problem?
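As a side note, a loss with requires_grad=False usually means no trainable parameter participated in the forward pass; calling loss.requires_grad_() can silence the error while the LoRA weights still receive no gradient updates, which would also be consistent with a flat eval loss. A minimal sketch (assuming model is the PeftModel returned by get_peft_model) for checking this before training:

```python
# Verify that the LoRA adapter weights are actually trainable instead of
# forcing requires_grad on the loss tensor.

# peft's built-in summary of trainable vs. total parameter counts
model.print_trainable_parameters()

# If the trainable count is 0, the LoRA layers never attached to the
# intended modules; listing the trainable tensors makes this visible.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} trainable tensors, e.g. {trainable[:3]}")

# A loss that prints requires_grad=False here means no trainable parameter
# was part of the graph; requires_grad_() on the loss would only mask that.
# outputs = model(**batch)
# print(outputs.loss.requires_grad)
```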

Dependencies (please provide for code-related issues)

peft 0.5.0
torch 2.1.0
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.28.1

Run logs or screenshots

The following code was added on top of the original code:

```python
lora_config = LoraConfig(
    peft_type="LORA",
    task_type="SEQ_2_SEQ_LM",
    r=8,
    lora_alpha=32,
    target_modules=["q", "v"],
    lora_dropout=0.01,
    inference_mode=False,
)
model = get_peft_model(model, lora_config)

for name, para in model.named_parameters():
    if "lora" in name:
        para.requires_grad_(True)
    else:
        para.requires_grad_(False)
```
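For reference, in Hugging Face's LLaMA implementation the attention projections are named q_proj/k_proj/v_proj/o_proj, and a decoder-only model is normally fine-tuned with task_type="CAUSAL_LM" rather than "SEQ_2_SEQ_LM". The sketch below is only an illustrative configuration along those lines, not this repo's official settings:

```python
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative LoRA config matching LLaMA's module naming; hyperparameters
# are placeholders, not recommendations.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,         # decoder-only LM
    r=8,
    lora_alpha=32,
    lora_dropout=0.01,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    inference_mode=False,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report a non-zero trainable count
```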

```json
"log_history": [
  { "epoch": 0.69, "learning_rate": 5e-05, "loss": 8.2967, "step": 500 },
  { "epoch": 1.0,  "eval_loss": 6.520833492279053, "eval_runtime": 6.7135, "eval_samples_per_second": 1.341, "eval_steps_per_second": 0.149, "step": 720 },
  { "epoch": 1.39, "learning_rate": 5e-05, "loss": 8.2952, "step": 1000 },
  { "epoch": 2.0,  "eval_loss": 6.520833492279053, "eval_runtime": 8.9157, "eval_samples_per_second": 1.009, "eval_steps_per_second": 0.112, "step": 1440 },
  { "epoch": 2.08, "learning_rate": 5e-05, "loss": 8.2847, "step": 1500 },
  { "epoch": 2.78, "learning_rate": 5e-05, "loss": 8.298,  "step": 2000 },
  { "epoch": 3.0,  "eval_loss": 6.520833492279053, "eval_runtime": 6.7706, "eval_samples_per_second": 1.329, "eval_steps_per_second": 0.148, "step": 2160 }
]
```

github-actions[bot] commented 11 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 11 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.