ymcui / Chinese-LLaMA-Alpaca-2

中文LLaMA-2 & Alpaca-2大模型二期项目 + 64K超长上下文模型 (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

Question about labels during model pre-training #565

Closed: ybch14 closed this issue 1 month ago

ybch14 commented 2 months ago

Required pre-submission checklist

Issue type

Model training and fine-tuning

Base model

Chinese-LLaMA-2 (7B/13B)

Operating system

Linux

Detailed description of the problem

In scripts/training/run_clm_pt_with_peft.py, Line 502, result["labels"] = result["input_ids"].copy() makes the labels identical to input_ids. But pre-training should predict the next token, so shouldn't the labels be shifted right by one position? Or is the one-position shift already implemented in transformers or PEFT, which is why the code is written this way? Thanks!
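For context, Hugging Face transformers causal-LM models (including the LLaMA family) perform this shift internally in their forward pass: the loss compares the logits at positions 0..n-2 against the labels at positions 1..n-1, so passing labels identical to input_ids is the expected usage. A minimal sketch of that alignment, using a hypothetical helper name and plain Python lists rather than the library's tensor code:

```python
# Sketch (assumption): mirrors the label shift that transformers'
# causal-LM loss performs internally; shift_for_causal_lm is a
# hypothetical illustrative helper, not a library function.

def shift_for_causal_lm(input_ids, labels):
    """Return (scored positions, next-token targets) after the shift."""
    shift_inputs = input_ids[:-1]  # positions whose logits are scored
    shift_labels = labels[1:]      # the "next token" each position must predict
    return shift_inputs, shift_labels

input_ids = [101, 102, 103, 104]
labels = input_ids.copy()          # same pattern as in run_clm_pt_with_peft.py
ctx, tgt = shift_for_causal_lm(input_ids, labels)
# The token at position 0 (101) is scored against target 102, and so on;
# the last token has no target, and no shifting is needed in the data pipeline.
```

Because the model does this shift itself, shifting the labels again in the preprocessing code would misalign predictions and targets by one token.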

Dependencies (required for code-related issues)

# Paste your dependency information here (inside this code block)

Run logs or screenshots

# Paste your run logs here (inside this code block)

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 1 month ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.