ymcui / Chinese-LLaMA-Alpaca-3

Phase 3 of the Chinese LLaMA & Alpaca large model project (Chinese Llama-3 LLMs), developed from Meta Llama 3
Apache License 2.0

During fine-tuning, if the input prompt is longer than max_seq_length, the EOS token at the end of the prompt is dropped #76

Closed: seal-wang closed this issue 2 months ago

seal-wang commented 3 months ago

The following items must be checked before submitting

Issue type

Model training and fine-tuning

Base model

None

Operating system

Linux

Detailed description of the problem

Here https://github.com/ymcui/Chinese-LLaMA-Alpaca-3/blob/f8df26ea288ad9675ee99a369cd49aee37c817bd/scripts/training/build_dataset.py#L32 an EOS token is appended, but when the prompt is longer than max_seq_length, the truncation here https://github.com/ymcui/Chinese-LLaMA-Alpaca-3/blob/f8df26ea288ad9675ee99a369cd49aee37c817bd/scripts/training/build_dataset.py#L44 drops that EOS token.

Because of this, the model cannot learn a proper end-of-sequence token during fine-tuning, which shows up at generation time as output that never stops. Chinese-LLaMA-Alpaca-1 and Chinese-LLaMA-Alpaca-2 have the same issue; the reason the released models still generate correctly is probably that max_seq_length is large enough to cover the length of most fine-tuning samples. Have you noticed this problem, or am I misunderstanding something?
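
For concreteness, here is a minimal, self-contained sketch of the two truncation orders. It is not the repository's actual build_dataset.py code; the function names, `eos_id`, `max_seq_length`, and the suggested workaround are illustrative assumptions only.

```python
# Minimal sketch of the behaviour described above. NOT the repository's actual
# code; function names, eos_id, max_seq_length, and the workaround are assumptions.

IGNORE_INDEX = -100  # label value commonly used to mask prompt tokens from the loss


def build_example_truncate_last(source_ids, target_ids, eos_id, max_seq_length):
    """Append EOS first, then truncate (the pattern the issue describes).

    Whenever len(source_ids) + len(target_ids) + 1 > max_seq_length, the
    trailing EOS token is silently cut off, so this sample never teaches
    the model where to stop.
    """
    input_ids = (source_ids + target_ids + [eos_id])[:max_seq_length]
    labels = ([IGNORE_INDEX] * len(source_ids) + target_ids + [eos_id])[:max_seq_length]
    return input_ids, labels


def build_example_keep_eos(source_ids, target_ids, eos_id, max_seq_length):
    """One possible workaround: truncate first, then re-append EOS.

    Every training sample then ends with EOS, at the cost of losing one
    more content token when the pair is over-length.
    """
    input_ids = (source_ids + target_ids)[: max_seq_length - 1] + [eos_id]
    labels = ([IGNORE_INDEX] * len(source_ids) + target_ids)[: max_seq_length - 1] + [eos_id]
    return input_ids, labels


if __name__ == "__main__":
    # Tiny demonstration with fake token ids.
    src, tgt, eos, max_len = [1, 2, 3], [4, 5, 6], 99, 6
    print(build_example_truncate_last(src, tgt, eos, max_len))  # EOS (99) is dropped
    print(build_example_keep_eos(src, tgt, eos, max_len))       # sequence ends with 99
```

The demonstration shows the difference: with the first order the over-length sample loses its EOS entirely, while truncating before appending EOS guarantees every sample ends with it.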

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 2 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.