Closed: xzqxnet0990 closed this issue 1 year ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Is the data used in the step-one and step-two fine-tuning the same, with the only difference being that one step is done as unsupervised fine-tuning and the other as supervised fine-tuning?
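For reference, here is a minimal sketch (not the repo's actual code; the tokenizer path is a placeholder) of what that distinction usually amounts to on the same record: in the unsupervised (pre-training) pass the labels are a copy of the inputs, while in the supervised (instruction fine-tuning) pass the prompt tokens are masked out of the loss.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/chinese-alpaca-lora")  # placeholder path

prompt_ids = tokenizer("Instruction + input text: ", add_special_tokens=False)["input_ids"]
response_ids = tokenizer("response text", add_special_tokens=False)["input_ids"]
input_ids = prompt_ids + response_ids + [tokenizer.eos_token_id]

# Unsupervised / pre-training style: the loss is computed on every token,
# so the labels are simply a copy of the inputs.
pt_labels = list(input_ids)

# Supervised / instruction fine-tuning: the loss is computed only on the
# response tokens; prompt positions are masked with -100.
sft_labels = [-100] * len(prompt_ids) + response_ids + [tokenizer.eos_token_id]
```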
Closing the issue since no updates have been observed. Feel free to re-open if you need any further assistance.
The following items must be checked before submission
Issue type
Model training and fine-tuning
Base model
Alpaca-Plus-7B
Operating system
Linux
Detailed description of the issue
SFT data in Alpaca format
Question 1: I saw that special_tokens_map in chinese-alpaca-lora sets "pad_token": "[PAD]", so I used [PAD] to pad the output to a length of 2048. A model trained this way generates longer answers at inference time, but [PAD] shows up in the results. Is my approach correct? (See the padding/label-masking sketch below.)

Question 2: I first consolidated the data in the output field from the step above. Step one was pre-training it into the model; then, starting from the pre-training result, I ran Alpaca-format instruction fine-tuning on the pre-trained checkpoint. Both steps were trained for 25,000 steps. The results are partially accurate but mixed with inaccurate content. The loss looks normal during the PT stage, but the gradients almost vanish during the SFT stage.
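This is not the repo's actual code, just a minimal sketch of how Alpaca-style SFT examples are commonly padded and masked in a HuggingFace-style setup (the tokenizer path and prompt format are placeholder assumptions). It illustrates the two details most relevant to Question 1: pad positions are masked to -100 in the labels so the model is never trained to emit [PAD], and an eos token is appended before padding so generation can stop on its own instead of running to the 2048-token limit.

```python
import torch
from transformers import AutoTokenizer

MAX_LEN = 2048
# Placeholder path; chinese-alpaca-lora's special_tokens_map already
# defines "pad_token": "[PAD]", so the loaded tokenizer has a pad token.
tokenizer = AutoTokenizer.from_pretrained("path/to/chinese-alpaca-lora")
assert tokenizer.pad_token is not None

def build_sft_example(prompt: str, output: str) -> dict:
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    output_ids = tokenizer(output, add_special_tokens=False)["input_ids"]

    # Append eos so the model learns where to stop, then truncate.
    input_ids = (prompt_ids + output_ids + [tokenizer.eos_token_id])[:MAX_LEN]
    # Compute the loss only on the output (and eos); mask the prompt.
    labels = ([-100] * len(prompt_ids) + output_ids
              + [tokenizer.eos_token_id])[:MAX_LEN]

    # Pad up to MAX_LEN, masking every pad position in the labels so the
    # model is never trained to predict [PAD].
    pad_len = MAX_LEN - len(input_ids)
    input_ids = input_ids + [tokenizer.pad_token_id] * pad_len
    labels = labels + [-100] * pad_len
    attention_mask = [1] * (MAX_LEN - pad_len) + [0] * pad_len

    return {
        "input_ids": torch.tensor(input_ids),
        "labels": torch.tensor(labels),
        "attention_mask": torch.tensor(attention_mask),
    }
```

If pad positions are instead left unmasked, the model is trained to predict [PAD] over most of each 2048-token sequence, which would explain [PAD] appearing in generations; it is also one plausible cause of the near-zero SFT gradients reported in Question 2, since predicting the pad filler quickly becomes trivial and the average loss collapses. Checking the fraction of unmasked label tokens per batch is a quick sanity check.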
Dependencies (required for code-related issues)
Run logs or screenshots