ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

Data padding issue when fine-tuning the model #768

Closed xzqxnet0990 closed 1 year ago

xzqxnet0990 commented 1 year ago

Required checks before submitting

Issue type

Model training and fine-tuning

Base model

Alpaca-Plus-7B

Operating system

Linux

Describe the issue in detail

Alpaca-format SFT data:

"instruction": "自定义算子类型",
"input": "",
"output": "自定义算子类型 用户可以自定义的算子类型如下: 
|算子类型 |描述|示例|计算方式| |:--:|:--:|:--:|:--:| |流式命令|对搜索返回的事件数据逐个应用SPL命令进行处理|eval,fields,rex,rename,replace,where,search
|集中式/分布式| |转换统计命令|将搜索返回的事件转换为用于统计目的的数值|chart,stats,rare,timechart,top|集中式/分布式| |搜索导出|将搜索分析结果导出到第三方平台|export
|集中式/分布式|[PAD][PAD][PAD][PAD][PAD][PAD]
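
For reference, here is a minimal sketch of how an Alpaca-format record like the one above is commonly turned into a prompt/response pair. It assumes the standard Stanford-Alpaca template; the template actually used by this project's training scripts may differ, and `build_example` is a hypothetical helper.

```python
# Hypothetical helper: builds a (prompt, target) pair from one Alpaca-format record.
# Assumes the standard Stanford-Alpaca prompt template, not necessarily this project's.
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(record: dict) -> tuple[str, str]:
    """Return (prompt, target) for one record; the target is the raw output text."""
    if record.get("input"):
        prompt = PROMPT_WITH_INPUT.format(**record)
    else:
        prompt = PROMPT_NO_INPUT.format(instruction=record["instruction"])
    return prompt, record["output"]
```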

Question 1: I saw that special_tokens_map in chinese-alpaca-lora sets "pad_token": "[PAD]", so I padded each output with [PAD] up to a length of 2048. A model trained this way produces fairly long generations at inference time, but [PAD] shows up in the results. I am not sure whether this approach is correct. (screenshot attached)

Question 2: I first consolidated the data from the output fields above. Step one was pre-training (PT) on that data; then, starting from the pre-training result, I ran Alpaca-format instruction fine-tuning (SFT) on the pre-trained checkpoint. Both stages were trained for 25,000 steps. The results are partially accurate but mixed with inaccurate content. The loss looked fairly normal during PT, but during SFT the gradient almost vanished.
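
On Question 1, a common way to handle this with Hugging Face transformers is to leave the output text untouched and pad the token ids instead, masking padded positions (and the prompt) with -100 in the labels so they never contribute to the loss and the model never learns to emit [PAD]. Below is a minimal sketch under those assumptions; the model path and max_length are placeholders, and this is not the project's exact training script.

```python
import torch
from transformers import LlamaTokenizer

# Placeholder path, not a real checkpoint name.
tokenizer = LlamaTokenizer.from_pretrained("path/to/chinese-alpaca-merged")
if tokenizer.pad_token is None:
    tokenizer.pad_token = "[PAD]"  # matches the special_tokens_map mentioned above

def tokenize_example(prompt: str, output: str, max_length: int = 2048) -> dict:
    """Tokenize one prompt/response pair, padding token ids rather than text."""
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    output_ids = tokenizer(output, add_special_tokens=False)["input_ids"]

    input_ids = ([tokenizer.bos_token_id] + prompt_ids + output_ids
                 + [tokenizer.eos_token_id])[:max_length]
    # Only the response (and EOS) contribute to the loss; the prompt is masked.
    labels = ([-100] * (1 + len(prompt_ids)) + output_ids
              + [tokenizer.eos_token_id])[:max_length]

    # Pad with token ids, not literal "[PAD]" text; padded labels are -100,
    # so padding adds nothing to the loss and is never learned as output.
    pad_len = max_length - len(input_ids)
    input_ids = input_ids + [tokenizer.pad_token_id] * pad_len
    labels = labels + [-100] * pad_len
    attention_mask = [1] * (max_length - pad_len) + [0] * pad_len

    return {
        "input_ids": torch.tensor(input_ids),
        "labels": torch.tensor(labels),
        "attention_mask": torch.tensor(attention_mask),
    }
```

If padding everything to 2048 is not needed, the same masking can instead be done per batch by a data collator with dynamic padding.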


Dependencies (required for code-related issues)

```
peft                          0.3.0.dev0
torch                         2.0.1
transformers                  4.30.2
transformers-stream-generator 0.0.4
```

Run logs or screenshots

```
# Paste the run log here
```

(screenshot attached)

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

bigcash commented 1 year ago

Did you use the same data for both the first and second fine-tuning stages, just treated as unsupervised in one and supervised in the other?
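
For context on the distinction drawn here, the two stages usually differ only in how the labels are built for the causal-LM loss. A rough sketch (not this project's exact code):

```python
def pt_labels(input_ids: list[int]) -> list[int]:
    # Pre-training / "unsupervised": every (non-padding) token is a target,
    # so the labels are simply a copy of the input ids.
    return list(input_ids)

def sft_labels(input_ids: list[int], prompt_len: int) -> list[int]:
    # Instruction fine-tuning / "supervised": the prompt portion is masked
    # with -100, so the loss is computed only on the response tokens.
    return [-100] * prompt_len + list(input_ids[prompt_len:])
```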

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 1 year ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.