ymcui / Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 large models, phase-2 project, plus 64K long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
Apache License 2.0

How was the 1.3B model trained? #529

Closed makotov closed 3 months ago

makotov commented 4 months ago

The following items must be checked before submitting

Issue type

Model training and fine-tuning

Base model

Others

Operating system

None

Describe the problem in detail

No response

Dependencies (required for code-related issues)

No response

Run logs or screenshots

No response

zxzjt commented 4 months ago

How do you do full-parameter SFT instruction fine-tuning on the 1.3B model? Does anyone know?

GoGoJoestar commented 4 months ago

Chinese-LLaMA-2-1.3B was initialized from the first four layers of Chinese-LLaMA-2-7B and then further pre-trained; Chinese-Alpaca-2-1.3B was obtained by running SFT on top of Chinese-LLaMA-2-1.3B. The pre-training and SFT data for the 1.3B models are the same as those used for the 7B/13B models. Both stages of 1.3B training used full-parameter training.
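For readers who want a concrete picture of the "take the first four layers" step, below is a rough Python sketch of that kind of initialization using Hugging Face transformers. It is not the project's actual conversion code; the model id and output path are placeholders, and the real 1.3B checkpoint was further pre-trained afterwards.

```python
# Illustrative only: build a 4-layer LLaMA-style model initialized from the
# first four decoder layers of a larger Chinese-LLaMA-2 checkpoint.
# Model id and output path are placeholders, not the project's real artifacts.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

src_name = "hfl/chinese-llama-2-7b"   # assumed source checkpoint
num_kept_layers = 4                   # "first four layers" per the reply above

src_model = AutoModelForCausalLM.from_pretrained(src_name, torch_dtype=torch.float32)

# Same config as the source, but with only 4 hidden layers.
small_cfg = AutoConfig.from_pretrained(src_name)
small_cfg.num_hidden_layers = num_kept_layers
small_model = AutoModelForCausalLM.from_config(small_cfg)

# Copy embeddings, the first 4 decoder layers, the final norm, and the LM head.
small_model.model.embed_tokens.load_state_dict(src_model.model.embed_tokens.state_dict())
for i in range(num_kept_layers):
    small_model.model.layers[i].load_state_dict(src_model.model.layers[i].state_dict())
small_model.model.norm.load_state_dict(src_model.model.norm.state_dict())
small_model.lm_head.load_state_dict(src_model.lm_head.state_dict())

# This checkpoint would then be the starting point for continued pre-training.
small_model.save_pretrained("chinese-llama-2-1.3b-init")
```

In practice you would also save the tokenizer alongside the checkpoint and continue pre-training it; the point here is only the layer-truncation initialization.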

GoGoJoestar commented 4 months ago

> How do you do full-parameter SFT instruction fine-tuning on the 1.3B model? Does anyone know?

Apart from the number of layers, the 1.3B model has the same architecture as the 7B model. You can run SFT on the 1.3B model with the original fine-tuning script directly; if you need full-parameter fine-tuning, pass the argument `--full_finetuning True` to the training script.
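As a concrete illustration, here is a minimal launch sketch in Python. Only the `--full_finetuning True` flag comes from the reply above; the script path, model id, and data/output directories are assumptions made for this example, so check the repo's SFT training script and wiki for the authoritative argument list.

```python
# Hypothetical launcher for full-parameter SFT of the 1.3B model.
# Only --full_finetuning True is taken from the maintainer's reply; every
# path and id below is a placeholder used for illustration.
import subprocess

cmd = [
    "torchrun", "--nproc_per_node=1",
    "scripts/training/run_clm_sft_with_peft.py",          # assumed SFT entry point
    "--model_name_or_path", "hfl/chinese-llama-2-1.3b",   # assumed base model id
    "--dataset_dir", "data/sft",                          # placeholder instruction data
    "--output_dir", "output/chinese-alpaca-2-1.3b-sft",   # placeholder output dir
    "--full_finetuning", "True",                          # full-parameter SFT instead of LoRA
]
subprocess.run(cmd, check=True)
```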

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

Rick-24 commented 3 months ago

Hello, I have received your message and will handle it as soon as possible. Best regards!

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.