QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

Was the 72B model pretrained with a 32k window for the entire pretraining stage? #709

Closed HaoshengZou closed 9 months ago

HaoshengZou commented 10 months ago

Does seq_length in config.json fully represent the window length used during pretraining? Was 72B trained with a 32k window from the start?
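
For reference, the advertised window and RoPE base can be read straight from the published config; a minimal sketch, assuming the Qwen-1 field names seq_length and rotary_emb_base (verify against the actual config.json of the checkpoint you use):

```python
from transformers import AutoConfig

# Read the published config instead of guessing. Field names assume the
# Qwen-1 config format (seq_length, rotary_emb_base) and may differ for
# other releases.
cfg = AutoConfig.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)
print(cfg.seq_length)       # advertised context window
print(cfg.rotary_emb_base)  # RoPE base used at inference time
```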

liudayiheng commented 10 months ago

Trained with 8k; it can extrapolate to 32k.
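
For context on how an 8k-trained model can serve 32k inputs at all: a common mechanism is NTK-aware rescaling of the RoPE base once the input exceeds the trained window (the released Qwen-1 modeling code exposes a use_dynamic_ntk switch). The sketch below only illustrates that generic idea; it is not a statement of the exact recipe used for Qwen-72B, and the base value and head dimension are placeholders.

```python
import torch

def rope_inv_freq(base: float, dim: int) -> torch.Tensor:
    """Standard RoPE inverse frequencies for one attention head of size `dim`."""
    return 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))

def ntk_scaled_base(base: float, dim: int, seq_len: int, train_len: int) -> float:
    """NTK-aware extrapolation: enlarge the base when the input is longer than
    the trained window, which stretches the low-frequency RoPE components so
    positions beyond train_len stay within the angle range seen in training."""
    if seq_len <= train_len:
        return base
    scale = seq_len / train_len
    return base * scale ** (dim / (dim - 2))

# Placeholder numbers: 8k training window, 32k inference, head dim 128.
base, head_dim, train_len, seq_len = 1_000_000.0, 128, 8192, 32768
scaled = ntk_scaled_base(base, head_dim, seq_len, train_len)
print(scaled)                                # about 4.09x the original base
print(rope_inv_freq(scaled, head_dim).shape) # 64 frequencies for a 128-dim head
```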

HaoshengZou commented 10 months ago

Thanks for the reply!

boxiaowave commented 10 months ago

> Trained with 8k; it can extrapolate to 32k.

When training at 8k, was the base already 1,000,000?

TissueC commented 10 months ago

> Trained with 8k; it can extrapolate to 32k.

> When training at 8k, was the base already 1,000,000?

Hope this issue could be reopened.

HaoshengZou commented 10 months ago

> Trained with 8k; it can extrapolate to 32k.

> When training at 8k, was the base already 1,000,000?

> Hope this issue could be reopened.

As I understand it, the same base was used throughout. Changing the base between pretraining stages is unnecessary and would instead cause the loss to spike suddenly. Looking forward to an official reply, after which the issue can be closed. Thanks! @liudayiheng
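
To put a number on the "loss would spike" intuition: switching the base changes the rotation angle of every position/frequency pair at once, so attention patterns learned under the old base stop lining up. A minimal sketch of generic RoPE math, with both base values chosen purely for illustration:

```python
import torch

def rope_angles(base: float, dim: int, positions: torch.Tensor) -> torch.Tensor:
    # Rotation angle m * theta_i for each position m and frequency index i.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions, inv_freq)  # [num_positions, dim // 2]

pos = torch.arange(0, 8192, dtype=torch.float32)
angles_old = rope_angles(10_000.0, 128, pos)     # hypothetical "old" base
angles_new = rope_angles(1_000_000.0, 128, pos)  # hypothetical "new" base
# Every entry shifts; an abrupt base switch perturbs all positional
# information at once, which is what the loss spike would reflect.
print((angles_old - angles_new).abs().mean())
```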

boxiaowave commented 10 months ago

What I'd actually like to know is whether the extrapolation training scheme of Qwen 72B is similar to CodeLlama's: pretrain on short texts for a long time first, then change the base to 100000 and continue pretraining on 8k-long texts?

jklj077 commented 9 months ago

We are not able to disclose related information. Thank you for your understanding.