menghonghan closed this issue 6 months ago
Could you give a concrete example of the instruction-following ability you mentioned?
Directly training on long sequences can be harmful to model performance, because during training, dynamic_ntk and logn attention are effectively disabled; see line 833 of modeling_qwen.py. This means the model can only handle a sequence length of up to 2K or 8K at the start of finetuning, and adapting to a 32K sequence length will require a lot of data.
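For reference, the two inference-time mechanisms named above can be sketched as follows. This is a minimal illustration of the general dynamic-NTK and LogN ideas; the function names and exact formulas here are assumptions modeled on common open-source implementations, not code copied from modeling_qwen.py:

```python
import math

def dynamic_ntk_base(base: float, dim: int, seq_len: int, train_len: int) -> float:
    """Illustrative dynamic-NTK rescaling of the RoPE base (assumed form).

    When the input exceeds the pretraining window, the rotary base is grown
    so the position frequencies stretch over the longer context.
    """
    if seq_len <= train_len:
        return base  # within the pretraining window: no rescaling needed
    alpha = 2 ** math.ceil(math.log2(seq_len / train_len) + 1) - 1
    return base * alpha ** (dim / (dim - 2))

def logn_scale(position: int, train_len: int) -> float:
    """Illustrative LogN attention scaling factor (assumed form).

    Queries at positions beyond the training length are scaled up by
    log(position) / log(train_len) to keep attention entropy stable.
    """
    if position <= train_len:
        return 1.0
    return math.log(position) / math.log(train_len)
```

Because both adjustments are applied only at inference, a model finetuned directly on 32K sequences never sees them during training and must instead learn the longer positions from the finetuning data itself.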
We advise you to take a look at Qwen1.5, which changes the strategy for extending context length and is more amenable to finetuning on long sequences.
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior
After full-parameter SFT of the chat model with max_seq_len set to 32k, instruction-following ability degrades severely compared with 2048. What could be causing this? Is the data not long enough? Thanks for your help.
Expected Behavior
No response
Steps To Reproduce
No response
Environment
Anything else?
No response