QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

Severe degradation of instruction-following after SFT on the chat model with a larger max_seq_len #1042

Closed menghonghan closed 6 months ago

menghonghan commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

After full-parameter SFT on the chat model with max_seq_len increased from 2048 to 32k, instruction-following ability degrades very severely. What could be causing this? Is the training data not long enough? Thanks for your help.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

WangJianQ-cmd commented 7 months ago

Could you give a concrete example of the instruction-following degradation you are describing?

jklj077 commented 6 months ago

Directly training on long sequences can be harmful to model performance: during training, dynamic_ntk and logn attention are effectively disabled; see line 833 of modeling_qwen.py. This means that at the start of finetuning the model can only handle sequence lengths up to 2K or 8K, and adapting it to a 32K sequence length will require a lot of data.
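For reference, the two inference-time mechanisms mentioned above can be sketched as follows. This is an illustrative reimplementation, not the actual modeling_qwen.py code; the function names and the 2048 training length are assumptions for the example. Dynamic NTK enlarges the RoPE base once the input exceeds the trained length, and logn attention scales queries at positions beyond it:

```python
import math

def ntk_alpha(true_seq_len, train_seq_len=2048):
    # Dynamic NTK (Qwen-style, illustrative): pick a scaling factor
    # alpha that grows with how far the input exceeds the trained length.
    context_value = math.log(true_seq_len / train_seq_len, 2) + 1
    alpha = 2 ** math.ceil(context_value) - 1
    return max(alpha, 1.0)

def scaled_rope_base(base, head_dim, true_seq_len, train_seq_len=2048):
    # The RoPE base is enlarged by alpha ** (d / (d - 2)), stretching
    # the rotary frequencies so long positions stay in-distribution.
    alpha = ntk_alpha(true_seq_len, train_seq_len)
    return base * alpha ** (head_dim / (head_dim - 2))

def logn_scale(position, train_seq_len=2048):
    # logn attention: scale the query at positions beyond the trained
    # length by log_n(position); positions within it are untouched.
    if position <= train_seq_len:
        return 1.0
    return math.log(position) / math.log(train_seq_len)
```

Since neither adjustment is applied during finetuning, a model trained at 2K sees 32K positions as raw, out-of-distribution rotary angles, which is consistent with the degradation reported here.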

We advise you to take a look at Qwen1.5, which uses a different strategy for extending the context length and is more amenable to finetuning on long sequences.