shibing624 / MedicalGPT

MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical large language models, implementing incremental pretraining (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
Apache License 2.0

Is this error caused by a single training sample being too long? Can truncation be enabled just by changing the configuration? #328

Closed: zxx20231119 closed this issue 5 months ago

zxx20231119 commented 5 months ago

Is this caused by a single input training sample being too long?

Token indices sequence length is longer than the specified maximum sequence length for this model (12598 > 8192). Running this sequence through the model will result in indexing errors
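This warning comes from the Hugging Face tokenizer, not the model: it fires whenever a single tokenized document exceeds the tokenizer's model_max_length (8192 here). A minimal sketch of how it can be reproduced, assuming the Qwen/Qwen-1_8B checkpoint (the exact model path is an assumption):

```python
from transformers import AutoTokenizer

# Assumed checkpoint; substitute whatever model path you actually trained with.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-1_8B", trust_remote_code=True)

# A single document whose token count exceeds model_max_length (8192).
long_text = "This is a very long training document. " * 2000
ids = tokenizer(long_text)["input_ids"]

# transformers logs the warning during the call above when
# len(ids) > tokenizer.model_max_length; tokenization still succeeds.
print(len(ids), tokenizer.model_max_length)
```

The warning only means the raw document is longer than the model's context window; tokenization itself does not fail, and the sequence never reaches the model at that length once the data is chunked (see below).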

zxx20231119 commented 5 months ago

The test model is qwen1_8B, running an incremental pretraining (PT) test.

shibing624 commented 5 months ago

Set block_size; the warning can be ignored.
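For context, pretraining scripts in the Hugging Face run_clm style (which the maintainer's reply suggests this repo's PT stage follows, since it exposes a block_size setting) concatenate all tokenized documents and slice the stream into fixed-length blocks, so over-long samples are split rather than rejected. A minimal sketch of that chunking, with block_size=1024 as an illustrative value:

```python
from itertools import chain

def group_texts(examples, block_size=1024):
    """Concatenate tokenized examples and split them into block_size chunks."""
    # Flatten each field (e.g. input_ids, attention_mask) into one long stream.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the tail so every block has exactly block_size tokens.
    total_length = (total_length // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    # Causal LM pretraining predicts the inputs themselves.
    result["labels"] = result["input_ids"].copy()
    return result
```

Because every sample the model actually sees is exactly block_size tokens long, a 12598-token source document is simply cut into multiple blocks, which is why the tokenizer warning is harmless during incremental pretraining.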

zxx20231119 commented 5 months ago

Thanks.