shibing624 / MedicalGPT

MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical large language models, implementing incremental pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
Apache License 2.0

Where do I set the token length limit when training ChatGLM2 with SFT LoRA? #224

Closed Droliven closed 7 months ago

Droliven commented 1 year ago

Describe the Question

When training ChatGLM2 with SFT LoRA, where is the token length limit set?

shibing624 commented 1 year ago

https://github.com/shibing624/MedicalGPT/blob/main/supervised_finetuning.py#L155
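The linked line points at the script argument that caps sequence length. As a minimal sketch of that pattern (the `ScriptArguments` name and `model_max_length` default here are assumptions based on the linked file, not a verbatim copy), the limit is declared as a dataclass field and applied during tokenization by truncating each example:

```python
from dataclasses import dataclass, field


@dataclass
class ScriptArguments:
    # Hypothetical mirror of the argument defined near supervised_finetuning.py#L155;
    # the real default in the repo may differ.
    model_max_length: int = field(
        default=512,
        metadata={"help": "Maximum total input sequence length after tokenization."},
    )


def truncate_to_max_length(input_ids: list[int], args: ScriptArguments) -> list[int]:
    # Token ids beyond model_max_length are dropped during preprocessing.
    return input_ids[: args.model_max_length]


if __name__ == "__main__":
    args = ScriptArguments(model_max_length=8)
    print(truncate_to_max_length(list(range(20)), args))  # first 8 token ids kept
```

In practice you would pass the value on the command line (e.g. `--model_max_length`) rather than editing the default, so the same script can be reused for models with different context windows.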

stale[bot] commented 9 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.