shibing624 / MedicalGPT

MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Train a medical large language model, implementing incremental pre-training (PT), supervised fine-tuning (SFT), RLHF, DPO, and ORPO.
Apache License 2.0

Full-parameter fine-tuning without LoRA: how do I merge the resulting parameters back into the original model? #324

Closed · WangZY1111 closed this issue 4 months ago

WangZY1111 commented 5 months ago


shibing624 commented 5 months ago

No merging is needed. With full-parameter fine-tuning, the saved checkpoint is already the complete large model, the same as a model trained from scratch.
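
To make the distinction concrete, here is a minimal sketch, assuming a Transformers-compatible checkpoint; the directory names `outputs-sft-v1` and `outputs-lora-v1` are hypothetical placeholders, not paths from this repo. A full-parameter run saves complete weights that load directly, whereas a LoRA run saves only adapter weights and would need PEFT's `merge_and_unload()` before standalone use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Full-parameter fine-tuning: the training output directory already contains
# the complete model weights, so load it directly -- no merge step required.
# "outputs-sft-v1" is a hypothetical path; substitute your own output dir.
model = AutoModelForCausalLM.from_pretrained("outputs-sft-v1")
tokenizer = AutoTokenizer.from_pretrained("outputs-sft-v1")

# For contrast, a LoRA run saves only adapter weights, which *would* need
# merging back into the base model (not the case asked about in this issue):
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained("base-model-name")
# merged = PeftModel.from_pretrained(base, "outputs-lora-v1").merge_and_unload()
# merged.save_pretrained("merged-model")
```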