ymcui / Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki
Apache License 2.0

Several questions about LoRA model merging #890

Closed wangyao123456a closed 2 months ago

wangyao123456a commented 3 months ago

Items that must be checked before submission

Issue type

Model conversion and merging

Base model

None

Operating system

None

Detailed description of the problem

1. After the base model and the LoRA model are merged, are the size and parameter count of the new model the same as those of the base model?
2. At inference time, if the LoRA and base models are kept separate rather than merged, is the output identical to inference with the merged model, or is there some loss of precision?
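For reference, the arithmetic behind both questions can be illustrated with a minimal sketch in plain PyTorch. This is not the project's actual merge script; the dimensions, rank, and scaling below are arbitrary placeholders chosen only to show that the merged weight has the same shape as the base weight, and that merged and unmerged inference agree up to floating-point rounding.

```python
import torch

# Minimal sketch (hypothetical sizes): LoRA adds a low-rank update
# delta_W = (alpha / r) * B @ A to a frozen base weight W. Merging folds
# this update into W, so the merged tensor keeps the base weight's shape
# and the total parameter count of the merged model matches the base model.
d, k, r, alpha = 4096, 4096, 8, 32      # placeholder dimensions and LoRA rank
W = torch.randn(d, k)                    # base weight
A = torch.randn(r, k) * 0.01             # LoRA down-projection
B = torch.randn(d, r) * 0.01             # LoRA up-projection
x = torch.randn(2, k)                    # dummy input batch

scaling = alpha / r
W_merged = W + scaling * (B @ A)         # merged weight, same shape as W

# Unmerged inference keeps the LoRA branch separate; merged inference uses W_merged.
y_unmerged = x @ W.T + scaling * (x @ A.T) @ B.T
y_merged = x @ W_merged.T

print(W.shape == W_merged.shape)                        # True: size unchanged
print(torch.allclose(y_unmerged, y_merged, atol=1e-4))  # True up to float rounding
```

The two paths are mathematically equivalent; any difference comes only from the order of floating-point operations (and is typically larger in fp16 than in fp32), not from the merge itself.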

Dependencies (required for code-related issues)

No response

Runtime logs or screenshots

No response

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions[bot] commented 2 months ago

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.