THUDM / ChatGLM2-6B

ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型

[Help] After merging LoRA-fine-tuned weights, model inference is noticeably slower #583

Open daydayup-zyn opened 1 year ago

daydayup-zyn commented 1 year ago

Is there an existing issue for this?

Current Behavior

I fine-tuned with LoRA on a dataset of only 20 examples. After merging the adapter into the model, inference is much slower than with the original base model. What could cause this? Is it related to the fine-tuning hyperparameters?
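For reference, a properly merged LoRA adapter should add no inference cost at all: the low-rank update is folded into the base weight, so the merged model performs exactly the same matmuls as the base model. The numpy sketch below (toy dimensions, illustrative only) shows the merge arithmetic and why the merged and un-merged forward passes agree:

```python
import numpy as np

# LoRA parameterizes a weight update as W' = W + (alpha / r) * B @ A,
# where A is (r x d_in) and B is (d_out x r) with small rank r.
def merge_lora(W, A, B, alpha, r):
    """Fold the low-rank LoRA update into the base weight matrix."""
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 16  # toy sizes for illustration
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = rng.standard_normal((d_out, r))
x = rng.standard_normal(d_in)

# Un-merged forward pass: base path plus the adapter's extra matmuls.
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# Merged forward pass: a single matmul, identical cost to the base model.
W_merged = merge_lora(W, A, B, alpha, r)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
assert W_merged.shape == W.shape  # merging never changes the weight shape
```

Since the merge itself cannot slow inference down, a slowdown after merging usually points elsewhere; one common culprit worth checking is a precision mismatch, e.g. the merged checkpoint being saved and reloaded in fp32 while the original base model was run in fp16 or quantized form.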

Expected Behavior

No response

Steps To Reproduce

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :

Anything else?

No response

ExtremelyDarkSun commented 8 months ago

Did you ever resolve this? We hit the same problem when applying LoRA to this model... Specifically, inference on roughly the first half of the batches is very slow, while the second half is very fast. We also observed that most of the model's outputs are empty...

fywu commented 2 months ago

Has this been resolved? If so, how?