-
### Motivation
How can the logits of the prompt (input) tokens be output? This is needed for some evaluation tasks.
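As an illustration of what is being requested, below is a minimal sketch that computes per-token logits over the prompt with plain Hugging Face transformers (the model name is just an example and nothing here is lmdeploy's own API); an equivalent capability exposed by lmdeploy is what this issue asks for.

```python
# Sketch only, assuming plain Hugging Face transformers (not lmdeploy itself),
# showing the kind of prompt logits needed for perplexity-style evaluation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "internlm/internlm2-chat-7b"  # example model, not prescribed by the issue
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape [batch, prompt_len, vocab_size]; position i holds the
    # distribution over the token at position i+1 given the prefix up to i.
    logits = model(**inputs).logits

# Per-token log-probabilities of the prompt itself (shifted by one position).
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
target_ids = inputs["input_ids"][:, 1:]
token_logprobs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
print(token_logprobs)
```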
### Related resources
_No response_
### Additional context
_No response_
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [X] 3. Please note that if the bug-related iss…
-
### Motivation
![image](https://github.com/InternLM/lmdeploy/assets/12838445/3a263f1d-6a60-438e-9251-641de35b7d63)
Why does accuracy drop by so many points when I use post-training quantization?
Also, it only saves GPU memory; the throughput improvement is not obvious. These are results tested on an A100.
### Related resources
_No re…
-
I freshly cloned the MindSearch and lagent repositories from git (I did not install them directly because part of the lagent code needs to be modified); everything else was installed normally with pip.
In terminal.py, line 15, I changed the model to the local model internlm2-chat-20b-4bit (internlm2-chat-20b quantized with lmdeploy).
When running mindsearch/terminal.py, an error is reported, though it does not affect the final output:…
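For reference, pointing lmdeploy at a locally quantized 4-bit model usually looks something like the sketch below; the path is a placeholder, and how terminal.py wires this into lagent is not shown in the excerpt.

```python
# Sketch only: loading a local AWQ 4-bit quantized model with lmdeploy's pipeline.
# The model path is a placeholder for the locally quantized internlm2-chat-20b-4bit.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline(
    "/path/to/internlm2-chat-20b-4bit",        # local quantized model directory
    backend_config=TurbomindEngineConfig(
        model_format="awq",                    # tell TurboMind the weights are AWQ 4-bit
        session_len=8192,                      # example value, adjust as needed
    ),
)

print(pipe(["Hello, who are you?"]))
```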
-
Support finetuning LLaVA 1.6
-
When running `xtuner train /root/autodl-tmp/ft/config/internlm2_chat_7b_qlora_alpaca_e3_copy.py --work-dir /root/autodl-tmp/ft/train`:
`[2024-05-30 17:18:47,089] [INFO] [real_accelerator.py:203:get_accelerator] S…
-
Hello, when using this project for evaluation, inference is too slow, so I want to switch to vLLM. However, the vLLM version conflicts with some of the package versions in requirements and errors are raised. Have you tried switching to vLLM inference on your side? If so, could you tell me which package dependency versions you use?
-
**Describe the bug**
What the bug is, and how to reproduce it, preferably with screenshots.
```
swift infer --model_type internvl2-8b-awq --infer_backend lmdeploy
```
```
WARNING:ro…
-
### 📚 The doc issue
InternVL 1.5 handles multiple images, even if not trained for it, as its authors say. But I can't see whether or how lmdeploy handles that.
In other cases, models like cogvlm2 may n…
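For context, multi-image prompting with lmdeploy's VLM pipeline is typically written along the lines of the sketch below (the image URLs are placeholders); whether InternVL 1.5 actually makes use of the extra images is exactly what the docs should clarify.

```python
# Sketch of multi-image input with lmdeploy's VLM pipeline; image URLs are placeholders.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline("OpenGVLab/InternVL-Chat-V1-5")

images = [
    load_image("https://example.com/image_1.jpg"),
    load_image("https://example.com/image_2.jpg"),
]

# A (prompt, list-of-images) tuple passes several images in a single request.
response = pipe(("Describe the differences between these two images.", images))
print(response.text)
```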
-
Hi, can you please provide a guide or support for using local LLM models such as Ollama Llama 3.1 8B or 70B?
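If it helps, one common interim approach is to talk to a locally running Ollama server through its OpenAI-compatible endpoint; a minimal sketch, assuming Ollama is serving llama3.1 on its default port:

```python
# Sketch: querying a local Ollama server via its OpenAI-compatible API.
# Assumes `ollama serve` is running and `ollama pull llama3.1` has been done.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; Ollama ignores it
)

response = client.chat.completions.create(
    model="llama3.1",                      # use "llama3.1:70b" for the 70B variant
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```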