-
### What happened?
I fine-tuned the **InternLM2 7b-chat** model in **LLaMA-Factory** using a custom dataset and **LoRA**, exported the safetensors model, and converted it to GGUF format using `convert…
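For context, a hedged sketch of the merge-then-convert workflow being described (all paths, the adapter directory, and the `intern2` template name are placeholders/assumptions, not taken from the report; the converter script name is the one shipped with recent llama.cpp):

```shell
# 1. Merge the LoRA adapter into the base model and export safetensors
#    via LLaMA-Factory's export entry point (paths are placeholders).
llamafactory-cli export \
    --model_name_or_path internlm/internlm2-chat-7b \
    --adapter_name_or_path ./saves/internlm2-lora \
    --template intern2 \
    --finetuning_type lora \
    --export_dir ./merged-internlm2

# 2. Convert the merged safetensors checkpoint to GGUF with llama.cpp.
python convert_hf_to_gguf.py ./merged-internlm2 \
    --outfile internlm2-chat-7b.gguf
```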
-
Why is there no `--int8_kv_cache` option in convert_checkpoint.py when I want to build an int8 KV-cache internlm2-chat-20b model?
convert_checkpoint.py is in /TensorRT-LLM/examples/internlm2/convert…
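For comparison, other TensorRT-LLM example converters (e.g. the llama example's convert_checkpoint.py) expose this flag; a sketch of that invocation, with placeholder paths (this assumes the llama example's interface, which the internlm2 example apparently lacks):

```shell
# Quantize the KV cache to int8 during checkpoint conversion
# (flag exists in TensorRT-LLM's llama example converter).
python convert_checkpoint.py \
    --model_dir ./internlm2-chat-20b \
    --output_dir ./trt_ckpt \
    --dtype float16 \
    --int8_kv_cache
```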
-
```
File "/root/.cache/huggingface/modules/transformers_modules/internlm2_5-1_8b/modeling_internlm2.py", line 174, in forward
    if seq_len > self.max_seq_len_cached:
RuntimeError: Boolean value of Ten…
```
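This `RuntimeError` is the classic symptom of comparing a multi-element tensor against a scalar inside a plain `if`, which forces an ambiguous tensor-to-bool conversion. A minimal sketch of the failure mode and one workaround pattern, using a tiny stand-in class instead of `torch.Tensor` so it runs without PyTorch (the actual fix would live in `modeling_internlm2.py` and is not shown here):

```python
class FakeTensor:
    """Tiny stand-in for a multi-element torch.Tensor (illustration only)."""

    def __init__(self, values):
        self.values = list(values)

    def __gt__(self, other):
        # Elementwise comparison returns another tensor, not a Python bool.
        return FakeTensor(v > other for v in self.values)

    def __bool__(self):
        # Mirrors torch's behaviour: the truth value of a tensor with
        # more than one element is ambiguous.
        if len(self.values) != 1:
            raise RuntimeError(
                "Boolean value of Tensor with more than one element is ambiguous"
            )
        return bool(self.values[0])

    def max(self):
        # Reduce to a plain Python scalar, as tensor.max().item() would.
        return max(self.values)


seq_len = FakeTensor([5, 9])   # seq_len unexpectedly a tensor, not an int
max_seq_len_cached = 8

try:
    if seq_len > max_seq_len_cached:  # raises: tensor truth value is ambiguous
        pass
except RuntimeError as err:
    print(err)

# Workaround sketch: reduce to a scalar before the `if`.
if seq_len.max() > max_seq_len_cached:
    print("cache needs extending")
```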
-
### Your current environment
vLLM 0.5.0, PyTorch 2.3.0
### 🐛 Describe the bug
```
python -m vllm.entrypoints.openai.api_server --model /data/resources/internlm2-chat-7b --served-model-name internlm2-c…
```
-
```
Exception in thread Thread-5:
Traceback (most recent call last):
File "/gpfsdata/home/zhangjingxin/anaconda3/envs/xinference11_0/lib/python3.9/threading.py", line 980, in _bootstrap_inner
…
-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [X] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…
-
Thank you for this awesome work!
I have some questions regarding the data construction:
1. What is the difference between [`Lean-CoT-base`](https://huggingface.co/datasets/ScalableMath/Lean-CoT-ba…
-
### System Info
Ubuntu 20.04
### Running Xinference with Docker?
- [ ] docker
- [X] pip install
- [ ] installation from source
-
I see that InternLM2 is among the supported base models.
InternLM recently released InternLM2.5-7b; is it supported?
-
```
Traceback (most recent call last):
File "/data/anaconda3/envs/torch/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
re…