-
Now that the new version of xtuner has added dispatch, is fine-tuning of chatglm3-6b no longer supported?
File "/mnt/afs/xtuner/xtuner/model/sft.py", line 93, in __init__
dispatch_modules(self.llm, use_varlen_attn=use_varlen_attn)
File "/mnt/afs/xtuner/…
-
**Before:**
![e99fd6f8bd6b3cc802c39f03a0adad1](https://github.com/NEFUJing/LawyerLLM/assets/106534091/0a4f119c-b24c-4e8b-a19b-7694784a215a)
**After:**
![504b032158f719b58ddc0b978a906fc](https://git…
-
### I tested vllm benchmark_throughput.py and found that throughput with chunked prefill enabled is lower than the default. How can I deal with this problem?
_No response_
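For reference, an A/B comparison like the one described would typically be run as below. This is a hedged sketch, not the reporter's exact commands: the flag names are assumptions based on vLLM's `benchmarks/benchmark_throughput.py` and may differ across versions, and `THUDM/chatglm3-6b` is a placeholder model.

```shell
# Baseline run (chunked prefill off by default).
python3 benchmarks/benchmark_throughput.py \
    --model THUDM/chatglm3-6b \
    --input-len 512 --output-len 128 --num-prompts 200

# Same workload with chunked prefill enabled; --max-num-batched-tokens
# controls the prefill chunk size and strongly affects throughput.
python3 benchmarks/benchmark_throughput.py \
    --model THUDM/chatglm3-6b \
    --input-len 512 --output-len 128 --num-prompts 200 \
    --enable-chunked-prefill --max-num-batched-tokens 2048
```

Chunked prefill trades peak throughput for lower inter-token latency, so a throughput drop in this benchmark is not necessarily a bug; tuning `--max-num-batched-tokens` is usually the first thing to try.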
### Your current environ…
-
Hello, I am loading the chatglm3-6b model with FastChat:
step1 `python3 -m fastchat.serve.controller`
step2 `python3 -m fastchat.serve.model_worker --model-path /ldata/llms/chatglm3-6b`
step3 `python3 -m fastchat.serve.op…
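Step 3 is truncated above; assuming it launches FastChat's OpenAI-compatible API server (an assumption, as is the default port 8000), the served model can then be queried with a standard OpenAI-style chat request. A minimal sketch of building such a payload:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a FastChat
    OpenAI-compatible server (assumed to be what step 3 starts)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_chat_request("chatglm3-6b", "Hello")
print(json.dumps(payload, ensure_ascii=False))
# To send it (server location is an assumption):
#   curl http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d "$(cat payload.json)"
```

The `model` field must match the name the worker registered with the controller, which by default is derived from the `--model-path` directory name.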
-
### 🚀 The feature, motivation and pitch
This project is very nice! Chatglm2-6b and chatglm3-6b work well with it, but could you restore support for chatglm-6b? It's a very popular model.
###…
-
### System Info
torch==2.4.0
transformers==4.45.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own mo…
-
I've been using BigDL-LLM to accelerate the chatglm3-6b model. However, I'm curious about the speed: is the current speed considered normal?
Here are the hardware details:
+ Graphics Card: Intel Corp…
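Whether a given speed is "normal" is easiest to judge in tokens per second. The helper below is not part of BigDL-LLM; it is just the generic timing arithmetic, with the `model.generate` usage shown as a hypothetical placeholder:

```python
import time

def tokens_per_second(num_new_tokens: int, elapsed_s: float) -> float:
    """Generation speed in tokens/s; guards against a zero duration."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return num_new_tokens / elapsed_s

# Hypothetical usage around any generate() call:
#   start = time.perf_counter()
#   output_ids = model.generate(input_ids, max_new_tokens=128)
#   speed = tokens_per_second(output_ids.shape[1] - input_ids.shape[1],
#                             time.perf_counter() - start)
print(tokens_per_second(128, 4.0))  # → 32.0
```

Measuring this way (and excluding the first warm-up generation) gives a number that can be compared directly against published benchmarks for the same hardware.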
-
### System Info
Traceback (most recent call last):
File "/home/powerop/.conda/envs/bamboo…
-
**Problem Description**
Hi everyone. I don't hit the self-question-and-answer problem when using the Langchain-Chatchat webui, but it does appear when I call the API through Swagger; running the EntropyYue/chatglm3:6b model in ollama also works fine. Has anyone run into the same problem, and how did you solve it?
![image](https://github.com…
-
### System Info
- CPU: X86
- GPU: NVIDIA L20
- python
- tensorrt 10.3.0
- tensorrt-cu12 10.3.0
- tensorrt-cu12-bindings 10.3.0
- tensorrt-cu12-libs 10…