-
### System Info
Uncaught exception: Traceback (most recent call last): File "D:\Big_model\ChatGLM\GLM-4-main\composite_demo\src\main.py", line 288, in main for response, chat_history in client…
-
(chatglm) n:\github\GLM-4>python openai_api_lby.py
2024-06-12 15:24:16,061 - Start initialize model...
Special tokens have been added in the vocabulary, make sure the associated word embeddings are …
-
**Before submitting an issue, please confirm:**
- [x] I have read the **FAQ**, and this problem is not listed there
- [x] I have looked through other issues, and they do not solve my problem
- [x] I believe this is not a bug in Mirai or OpenAI
- [x] I believe this is not a problem with the 微X module for Xposed
**Behavior**
Describe how the bug manifests
**Runtime environment:** windows-mirai…
-
I ran GLM-4 on an MTL iGPU, and it reported this error:
![image](https://github.com/user-attachments/assets/1e0b6537-02da-4962-99aa-f6fd717a14ae)
oneAPI: l_BaseKit_p_2024.0.1.46_offline.sh
My env …
-
**Describe the bug**
When running the line `model.quantize(examples)`, I got `AttributeError: 'NoneType' object has no attribute 'parameters'`.
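For context, this kind of `AttributeError` usually means the model object itself is `None` (for example, because loading failed earlier and the failure was swallowed). A minimal, hypothetical sketch reproducing the failure mode (`QuantizerSketch` is an illustrative stand-in, not the real quantizer API):

```python
class QuantizerSketch:
    """Hypothetical stand-in for a quantizer wrapper; not the real API."""

    def __init__(self, model):
        # model ends up as None here if an earlier load step silently failed
        self.model = model

    def quantize(self, examples):
        # Iterating the parameters of a missing model reproduces the
        # reported error: 'NoneType' object has no attribute 'parameters'.
        return [p for p in self.model.parameters()]


try:
    QuantizerSketch(model=None).quantize(examples=[])
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'parameters'
```

The fix in such cases is to check why the model load returned `None` (wrong path, missing weights, unsupported device) rather than to patch `quantize` itself.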
**GPU Info**
Show output of:
```
+-------------------…
-
### System Info
platform == ubuntu 22.04
transformers == 4.32.2
python == 3.10.12
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified s…
-
When I run a vLLM model such as Qwen2-VL-2B with an ARC770 on the MTL platform, it reports the error message below:
RuntimeError: Current platform can NOT allocate memory block with size larger than 4GB! Tried t…
-
### System Info
python 3.11.8
### Running Xinference with Docker?
- [ ] docker
- [X] pip install
- [ ] installation from source
-
system: centos 7
cuda version: system=cuda-11.2, conda env cudatoolkit=11.6.0
python=3.8
The environment was installed following the requirements; running the command below raises an error:
```shell
python finetune.py \
--dataset_path data/alpaca \
-…
-
Training succeeded; sharing my environment:
- System environment:
- python==3.11.9
- transformers==4.33.0
- pytorch==2.2.0
- flash-attn==2.6.3
- ninja==1.11.1.1
- deepspeed==0.15.0
- wandb==0.17.8
- under the /glm-4-9b directory, the …
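For reference, the version pins above correspond roughly to a pip command like the following (a sketch only; the PyPI package name for pytorch is `torch`, and `flash-attn` generally expects `torch` and `ninja` to be installed first):

```shell
# Sketch: reproduce the environment pins shared above.
# Assumes a CUDA-enabled Python 3.11.9 environment is already active.
pip install "transformers==4.33.0" "torch==2.2.0" "ninja==1.11.1.1"
pip install "flash-attn==2.6.3" --no-build-isolation  # needs torch + ninja first
pip install "deepspeed==0.15.0" "wandb==0.17.8"
```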