-
ChatGLM-6B is an open-source model based on GLM, fine-tuned on over 1 trillion tokens of dialogue with RLHF for chat.
It's quickly becoming one of the most popular local models despite no good fast C…
-
I want to run chatglm-6B on x86 with CPU only.
When executing python3 chatglm_cpp/convert.py -i THUDM/chatglm-6b -t q4_0 -o chatglm-ggml.bin
it raised this error:
RuntimeError: CUDA Runtime Error: no CUDA-capable device is detected
Is there a solution?
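A common workaround (my assumption, not confirmed in this thread) is to hide all CUDA devices before the conversion script loads the model, so PyTorch falls back to CPU instead of probing for a GPU:

```python
import os

# Hide all CUDA devices so torch.cuda.is_available() reports False and
# model loading falls back to CPU. This must be set before torch makes
# its first CUDA call (safest: before importing torch at all).
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# ...then run the converter as usual, e.g. from a shell:
#   CUDA_VISIBLE_DEVICES="" python3 chatglm_cpp/convert.py \
#       -i THUDM/chatglm-6b -t q4_0 -o chatglm-ggml.bin
```

Setting the variable on the shell command line (as in the comment) is equivalent and avoids editing the script.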
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
```py
(base) root@ubuntu:/data/ChatGLM2-6B# python cli_demo.py
Compile p…
-
```python
>First, quantize the glm3 and glm4 models. I downloaded the full glm-3-6b-chat and glm-4-9b-chat models and quantized them (both at q8_0 precision):
chatglm.cpp# python3 chatglm_cpp/convert.py -i /glm-3-6b-chat/ -t q8_0 -o models/chatglm3-q8_0-ggml.bin
chatglm.…
-
(face19) lan@lan:~/sdf/VisualGLM-6B$ python cli_demo_hf.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a …
-
**Command:**
```sh
python3 chatglm_cpp/convert.py -i modules/codegeex2-6b -t q4_0 -o codegeex-ggml.bin
```
**Error:**
`
Traceback (most recent call last):
File "chatglm_cpp/convert.py", line 543, in
…
-
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbyt…
-
What is the minimum GPU memory required to train chatglm-6b? If GPU memory is insufficient, how can the chatglm-6b weights be partially loaded?
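One mechanism for partial loading (a sketch of the general PyTorch technique, not ChatGLM's own loading code) is `load_state_dict(..., strict=False)`: only the keys present in the checkpoint are loaded, and the rest are reported instead of raising, so they can be materialized later. The tiny model below is hypothetical, purely to illustrate the mechanism:

```python
import torch
import torch.nn as nn

# A tiny stand-in model (hypothetical, not ChatGLM's architecture).
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(10, 4)
        self.head = nn.Linear(4, 10)

model = TinyModel()

# Pretend checkpoint containing only a subset of the weights, as if the
# rest were still on disk or loaded in a later pass.
partial_ckpt = {"embed.weight": torch.zeros(10, 4)}

# strict=False loads the matching keys and lists the rest in
# result.missing_keys instead of raising an error.
result = model.load_state_dict(partial_ckpt, strict=False)
print(result.missing_keys)  # ['head.weight', 'head.bias']
```

For ChatGLM specifically, the usual low-memory route is the transformers/accelerate `device_map="auto"` option (which offloads shards to CPU/disk), but that is a separate API from the sketch above.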
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
6/20/2023 19:56:11 - WARNING - transformers_modules.chatglm-6b.modeling_chatglm - `use_cache=…
-
When executing the cell:
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("../ChatGLM-6B/models/chatglm-6b", trust_remote_code=True)
the exception No module named 'transformers_modules.' is thrown.
When I…
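One common cause of the empty module name in `transformers_modules.` (an assumption on my part, since the snippet is truncated) is a model path ending in a slash: transformers derives the dynamic module name from the final path component, which is empty in that case. Normalizing the path first avoids this:

```python
import os.path

# A trailing slash makes the last path component empty, so the dynamic
# module would be named 'transformers_modules.' with nothing after the dot.
model_dir = "../ChatGLM-6B/models/chatglm-6b/"
print(os.path.basename(model_dir))   # -> "" (empty)

# normpath strips the trailing slash, restoring a usable component name.
clean_dir = os.path.normpath(model_dir)
print(os.path.basename(clean_dir))   # -> "chatglm-6b"

# tokenizer = AutoTokenizer.from_pretrained(clean_dir, trust_remote_code=True)
```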