-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The log shows the parameters were saved successfully as pytorch_model.bin:
![image](https://github.com/THUDM/ChatGLM-6B/assets/47592644/d…
-
Running convert.py, it exits immediately without completing the conversion.
(chatglmcpp) F:\llmbak\chatglm.cpp-main>python chatglm_cpp/convert.py -i F:\llmbak\chatglm2\chatglm2-6b -t q8_0 -o F:\llmbak\chatglm2\chatglm2-6b-q8-0-ggml.bin
Loading ch…
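For reference, the expected size of a successful q8_0 conversion can be estimated from ggml's Q8_0 layout (blocks of 32 int8-quantized weights plus one fp16 scale per block); the real file is somewhat larger because some tensors stay in f16/f32 and the file carries metadata. The parameter count below is an approximation:

```python
def q8_0_bytes(n_params):
    """Size of n_params weights in ggml's Q8_0 format: blocks of 32 int8
    quantized values plus one fp16 scale per block (34 bytes per 32 weights)."""
    block_size = 32
    bytes_per_block = block_size + 2  # 32 x int8 + 1 x fp16 scale
    return (n_params // block_size) * bytes_per_block

n = 6_200_000_000  # roughly ChatGLM2-6B's parameter count (approximate)
print(f"~{q8_0_bytes(n) / 2**30:.1f} GiB")  # ~6.1 GiB
```

So if the conversion finishes, the output should land in the ~6 GiB range; a much smaller (or zero-byte) file is a sign the script aborted early.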
-
### 🚀 The feature, motivation and pitch
This project is very nice! Chatglm2-6b and chatglm3-6b work well with it. Could you restore support for chatglm-6b? It is still a very popular model.
###…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
python test.py
Explicitly passing a `revision` is encouraged when loading a model with…
-
Question about ChatGLM2-6B: after fine-tuning on my dataset, inference with the adapter works, but after merging the weights the official cli_demo dies at `Loading checkpoint shards: 0%|` followed by `Killed`. The merged fp32 model is 23.2 GB; converting it to fp16 gives an 11.6 GB model, but loading is still killed.
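A back-of-envelope check (assuming roughly 6.2B parameters; the exact count differs slightly) shows the reported file sizes match raw per-parameter storage, which suggests the `Killed` message is the Linux OOM killer: loading the shards needs at least that much host RAM plus overhead.

```python
n_params = 6_200_000_000  # roughly ChatGLM2-6B's parameter count (approximate)
fp32_gib = n_params * 4 / 2**30  # 4 bytes per weight
fp16_gib = n_params * 2 / 2**30  # 2 bytes per weight
print(f"fp32: {fp32_gib:.1f} GiB, fp16: {fp16_gib:.1f} GiB")
# fp32: 23.1 GiB, fp16: 11.5 GiB -- consistent with the reported 23.2G / 11.6G
```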
Training configuration:
```
{
"output_dir": "saved_fi…
```
-
Running `cmake -B build` fails with:
```
CMake Error at CMakeLists.txt:19 (add_subdirectory):
  The source directory

    /root/autodl-tmp/chatglm.cpp/third_party/ggml

  does not contain a CMakeLists.txt file.…
```
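For what it's worth, this error usually means the `third_party/ggml` git submodule was never fetched (e.g. the repository was downloaded as a ZIP archive, or cloned without `--recursive`). A sketch of the usual fix, using the path from this report (adjust to your checkout; guarded so it is a no-op elsewhere):

```shell
# Fetch the ggml submodule into third_party/, then reconfigure and build.
if cd /root/autodl-tmp/chatglm.cpp 2>/dev/null; then
    git submodule update --init --recursive
    cmake -B build
    cmake --build build -j
fi
```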
-
Loading and calling the model directly from Python code:
![image](https://github.com/ztxz16/fastllm/assets/35361034/e1ffbd23-2f2a-4f5d-8f82-e75c391feb36)
Explicitly passing a `revision` is encouraged when loading a model with custom cod…
-
After conversion, the ChatGLM3-6B model's answers feel noticeably worse than ChatGLM2-6B's: replies often mix Chinese and English, or loop until the maximum output length is reached.
The converted ChatGLM2-6B-32K model showed this problem clearly, while ChatGLM2-6B was largely unaffected.
With ChatGLM3-6B the problem is prominent in both the original 8K model and the 32K model. Could this be optimized?
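As one reference point for the looping behavior: a common mitigation is a CTRL-style repetition penalty applied to the logits of already-generated tokens before sampling. A minimal sketch follows; the function name and values are illustrative, not chatglm.cpp's actual API:

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """CTRL-style repetition penalty: shrink positive logits of tokens that
    already appeared (divide by penalty) and push negative ones further down
    (multiply by penalty), discouraging exact-loop continuations."""
    out = list(logits)
    for tid in set(generated_ids):
        if out[tid] > 0:
            out[tid] /= penalty
        else:
            out[tid] *= penalty
    return out

# Tokens 0 and 1 were already generated; only their logits get penalized.
print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=2.0))
# [1.0, -2.0, 0.5]
```

With `penalty=1.0` the logits pass through unchanged; raising it trades repetition for diversity.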
-
**Describe the bug (Mandatory)**
When fine-tuning ChatGLM3-6b with LoRA from mindnlp.peft, training raises a TypeError in LoRA's linear layer.
- **Hardware Environment (`Ascend`/`GPU`/`CPU`)**:
GPU
- **Software Environ…
-
Hi,
When I try to run "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb" in Google Colab, I came across a problem.
The code is
model_name = "THUDM/chatglm2-6b"
tokenizer = AutoToke…