-
As the title says: on the same graphics card (a 3090), CodeGeeX2-6B runs much slower than ChatGLM-6B when launched from the official demo. Are there any tricks to speed it up?
-
- [x] I checked to make sure that this is not a duplicate issue
Hi, Tsinghua has released [CodeGeeX2-6b](https://github.com/THUDM/CodeGeeX2). Are there any plans to support it?
-
After downloading the model and the code, I modified run_demo.py as described in the tutorial:
```python
def main():
    parser = argparse.ArgumentParser()
    parser = add_code_generation_args(parser)
    args, _ = parser.parse_known_args()
    # original code
    # tokenizer, mo…
```
-
Suggestion: support https://github.com/THUDM/CodeGeeX2. It was just released, and according to the published numbers it reaches 35.9 pass@1.
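For context, the pass@1 figure cited above is the standard functional-correctness metric used by code benchmarks such as HumanEval. A minimal sketch of the usual unbiased pass@k estimator (assumed here; `n` samples are generated per problem and `c` of them pass the tests):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k randomly
    chosen samples (out of n generated, c correct) passes the tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the fraction of correct samples:
print(pass_at_k(10, 5, 1))  # → 0.5
```

So a pass@1 of 35.9 means roughly 35.9% of single-sample generations solve their problem.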
-
Hi team,
I hit an error while running run_demo.py.
OS environment: CentOS
Python version: Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
-
**Command:**
```shell
python3 chatglm_cpp/convert.py -i modules/codegeex2-6b -t q4_0 -o codegeex-ggml.bin
```
**Error:**
```
Traceback (most recent call last):
  File "chatglm_cpp/convert.py", line 543, in
…
```
-
Hello, I have been using your CodeGeeX VS Code extension, and its performance is truly amazing! Thank you for sharing such great work!!
However, I still have a question: when I downloaded codegeex2-6b and tried running inference, the model would not follow instructions to complete tasks. **Could you share how the prompt should be set up when running inference with the model weights directly?**
Example: asking the model to add comments to code
```python
import os
os.env…
```
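For reference, CodeGeeX2 is a base code model rather than a chat model, so (per its README) instructions are given as code comments, led by a `# language: <name>` line, instead of natural-language chat turns. A minimal sketch, where `build_prompt` is a hypothetical helper (not part of the released code):

```python
def build_prompt(language: str, instruction: str, code: str = "") -> str:
    # Instructions go in as comments; the first line declares the language.
    prompt = f"# language: {language}\n# {instruction}\n"
    return prompt + code

prompt = build_prompt("Python", "write a bubble sort function")
# Tokenize `prompt` and pass it to model.generate(...) as usual.
```

Framing the task as a comment ("# add comments to the following code") followed by the code itself tends to work better than an imperative chat-style request.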
-
OS: Windows 11
Python: 3.9
Note: I am running in an elevated (administrator) PowerShell
Full error output:
```
(codegeex) PS E:\CodeGeex2> python demo/run_demo.py --quantize 4 --model-path ../codegeex2-6b --chatglm-cpp
fastllm disabled.
Using c…
```
-
Command:
```python
import torch
from modelscope import AutoModel, AutoTokenizer
model_id = 'ZhipuAI/codegeex2-6b'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoMode…
```
-
No matter what value I change it to, the model only outputs a very short snippet, as shown in the image.
Deployed across 4 GPUs, launched with: python run_demo.py --model-path "/home/dl/data/codegeex2-6b-model" --n-gpus 4
![图片](https://github.com/THUDM/CodeGeeX2/assets/11251894/e0a6181c-2e43-4031-b11…
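One common cause of truncated outputs (an assumption here, since the parameter being changed is not visible above): in Hugging Face `generate()`, `max_length` counts the prompt tokens too, so a long prompt leaves almost no budget for new tokens, whereas `max_new_tokens` bounds only the generated continuation. A quick sketch of the arithmetic:

```python
def new_token_budget(max_length: int, prompt_tokens: int) -> int:
    # max_length caps prompt + completion, so the completion gets the rest.
    return max(0, max_length - prompt_tokens)

# A 500-token prompt with max_length=512 leaves only 12 new tokens:
print(new_token_budget(512, 500))  # → 12
```

If this is the issue, passing `max_new_tokens` to `model.generate(...)` instead of `max_length` should restore full-length outputs.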