THUDM / ChatGLM3
ChatGLM3 series: Open Bilingual Chat LLMs | 开源双语对话语言模型
Apache License 2.0 · 13.44k stars · 1.56k forks
Issues (newest first)
#1104 Is commercial use allowed? (239573049, closed 6 months ago, 1 comment)
#1103 Streaming output with LangChain (wenxinmomo, closed 6 months ago, 2 comments)
#1102 Support text in the embeddings endpoint (st01cs, closed 6 months ago, 0 comments)
#1100 Update README.md (JianxinDong, closed 6 months ago, 1 comment)
#1098 LoRA layer naming prevents lora weights from being processed by TensorRT-LLM's hf_lora_convert.py (wangqy1216, closed 6 months ago, 1 comment)
#1095 Error after configuring int4 quantization the new way (qinzhenyi1314, closed 7 months ago, 3 comments)
#1090 PPO Question (ShelbyHero, closed 6 months ago, 1 comment)
#1088 Fix quantization bug (zRzRzRzRzRzRzR, closed 7 months ago, 0 comments)
#1086 How large a GPU is needed to run chatglm3-6b-128k? (zhidaoai, closed 6 months ago, 2 comments)
#1083 Fix some fine-tuning bugs (#1079) (zRzRzRzRzRzRzR, closed 7 months ago, 0 comments)
#1082 In chatglm, tokenizer(question) yields question+[gMASK]+<sop>; if I tokenize as [gMASK]+<sop>+question myself, are both orders acceptable? (shenhao-stu, closed 5 months ago, 6 comments)
#1079 New fine-tuning code gives no response during testing (yutong12, closed 7 months ago, 2 comments)
#1075 Errors when multiple users concurrently call the api_server.py endpoints with streaming replies after deploying chatglm3 in API mode (RyanOvO, closed 7 months ago, 1 comment)
#1074 Running `streamlit run main.py` raises ModuleNotFoundError: No module named 'huggingface_hub.inference._text_generation' (zhaoqi9, closed 6 months ago, 5 comments)
#1072 Extending the vocabulary (shilida, closed 7 months ago, 1 comment)
#1071 How to stop the model from refusing to answer when user private information is passed in (wufxgtihub123, closed 7 months ago, 1 comment)
#1070 cli_demo.py runtime error (anyongli, closed 7 months ago, 1 comment)
#1069 Attempt to fix the tokenizer error (zRzRzRzRzRzRzR, closed 7 months ago, 1 comment)
#1067 NotImplementedError when using inference_hf.py (royalpotato-maker, closed 6 months ago, 18 comments)
#1066 Producing logits from the last hidden state (SXxinxiaosong, closed 6 months ago, 2 comments)
#1065 Fine-tuning reports the model path as None (pencui, closed 7 months ago, 1 comment)
#1064 Cannot install the dependencies required for fine-tuning (pencui, closed 7 months ago, 2 comments)
#1063 How to fix AttributeError: can't set attribute 'eos_token' when running the inference script inference_hf (MingjunHu, closed 7 months ago, 10 comments)
#1059 .quantize(4) does not seem to shrink the model (DeepAichemist, closed 7 months ago, 2 comments)
#1057 Gradient checkpointing fails to start (xiaokening, closed 7 months ago, 2 comments)
#1056 AssertionError: The weights that need to be quantified should be on the CUDA device (SteveYung-tech, closed 7 months ago, 1 comment)
#1055 Learning-rate configuration update (zRzRzRzRzRzRzR, closed 7 months ago, 0 comments)
#1054 load_model_on_gpus raises an error (hejianle, closed 7 months ago, 1 comment)
#1052 int4 quantization configured in composite_demo's client.py, but GPU memory does not drop and stays at 12710 MiB (qinzhenyi1314, closed 7 months ago, 5 comments)
#1050 After fine-tuning and deployment, the web chat UI opens but conversations produce no output (yimisiyang, closed 6 months ago, 3 comments)
#1048 fix: docs (sinajia, closed 7 months ago, 1 comment)
#1046 Task types of the ChatGLM3-6B model (12915494174, closed 7 months ago, 2 comments)
#1043 Is chatglm3's history predefined? If it can be user-defined, how should it be defined, and in what format? (Zhangyiming0039, closed 7 months ago, 1 comment)
#1040 Does a successful vllm deployment support API access? Why does the endpoint return 404? INFO: 172.26.0.2:34438 - "POST /v1/chat/completions HTTP/1.1" 404 Not Found (Andy1018, closed 7 months ago, 1 comment)
#1039 feat: add bigdl-llm demo for intel device (ai-liuys, closed 7 months ago, 6 comments)
#1037 huggingface-hub error when running the composite demo on macOS (Crispinli, closed 7 months ago, 4 comments)
#1036 Question about function call (Smile-L-up, closed 7 months ago, 0 comments)
#1033 [Model inference acceleration] How to obtain the model files (zzisbeauty, closed 7 months ago, 6 comments)
#1031 Model downloaded locally, but still getting: Could not locate the tokenization_chatglm.py (zhangyue2709, closed 7 months ago, 4 comments)
#1030 The dataset under AdvertiseGen_fix fine-tunes fine, but a self-made dataset errors during fine-tuning (jackyysu, closed 7 months ago, 13 comments)
#1028 Did this version of finetune_demo's readme.md drop the "input/output format" fine-tuning example? (LuWanTong, closed 7 months ago, 1 comment)
#1027 GBK encoding error on startup after finishing install and setup on Windows 10 (dugutianxue, closed 7 months ago, 2 comments)
#1022 Error at prediction time after swapping lora for the ptuningv2 file in the official LoRA fine-tuning suite (tiamojames, closed 7 months ago, 7 comments)
#1020 TensorRT-LLM on NVIDIA GPUs cannot build an inference engine from chatglm3_6b_base (cyx2000, closed 7 months ago, 3 comments)
#1016 Fix langchain_demo's inability to handle multiple tool_call invocations within one dialogue turn; also refine parameter definitions (gglfirefly, closed 7 months ago, 3 comments)
#1013 Test fine-tuning dependency issue with peft 0.10.0 (zRzRzRzRzRzRzR, closed 7 months ago, 0 comments)
#1012 RuntimeError when importing with inference: CUDA error: device-side assert triggered; Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions (zcs007, closed 7 months ago, 10 comments)
#1008 Fix default value for training_args in FinetuningConfig (crackerben99, closed 7 months ago, 0 comments)
#1006 Where can the fine-tuning methods be modified? Can we add our own fine-tuning methods? (Duperr, closed 7 months ago, 1 comment)
#1004 Multi-node fine-tuning: batch size cannot be 1 (TaChao, closed 7 months ago, 3 comments)