THUDM/GLM-4
GLM-4 series: Open Multilingual Multimodal Chat LMs
Apache License 2.0 · 3.23k stars · 228 forks
Issues · sorted by: Newest
#261 · Fine-tuning loss is 0 · Richard12868 · opened 4 hours ago · 0 comments
#260 · Loss is 0 · Richard12868 · closed 4 hours ago · 1 comment
#259 · Error during document parsing · liangdaojun · opened 18 hours ago · 1 comment
#258 · glm-4-9b-chat-1m runs fine on 2x Tesla P40 (24G), but errors when scaled to 3x Tesla P40 (24G) · SH0AN · closed 15 hours ago · 1 comment
#257 · glm-4-9b-chat-1m runs on 2x Tesla P40, but errors with 3x Tesla P40 · SH0AN · opened 18 hours ago · 1 comment
#256 · Multimodal version hits `model_kwargs` are not used by the model: ['images'] · ouungj · opened 1 day ago · 1 comment
#255 · Inference with vllm_cli_demo.py raises an error · huangmengasd · closed 15 hours ago · 1 comment
#254 · With vLLM, an error is raised after entering input · Dabouee · opened 1 day ago · 1 comment
#253 · Could you share your adapted Chinese version of IFEval for evaluation and comparison? · donggoing · opened 1 day ago · 1 comment
#252 · How to launch a LoRA adapter together with glm-4-9b-chat in openai_api_server · WIIIIIf · closed 1 day ago · 2 comments
#251 · Possible bug in the code? · Gary-code · closed 1 day ago · 1 comment
#250 · TypeError: unsupported operand type(s) for |: 'type' and '_GenericAlias' · chuan-mt · opened 2 days ago · 1 comment
#249 · Error when loading checkpoint-2500 from single-node dual-GPU LoRA fine-tuning; help wanted · zycovoo · opened 2 days ago · 0 comments
#248 · LoRA fine-tuning: "'NoneType' object has no attribute 'to'" during eval · Text2-m · closed 2 days ago · 2 comments
#247 · Runs fine locally, but launching from a Docker image raises an NCCL error · ciaoyizhen · closed 2 days ago · 4 comments
#246 · System prompt format wording in the chat template · WEIC-7 · closed 3 days ago · 1 comment
#245 · Where to change the code to resize the image tensor to 224 · marybloodyzz · closed 2 days ago · 1 comment
#244 · Dual-GPU LoRA fine-tuning on 2x 4090 reports OutOfMemoryError: CUDA out of memory. Tried to allocate 214.00 MiB. GPU · zycovoo · opened 3 days ago · 9 comments
#243 · Official sample code produces garbled output · Barbery · closed 2 days ago · 11 comments
#242 · Fix missing system_prompt in the OpenAI client of composite_demo · wwewwt · closed 15 hours ago · 2 comments
#241 · GLM-4 fine-tuning error: flash-attn is installed in the conda environment, but the error still asks to install flash-attn · zycovoo · closed 3 days ago · 1 comment
#240 · Test error · leizhu1989 · closed 2 days ago · 3 comments
#239 · BUG: latest glm4v code forces `flash_attn` installation when loaded with the Transformers `AutoModelForCausalLM` class · ChengjieLi28 · closed 3 days ago · 3 comments
#238 · Latest GLM-4V-9B ModelScope files contain a stray print · ChengjieLi28 · closed 3 days ago · 1 comment
#237 · Trainer errors after updating to the latest modeling_chatglm · fengyunflya · closed 2 days ago · 9 comments
#236 · get_tools() still returns tools in the GLM-3 format, not the GLM-4 tools format · programmermw1986 · closed 3 days ago · 0 comments
#235 · Flash Attention 2 installation error · washgo · closed 4 days ago · 1 comment
#234 · V100 inference issue · ZZHbible · closed 3 days ago · 11 comments
#233 · Bug after updating modeling_chatglm.py · chensongcan · closed 3 days ago · 3 comments
#232 · Plain chat errors out because the image parameter is missing · whk6688 · closed 4 days ago · 0 comments
#231 · Can GLM-4 do continued pre-training, and could you provide a script? · gyh123wqe · closed 4 days ago · 1 comment
#230 · With temperature < 0.4, the model hangs; GPU utilization stays above 95% while memory usage stays flat · YanyuanAIMR · closed 3 days ago · 4 comments
#228 · OOM during LoRA fine-tuning on a single 3090 Ti · RyanCcc114 · closed 2 days ago · 15 comments
#227 · Abnormal model inference after prompt fine-tuning · mumu029 · closed 3 days ago · 4 comments
#226 · AttributeError · ZHDTZ · closed 3 days ago · 3 comments
#224 · Question about word embedding · Wonderland23 · closed 4 days ago · 2 comments
#222 · Browser requests leak GPU memory until an out-of-memory error · wwewwt · closed 2 days ago · 3 comments
#220 · Unable to classify with ChatGLMForSequenceClassification · Mr-Lnan · closed 1 week ago · 2 comments
#219 · Model license · Pickpate · closed 5 days ago · 4 comments
#218 · Why is GLM-3 better than GLM-4 on the LVEval benchmark? · AnaRhisT94 · closed 5 days ago · 3 comments
#217 · MMLU benchmark results cannot be reproduced with the current code · chunniunai220ml · opened 1 week ago · 12 comments
#216 · How to stop generated answers from including emoji · kawayi12318 · closed 1 week ago · 0 comments
#215 · Are there plans for glm-4v to support vLLM? · GanPeixin · closed 5 days ago · 1 comment
#214 · Does this support int8 quantized inference? · 394988736 · closed 1 week ago · 1 comment
#213 · Update README & README_en · SkyFlap · closed 1 week ago · 1 comment
#212 · Update openai_api_server.py · mingyue0094 · closed 4 days ago · 1 comment
#210 · > Why can't my M3 run this at all: ValueError: Can't infer missing attention mask on mps device. Please provide an attention_mask or use a different device. I changed everything that needed changing!! How did you get it running?? · summer0216 · closed 1 week ago · 2 comments
#209 · How to set this up on a Huawei NPU server? · dayphosphor · closed 1 week ago · 1 comment
#208 · How to finetune GLM-4V-9B with LoRA? · zengzwww · closed 1 week ago · 5 comments
#207 · Is there any plan to merge the `modeling_chatglm.py` into the Transformers library? · iofu728 · closed 2 days ago · 3 comments