THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Apache License 2.0

TypeError: chat() got multiple values for argument 'history' #160

Open wangjingyu001 opened 1 year ago

wangjingyu001 commented 1 year ago

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/visualglm-6b", trust_remote_code=True).half().cuda()

# Modify as needed; currently only 4/8-bit quantization is supported
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).quantize(8).half().cuda()

# For an INT8-quantized model, change "THUDM/chatglm-6b-int4" to "THUDM/chatglm-6b-int8"
model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True).half().cuda()

image_path = "./examples/1.jpeg"
response, history = model.chat(tokenizer, image_path, "描述这张图片。", history=[])
print(response)
response, history = model.chat(tokenizer, image_path, "这张图片可能是在什么场所拍摄的?", history=history)
print(response)
```

Running this code raises the error in the title. What is the cause?

ENjoy924 commented 1 year ago

Hi, have you solved this problem yet?

1049451037 commented 1 year ago

I don't understand. Why are you loading a ChatGLM model to run VisualGLM?

buptsdz commented 11 months ago

I'm getting the same error.

JiangNingRicky commented 5 months ago

> I don't understand. Why are you loading a ChatGLM model to run VisualGLM?

Exactly right. It's because they were running locally on Mac machines and followed the workaround from https://github.com/THUDM/ChatGLM-6B/issues/6, which swaps in a ChatGLM checkpoint.
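The argument collision can be sketched with plain stand-in functions (the signatures below are simplified assumptions, not the actual model code): VisualGLM's `chat()` takes an `image_path` before the query, while ChatGLM's does not, so passing an image path positionally to a ChatGLM model shifts every argument left by one and the query string lands in the `history` slot, which the `history=[]` keyword then fills a second time.

```python
# Simplified stand-ins for the two chat() signatures
# (assumed shapes for illustration, not the real model code).
def visualglm_chat(tokenizer, image_path, query, history=None):
    return "ok"

def chatglm_chat(tokenizer, query, history=None):
    return "ok"

tok = object()

# Works: the VisualGLM-style signature expects an image path.
visualglm_chat(tok, "./examples/1.jpeg", "Describe this image.", history=[])

# Fails: with the ChatGLM-style signature, the query string fills the
# `history` parameter positionally, then history=[] supplies it again.
try:
    chatglm_chat(tok, "./examples/1.jpeg", "Describe this image.", history=[])
except TypeError as e:
    print(e)  # chatglm_chat() got multiple values for argument 'history'
```

So the fix is simply to load `THUDM/visualglm-6b` (or its quantized variants) rather than a `chatglm-6b` checkpoint.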