lyhue1991 / torchkeras

Pytorch❤️ Keras 😋😋
Apache License 2.0

Error when registering the chatglm magic command #51

Closed Melchoirr closed 11 months ago

Melchoirr commented 11 months ago
# Registering a jupyter magic command makes it easy to test ChatGLM inside jupyter
from torchkeras.chat import ChatGLM 
chatglm = ChatGLM(model, tokenizer)
register magic %%chatglm sucessed ...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[5], line 3
      1 # Registering a jupyter magic command makes it easy to test ChatGLM inside jupyter
      2 from torchkeras.chat import ChatGLM 
----> 3 chatglm = ChatGLM(model, tokenizer)

File ~/anaconda3/envs/zdw/lib/python3.10/site-packages/torchkeras/chat/chatglm.py:27, in ChatGLM.__init__(self, model, tokenizer, stream, max_chat_rounds, history, max_length, num_beams, do_sample, top_p, temperature, logits_processor)
     24     print('register magic %%chatglm failed ...')
     25     print(err)
---> 27 response = self('你好')
     28 if not self.stream:
     29     print(response)

File ~/anaconda3/envs/zdw/lib/python3.10/site-packages/torchkeras/chat/chatglm.py:50, in ChatGLM.__call__(self, query)
     43     return response 
     45 result = self.model.stream_chat(self.tokenizer,
     46     query,self.history,None,self.max_length,
     47     self.do_sample,self.top_p,self.temperature,
     48     self.logits_processor,None)
---> 50 for response,history in result:
     51     print(response)
     52     clear_output(wait=True)

File ~/anaconda3/envs/zdw/lib/python3.10/site-packages/torch/utils/_contextlib.py:26, in _wrap_generator.<locals>.generator_context(*args, **kwargs)
     24 @functools.wraps(func)
     25 def generator_context(*args, **kwargs):
---> 26     gen = func(*args, **kwargs)
     28     # Generators are suspended and unsuspended at `yield`, hence we
     29     # make sure the grad mode is properly set every time the execution
     30     # flow returns into the wrapped generator and restored when it
     31     # returns through our `yield` to our caller (see PR #49017).
     32     try:
     33         # Issuing `None` to a generator fires it up

TypeError: ChatGLMForConditionalGeneration.stream_chat() takes from 3 to 9 positional arguments but 11 were given

Could someone tell me what went wrong here?

Melchoirr commented 11 months ago

It turns out I had loaded the chatglm model by mistake.
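For reference, the `TypeError` is a pure signature mismatch: the wrapper passes 10 positional arguments (11 counting `self`) to `stream_chat`, while the loaded model's `stream_chat` accepts at most 9 (again counting `self`). A minimal sketch that reproduces the same error counts, using a hypothetical `StubChatGLM` class whose parameter names are an assumption for illustration, not the real model's signature:

```python
import inspect

# Hypothetical stub mirroring the positional shape the traceback reports:
# stream_chat "takes from 3 to 9 positional arguments" (Python counts self).
class StubChatGLM:
    def stream_chat(self, tokenizer, query, history=None, past_key_values=None,
                    max_length=8192, do_sample=True, top_p=0.8,
                    temperature=0.8):
        yield "response", history

model = StubChatGLM()

# The bound method exposes 8 parameters (inspect excludes self).
print(len(inspect.signature(model.stream_chat).parameters))  # 8

# Passing 10 positional arguments (11 counting self) cannot bind to this
# signature, reproducing the TypeError from the traceback above.
try:
    model.stream_chat("tok", "hi", [], None, 8192, True, 0.8, 0.8, None, None)
except TypeError as err:
    print(err)
```

Checking `inspect.signature(model.stream_chat)` on the actually loaded model is a quick way to confirm whether its `stream_chat` matches what the wrapper calls.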

Melchoirr commented 11 months ago

Sorry for the trouble (