FreedomIntelligence / HuatuoGPT-Vision

Medical Multimodal LLMs
248 stars 20 forks

Runtime Error: forward() got an unexpected keyword argument 'cache_position' #2

Open Meng-Fanshu opened 2 months ago

Meng-Fanshu commented 2 months ago

code:

```python
query = 'What does the picture show?'
image_paths = ['/home/downloads/test.jpg']

huatuogpt_vision_model_path = "/home/llm_models/HuatuoGPT-Vision-7B"

from cli import HuatuoChatbot
bot = HuatuoChatbot(huatuogpt_vision_model_path)
output = bot.inference(query, image_paths)
print(output)  # Prints the output of the model
```

Error Info:

```
/home/mengfanshu/workplace/HuatuoGPT-Vision
loading from /home/llm_models/HuatuoGPT-Vision-7B
loading from CLIP first. This should only be used at inference!!!
loading vision model from openai/clip-vit-large-patch14-336
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.55it/s]
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Traceback (most recent call last):
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/quickStart.py", line 8, in <module>
    output = bot.inference(query, image_paths)
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/cli.py", line 276, in inference
    output_ids = self.model.generate(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/llava/model/language_model/llava_qwen2.py", line 150, in generate
    return super().generate(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/generation/utils.py", line 1989, in generate
    result = self._sample(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/generation/utils.py", line 2932, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cache_position'
```
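For context, this kind of TypeError typically happens when a newer transformers generation loop forwards extra keyword arguments (such as `cache_position`) into a custom model's `forward()`, whose older signature does not accept them. A minimal self-contained sketch of the failure mechanism (the class below is illustrative, not the actual HuatuoGPT-Vision model):

```python
# Illustrative sketch: a forward() written against an older transformers API
# does not accept the cache_position kwarg that newer generation loops pass in.

class OldStyleModel:
    def forward(self, input_ids, attention_mask=None):
        # Older-style signature: no cache_position parameter, no **kwargs catch-all.
        return input_ids


model = OldStyleModel()
try:
    # Newer generation code calls forward() with cache_position as a keyword.
    model.forward(input_ids=[1, 2, 3], cache_position=[0, 1, 2])
except TypeError as e:
    # The error message names the rejected keyword: 'cache_position'.
    print(e)
```

This is why pinning transformers to the version the repo was developed against (see the reply below) avoids the error: the older generation loop never passes `cache_position`.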

HuichiZhou commented 2 months ago

Hello, you should run `pip install transformers==4.37.2` (note the package name is `transformers`, with a trailing "s").
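To confirm the pin actually took effect in the active environment, a small standard-library-only check can help (the helper name `check_transformers_pin` is hypothetical, not part of the repo):

```python
# Hypothetical helper: report whether the installed transformers distribution
# matches the version this repo expects. Uses only the standard library.
from importlib.metadata import PackageNotFoundError, version


def check_transformers_pin(expected="4.37.2"):
    try:
        installed = version("transformers")
    except PackageNotFoundError:
        return f"transformers is not installed; run: pip install transformers=={expected}"
    if installed != expected:
        return f"found transformers {installed}; run: pip install transformers=={expected}"
    return f"transformers {installed} matches the pin"


print(check_transformers_pin())
```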

SiddhantOjha17 commented 2 months ago

How did you download the model?

mrjzz commented 2 weeks ago

> How did you download the model?

Is the way the model was downloaded relevant to this error?

mrjzz commented 2 weeks ago

> Hello, you should run `pip install transformers==4.37.2`.

Thanks, it works.