Meng-Fanshu opened this issue 2 months ago
How did you download the model?
Is the way the model was downloaded relevant to this error?
Hello, you should run:

```
pip install transformers==4.37.2
```
Thx, it works.
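For anyone hitting the same error: since the fix is pinning a specific `transformers` release, a small guard can fail fast with a clear message instead of a deep `TypeError` inside `generate()`. This is a minimal sketch (the `version_matches` helper is hypothetical, not part of the HuatuoGPT-Vision repo):

```python
# Sketch: check an installed package's version before loading the model,
# so a version mismatch is reported up front. Uses only the stdlib.
from importlib.metadata import version, PackageNotFoundError


def version_matches(package: str, required: str) -> bool:
    """Return True if `package` is installed at exactly `required`."""
    try:
        return version(package) == required
    except PackageNotFoundError:
        # Package not installed at all.
        return False


# Usage sketch: call before constructing HuatuoChatbot.
# if not version_matches("transformers", "4.37.2"):
#     raise RuntimeError("Please run: pip install transformers==4.37.2")
```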
Code:

```python
from cli import HuatuoChatbot

query = 'What does the picture show?'
image_paths = ['/home/downloads/test.jpg']
huatuogpt_vision_model_path = "/home/llm_models/HuatuoGPT-Vision-7B"

bot = HuatuoChatbot(huatuogpt_vision_model_path)
output = bot.inference(query, image_paths)
print(output)  # Prints the output of the model
```
Error info:

```
/home/mengfanshu/workplace/HuatuoGPT-Vision
loading from /home/llm_models/HuatuoGPT-Vision-7B
loading from CLIP first. This should only be used at inference!!!
loading vision model from openai/clip-vit-large-patch14-336
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 4.55it/s]
The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Traceback (most recent call last):
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/quickStart.py", line 8, in <module>
    output = bot.inference(query, image_paths)
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/cli.py", line 276, in inference
    output_ids = self.model.generate(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/mengfanshu/workplace/HuatuoGPT-Vision/llava/model/language_model/llava_qwen2.py", line 150, in generate
    return super().generate(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/generation/utils.py", line 1989, in generate
    result = self._sample(
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/transformers/generation/utils.py", line 2932, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/mengfanshu/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cache_position'
```
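The traceback suggests that a newer `transformers` generation loop passes a `cache_position` keyword that the repo's custom `forward()` does not accept, which is why pinning an older release helps. A minimal sketch of that kind of mismatch (the class and function names here are hypothetical, not the repo's code):

```python
# Sketch: the generation loop calls forward() with a keyword argument
# that an older model signature does not declare, producing the same
# kind of TypeError seen in the traceback above.
class OldStyleModel:
    def forward(self, input_ids, attention_mask=None):
        # Stand-in for real logits computation.
        return input_ids


def call_like_newer_transformers(model):
    # Newer generation code forwards extra kwargs such as cache_position.
    return model.forward(input_ids=[1, 2], attention_mask=None, cache_position=0)


try:
    call_like_newer_transformers(OldStyleModel())
except TypeError as e:
    # The message names the unexpected keyword argument 'cache_position'.
    print(e)
```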