Describe the issue

Hi haotian-liu,

I'm interested in LLaVA and attempted CLI inference using the LoRA checkpoint. I am not sure whether I used the correct base model; I also tried --model-base meta-llama/Llama-2-7b-chat-hf, but both lead to the same error:
Loading checkpoint shards: 100%|█████████████████| 2/2 [00:09<00:00, 4.79s/it]
Loading LoRA weights from /home/usr/LLaVA/checkpoints
Merging weights
Convert to FP16...
Traceback (most recent call last):
File "/home/usr/miniconda3/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/usr/miniconda3/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/usr/LLaVA/llava/serve/cli.py", line 124, in <module>
main(args)
File "/home/usr/LLaVA/llava/serve/cli.py", line 56, in main
image_tensor = process_images([image], image_processor, model.config)
File "/home/usr/LLaVA/llava/mm_utils.py", line 37, in process_images
return image_processor(images, return_tensors='pt')['pixel_values']
TypeError: 'NoneType' object is not callable
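The failing call can be reproduced in isolation: process_images calls the processor directly, so if model loading never attached a vision tower, image_processor is None and invoking it raises exactly this TypeError. A minimal standalone sketch (not using the real LLaVA classes):

```python
# Standalone sketch of the failing call in llava/mm_utils.py:
# process_images() invokes the processor directly, so a missing vision
# tower (image_processor is None) produces exactly this TypeError.

def process_images(images, image_processor):
    # Same shape as the real helper: the processor must be callable.
    return image_processor(images, return_tensors='pt')['pixel_values']

try:
    process_images([object()], image_processor=None)
except TypeError as err:
    print(err)  # 'NoneType' object is not callable
```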
I'm using the latest version and have installed all packages as per the instructions.
I would appreciate your assistance in identifying potential causes for the error. Thanks.
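In case it helps with triage, one way to check whether the merged checkpoint still knows about its vision tower is to inspect its config.json. This is a sketch under two assumptions: the checkpoint directory has a standard Hugging Face layout, and "mm_vision_tower" is the key LLaVA uses to record the CLIP tower (a missing key would leave the image processor uninitialized):

```python
import json
import pathlib

def has_vision_tower(checkpoint_dir: str) -> bool:
    """Return True if the checkpoint's config.json names a vision tower.

    Assumption: LLaVA records its CLIP tower under "mm_vision_tower";
    without that entry, the image processor is never built and CLI
    inference fails with the TypeError above.
    """
    config = json.loads(
        (pathlib.Path(checkpoint_dir) / "config.json").read_text()
    )
    return "mm_vision_tower" in config

# e.g. has_vision_tower("/home/usr/LLaVA/checkpoints")
```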