llava-rlhf / LLaVA-RLHF

Aligning LMMs with Factually Augmented RLHF
https://llava-rlhf.github.io/
GNU General Public License v3.0

Model testing #26

Closed · ernestoBocini closed 6 months ago

ernestoBocini commented 8 months ago

I'm trying to load the model using this code:

```python
import torch
from peft import PeftModel

from llava.model import LlavaLlamaForCausalLM
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model
from llava.utils import disable_torch_init  # needed for the call below

model_path = "LOCAL_PATH/LLaVA-RLHF/sft_model"
lora_path = "LOCAL_PATH/LLaVA-RLHF/rlhf_lora_adapter_model"
model_name = "LLaVA-RLHF-7b-v1.5-224"

disable_torch_init()

load_bf16 = True  # (unused in this snippet)

# Load the SFT base model
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=model_name,
)

# Attach the RLHF LoRA adapter on top of the SFT base model
model = PeftModel.from_pretrained(
    model,
    lora_path,
)
```

This seems to work fine, apart from some warnings that shouldn't be problematic. However, when I try to test it with:

```python
prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": model_name,  # get_model_name_from_path(model_path)
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```

it breaks at the line `tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)` inside `eval_model`. Could you kindly provide a demo for this? Thank you!

Edward-Sun commented 7 months ago

Could you please try a local file? For example, download the image at https://llava-vl.github.io/static/images/view.jpg and feed its absolute path into the model.
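
For reference, a minimal sketch of that suggestion, reusing the variables (`model_path`, `model_name`, `prompt`, `eval_model`) from the snippet above; the local filename is an arbitrary choice, not something prescribed by the repo:

```python
# Download the image first, then pass its absolute local path to
# eval_model instead of the URL.
import os
import urllib.request

url = "https://llava-vl.github.io/static/images/view.jpg"
local_path = os.path.abspath("view.jpg")  # arbitrary local filename
urllib.request.urlretrieve(url, local_path)

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": model_name,
    "query": prompt,
    "conv_mode": None,
    "image_file": local_path,  # absolute local path instead of the URL
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```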