llava-rlhf / LLaVA-RLHF

Aligning LMMs with Factually Augmented RLHF
https://llava-rlhf.github.io/
GNU General Public License v3.0

how to use the model for testing #2

Closed. LiqiangJing closed this issue 1 year ago.

LiqiangJing commented 1 year ago

```python
from llava.model.builder import load_pretrained_model  # LLaVA-RLHF's fork of the LLaVA builder
from peft import PeftModel

load_bf16 = True
tokenizer, model, image_processor, context_len = load_pretrained_model(model_path, None, model_name, load_bf16=load_bf16)
model = PeftModel.from_pretrained(model, lora_path)  # apply the RLHF LoRA weights on top of the base model
```

I can load the model with the code above, but I don't know how to feed my data (an image plus a text question) into the model and get its output. Could you help me?

sIncerass commented 1 year ago

Hi @LiqiangJing, thanks for your interest in our work. After loading the model, you can follow model_vqa.py in the LLaVA repo to run inference on your own image-question pairs.
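For reference, here is a minimal sketch of that flow, modeled on model_vqa.py. The image path, question text, and conversation template name (`llava_v1`) are assumptions you should adapt to your checkpoint, and helper locations can differ slightly across LLaVA versions:

```python
import torch
from PIL import Image
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates
from llava.mm_utils import tokenizer_image_token

# Hypothetical inputs
image = Image.open("example.jpg").convert("RGB")
question = "What objects are on the table?"

# Build the LLaVA-style prompt around the image token
conv = conv_templates["llava_v1"].copy()  # assumed template; match your checkpoint
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\n" + question)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_image_token(
    prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt"
).unsqueeze(0).cuda()
image_tensor = image_processor.preprocess(image, return_tensors="pt")["pixel_values"][0]

with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        images=image_tensor.unsqueeze(0).to(dtype=torch.bfloat16, device="cuda"),
        do_sample=False,
        max_new_tokens=64,
    )

# Depending on the LLaVA version, output_ids may include the prompt tokens;
# if so, slice them off with output_ids[:, input_ids.shape[1]:] before decoding.
answer = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(answer)
```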

Please note that for Yes/No or multiple-choice QA questions, you may need to append the prompt `\nAnswer the question using a single word or phrase.` to your question.
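In the sketch above, that would look like the following (the question string is again hypothetical):

```python
question = "Is there a cat in the image?"
# Constrain the answer format for Yes/No or multiple-choice questions
question += "\nAnswer the question using a single word or phrase."
```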