UX-Decoder / LLaVA-Grounding


Where is the inference code? #7

Closed Ryosuke0104 closed 5 months ago

Ryosuke0104 commented 6 months ago

Thank you for sharing your amazing work and code.

I want to try your model for my research, and I found your demo on Gradio. However, I would like to run the model programmatically, like this (the snippet is from the original LLaVA repository: https://github.com/haotian-liu/LLaVA#:~:text=Quick%20Start%20With%20HuggingFace ):

```python
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

model_path = "liuhaotian/llava-v1.5-7b"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path)
)

prompt = "What are the things I should be cautious about when I visit here?"
image_file = "https://llava-vl.github.io/static/images/view.jpg"

args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512
})()

eval_model(args)
```
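(As a side note on the snippet above: the `type('Args', (), {...})()` idiom builds a throwaway namespace object whose attributes are read by `eval_model`. A minimal standalone illustration, with `types.SimpleNamespace` as the idiomatic plain-Python equivalent:)

```python
from types import SimpleNamespace

# type('Args', (), {...}) creates an anonymous class whose class attributes
# act as a lightweight namespace; the trailing () instantiates it.
args = type('Args', (), {"temperature": 0, "num_beams": 1, "top_p": None})()
print(args.temperature, args.num_beams)  # prints: 0 1

# types.SimpleNamespace gives an object with the same attribute access:
args2 = SimpleNamespace(temperature=0, num_beams=1, top_p=None)
print(args2.temperature == args.temperature)  # prints: True
```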

Is there any quick-start inference code available?

Thank you in advance.

5k5000 commented 6 months ago

The same request, please.

HaoZhang534 commented 5 months ago

Can you use the demo code for inference?