Open Jiushanhuadao opened 1 year ago
As required in the instructions at https://github.com/salesforce/LAVIS/tree/main/projects/instructblip: "Please first follow the instructions to prepare Vicuna v1.1 weights." So you need to download the v1.1 model from lmsys/vicuna-7b-v1.1, not the latest v1.5.
Can we use lmsys/vicuna-7b-v1.3?
I only have a 16GB graphics card, so I ran it on the CPU. My code is:
```python
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = "cpu"
raw_image = Image.open("./img/art.png").convert("RGB")
raw_image = raw_image.resize((512, 512))
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_vicuna_instruct", model_type="vicuna7b", is_eval=True, device=device
)
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
res = model.generate(
    {"image": image, "prompt": "Describe the image."},
    use_nucleus_sampling=True, top_p=0.9, temperature=1,
    num_beams=1, max_length=30, min_length=1,
)
print(res)
```
I cloned your repository and modified lavis/configs/models/blip2/blip2_instruct_vicuna7b.yaml: I changed llm_model to my own local path "./llm/vicuna-7b", which contains the weights I downloaded from https://huggingface.co/lmsys/vicuna-7b-v1.5/tree/main, and used https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/InstructBLIP/instruct_blip_vicuna7b_trimmed.pth as the pretrained checkpoint.
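For reference, the edited field might look like this (the key name and layout are assumed from the description above; the rest of the file is unchanged):

```yaml
# blip2_instruct_vicuna7b.yaml (excerpt, assumed layout)
model:
  llm_model: "./llm/vicuna-7b"  # local path to the downloaded Vicuna weights
```

Note that per the instructions quoted above, this directory should contain Vicuna v1.1 weights, not v1.5.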
I got output like ['OOOOOOOOOOOOOOOOPAOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO'].
I think I downloaded the wrong model, or my path is wrong.
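Since the symptom points at either an incomplete/wrong checkpoint or a bad llm_model path, a quick sanity check on the directory can rule out the path issue first. This is just a sketch: the helper name is mine, and the expected file names assume the standard Hugging Face layout of a Vicuna checkout.

```python
from pathlib import Path

def missing_checkpoint_files(path):
    """Return the expected Vicuna checkpoint files missing from `path`.

    `path` is the directory pointed to by llm_model in the yaml
    (here "./llm/vicuna-7b"). File names follow the usual Hugging Face
    layout; adjust the list if your checkout differs.
    """
    required = ["config.json", "tokenizer.model", "tokenizer_config.json"]
    p = Path(path)
    return [f for f in required if not (p / f).exists()]

missing = missing_checkpoint_files("./llm/vicuna-7b")
if missing:
    print("incomplete checkpoint, missing:", missing)
```

If nothing is missing, the path is fine and the garbage output is more likely the v1.5-vs-v1.1 weight mismatch.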