vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Issue Running LLaVA with vLLM Due to Tensor Size Mismatch #4421

Closed OualidBougzime closed 4 months ago

OualidBougzime commented 6 months ago

Your current environment

The output of `python collect_env.py`

🐛 Describe the bug

I'm attempting to integrate LLaVA with vLLM for image processing, but I'm encountering a tensor size mismatch error when executing my script.

Setup: I installed vLLM along with other required packages using the following command: !pip install vllm==0.4.1 kaleido python-multipart torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1

Code: Here's the script I used to run LLaVA:

import torch
from vllm import LLM
from vllm.sequence import MultiModalData

def run_llava_pixel_values_debug():

    llm = LLM(
        model="llava-hf/llava-1.5-7b-hf",
        enforce_eager=True,
        tensor_parallel_size=1,
        image_input_type="pixel_values",
        image_token_id=32000,
        image_input_shape=f"1,3,224,224",
        image_feature_size=576,
    )

    prompt = "<image>" * 576 + (
        "\nUSER: What is the content of this image?\nASSISTANT:")

    # Load a smaller or test image file if available, or adjust the existing one to match the test size
    with open("3d-background-with-hexagonal-shapes-texture_23-2150473185.jpg", "rb") as f:
        image_file = f.read()

    outputs = llm.generate(prompt,
                           multi_modal_data=MultiModalData(
                               type=MultiModalData.Type.IMAGE, data=image_file))

    for o in outputs:
        generated_text = o.outputs[0].text
        print(generated_text)

run_llava_pixel_values_debug()

Error: Upon running this script, I receive the following error: RuntimeError: The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1.

Could anyone assist in identifying the source of this issue and suggest how I might correct the tensor size mismatch? Any help or suggestions would be greatly appreciated.

DarkLight1337 commented 6 months ago

The image should have size 1,3,336,336.
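(For reference, the 257-vs-577 mismatch comes from the vision encoder: LLaVA-1.5 uses a CLIP ViT-L/14 tower at 336×336, which produces (336/14)² = 576 patch embeddings plus one class token = 577, whereas a 224×224 input produces (224/14)² + 1 = 257.)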

OualidBougzime commented 6 months ago

> The image should have size 1,3,336,336.

I have the same error even if I change the size.

DarkLight1337 commented 6 months ago

> The image should have size 1,3,336,336.

> I have the same error even if I change the size.

You should ensure that the image actually passed to the model also has this size (i.e. not only change the config). In the future, we will add image preprocessing to vLLM so that this step is no longer necessary.

OualidBougzime commented 6 months ago

> The image should have size 1,3,336,336.

> I have the same error even if I change the size.

> You should ensure that the image actually passed to the model also has this size (i.e. not only change the config).

Yes, the images have the same size but I get the same error.

DarkLight1337 commented 6 months ago

> Yes, the images have the same size but I get the same error.

To better pinpoint the issue, can you show the stack trace of the error?

OualidBougzime commented 6 months ago

> Yes, the images have the same size but I get the same error.

> To better pinpoint the issue, can you show the stack trace of the error?

It's working now. I'm using the following code, which can handle any image format:

import torch
from vllm import LLM
from vllm.sequence import MultiModalData
import torchvision.transforms as transforms
from PIL import Image
import io

def run_llava_pixel_values_debug():

    llm = LLM(
        model="llava-hf/llava-1.5-7b-hf",
        enforce_eager=True,
        tensor_parallel_size=1,
        image_input_type="pixel_values",
        image_token_id=32000,
        image_input_shape=f"1,3,336,336",
        image_feature_size=576,
    )

    prompt = "<image>" * 576 + (
        "\nUSER: What is the content of this image?\nASSISTANT:")

    # Read the image file from disk
    with open("3d-background-with-hexagonal-shapes-texture.jpg", "rb") as f:
        image_file = f.read()

    # Convert the raw bytes to a 3-channel PIL Image (handles grayscale/RGBA inputs)
    image = Image.open(io.BytesIO(image_file)).convert("RGB")

    # Define a transformation to tensor
    transform = transforms.Compose([
        transforms.Resize((336, 336)),
        transforms.ToTensor(),
    ])

    tensor_image = transform(image).unsqueeze(0)  # Add batch dimension

    outputs = llm.generate(prompt,
                           multi_modal_data=MultiModalData(
                               type=MultiModalData.Type.IMAGE, data=tensor_image))

    for o in outputs:
        generated_text = o.outputs[0].text
        print(generated_text)

run_llava_pixel_values_debug()
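Note that this transform only resizes and rescales the image; the Hugging Face LLaVA image processor also normalizes channels with CLIP's mean/std, so results may differ slightly. A minimal variant of the transform with normalization added (assuming the standard OpenAI CLIP constants; verify against the model's preprocessor config):

import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize((336, 336)),
    transforms.ToTensor(),
    # Assumed OpenAI CLIP normalization constants; check the model's
    # preprocessor_config.json if in doubt.
    transforms.Normalize(mean=(0.48145466, 0.4578275, 0.40821073),
                         std=(0.26862954, 0.26130258, 0.27577711)),
])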

But I have a question: is LLaVA 1.6 supported by vLLM yet?

DarkLight1337 commented 6 months ago

It's not supported yet. We are working on it though!

OualidBougzime commented 6 months ago

> It's not supported yet. We are working on it though!

Thank you for the information! I have one other question: how can I specify the number of tokens to generate and the temperature with vLLM in this code?

DarkLight1337 commented 6 months ago

You can pass SamplingParams to LLM.generate.
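For example, a minimal sketch against the script above (the temperature and max_tokens values here are illustrative):

from vllm import SamplingParams

sampling_params = SamplingParams(
    temperature=0.7,  # sampling temperature
    max_tokens=256,   # maximum number of new tokens to generate
)

outputs = llm.generate(prompt,
                       sampling_params=sampling_params,
                       multi_modal_data=MultiModalData(
                           type=MultiModalData.Type.IMAGE, data=tensor_image))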

DarkLight1337 commented 6 months ago

Btw, if you absolutely must use LLaVA-1.6, I have a fork in #4199 which adds experimental support for it.