vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Feature]: MultiModal LLM with vector API #6604

Closed · qZhang88 closed this 3 weeks ago

qZhang88 commented 1 month ago

🚀 The feature, motivation and pitch

Consider a scenario where a large model is deployed in the cloud, and the application is deployed on a computationally limited embedded device.

If we want to support multimodal dialogue with vision and language, each request would need to send an image (and, given the dialogue history, possibly many images). Because of network bandwidth and other constraints, this introduces significant latency.

Therefore, if the VLM's image encoder and projector were deployed on the embedded device, and the encoded vector were sent with each request instead of the raw image, the data transmission volume would be much smaller. This would reduce latency and improve the user experience.
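As a rough client-side illustration, here is a minimal sketch (not part of vLLM; the checkpoint name is only an example and the projector step is noted in a comment) of producing such a vector on the device with a HuggingFace vision tower and packing it for the request:

import numpy as np
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

# Run the VLM's vision tower on the embedded device
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")

image = Image.open("frame.jpg")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    # Patch features from the vision tower; the VLM's projector (e.g. a LLaVA-style MLP
    # shipped with the checkpoint) would map these into the LLM's hidden size.
    patch_features = vision_tower(pixel_values).last_hidden_state  # (1, num_patches + 1, hidden)

# Ship this array (plus shape/dtype metadata) in the request instead of the raw image bytes
vector = patch_features.squeeze(0).cpu().numpy().astype(np.float16)
payload = vector.tobytes()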

Alternatives

The suggested usage would be as follows:

import numpy as np

# llm is an already-constructed vllm.LLM instance for the target model

# Refer to the HuggingFace repo for the correct prompt format to use
prompt = "USER: <vector>\nWhat is the content of this image?\nASSISTANT:"

# Image embedding vector produced by the on-device encoder (placeholder values)
vector = np.array([x, x, x, x])

# Single prompt inference
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"vector": vector},
})

With this usage, deploying only the language model would be enough to support multi-modal applications, and the approach would not be limited to any particular modality.
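For context on what the server would do with the received vector, here is a conceptual sketch (plain PyTorch, not vLLM internals; the function and parameter names are illustrative) of splicing the client-supplied embeddings into the token-embedding sequence at the <vector> placeholder, so the deployed LLM never needs its own image encoder:

import torch

def splice_vector(
    input_ids: torch.Tensor,     # (seq_len,) prompt token ids
    token_embeds: torch.Tensor,  # (seq_len, hidden) from the LLM's embedding table
    vector: torch.Tensor,        # (num_patches, hidden) embeddings sent by the client
    placeholder_id: int,         # token id of the <vector> placeholder
) -> torch.Tensor:
    # Replace the single placeholder token with the client-supplied embeddings,
    # similar to how LLaVA-style models merge projected image features into the prompt.
    pos = int((input_ids == placeholder_id).nonzero(as_tuple=True)[0])
    return torch.cat([token_embeds[:pos], vector, token_embeds[pos + 1:]], dim=0)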

Additional context

No response

ywang96 commented 1 month ago

Hey @qZhang88, thanks for the issue! Supporting image embeddings as inputs is indeed on our Q3 roadmap, which you can track in #4194.