EmbeddedLLM / vllm-rocm

vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
https://vllm.readthedocs.io
Apache License 2.0

Support image processing for VLMs and GPT-4V Chat Completions API #24

Closed · DarkLight1337 closed this 2 months ago

DarkLight1337 commented 2 months ago

This issue facilitates early testing of PR vllm-project/vllm#3978; refer to the linked PR for more details.
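
For context, a minimal sketch of what a GPT-4V-style Chat Completions request with image input looks like against a vLLM OpenAI-compatible server. The server URL, API key, and model name below are placeholders, not values from this issue, and the exact payload accepted by the PR under test may differ:

```python
# Minimal sketch: GPT-4V-style Chat Completions request with an image,
# sent to a vLLM OpenAI-compatible server. The base_url, api_key, and
# model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM server
    api_key="EMPTY",                      # vLLM's server does not require a real key by default
)

response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # example VLM; substitute whichever model the server hosts
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/image.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The message `content` list mixing `text` and `image_url` parts follows the GPT-4V Chat Completions convention that the issue title references.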