vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Model]: Adding support for MiniCPM-Llama3-V-2_5 #5808

Open ssuncheol opened 3 weeks ago

ssuncheol commented 3 weeks ago

Please add support for MiniCPM-Llama3-V-2_5.
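
For reference, here is a minimal sketch of what invoking the model through vLLM's Python API might look like once supported. The HF repo id (openbmb/MiniCPM-Llama3-V-2_5), the prompt template, and the multi_modal_data interface are assumptions based on how vLLM handles other vision-language models, not a confirmed implementation:

```python
from PIL import Image
from vllm import LLM, SamplingParams

# Assumed HF repo id; trust_remote_code is needed because the model
# ships custom modeling code on the Hub.
llm = LLM(model="openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True)

image = Image.open("example.jpg")
sampling_params = SamplingParams(temperature=0.2, max_tokens=128)

# Assumed interface: images passed via multi_modal_data, following the
# pattern vLLM uses for other supported VLMs. The prompt template here
# is a placeholder; MiniCPM's actual chat format may differ.
outputs = llm.generate(
    {
        "prompt": "USER: <image>\nDescribe this image. ASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    sampling_params,
)
print(outputs[0].outputs[0].text)
```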

vipulgote1999 commented 2 weeks ago

+1

zylo117 commented 13 hours ago

+1