Is your feature request related to a problem? Please describe.
Currently I am using Qwen2-VL, which is the best VLM for my project, and I hope llama-cpp-python can support it. I tried to build a server with llama.cpp directly, but the llama.cpp server does not allow loading the mm-proj file.
Describe the solution you'd like
Support the Qwen2-VL model so that it can be used like the other VLM models, as sketched below.
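For reference, this is roughly how existing VLMs such as LLaVA 1.5 are used in llama-cpp-python today: a chat handler loads the mm-proj (CLIP) weights and is passed to the `Llama` instance. A `Qwen2VLChatHandler` following the same pattern would be ideal; that handler name is hypothetical, and the file paths below are placeholders.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Existing pattern: the chat handler takes the mm-proj / CLIP weights,
# and the Llama instance takes the language model itself.
# A hypothetical Qwen2VLChatHandler could slot in the same way.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")  # placeholder path
llm = Llama(
    model_path="llava-v1.5-7b.Q4_K_M.gguf",  # placeholder path
    chat_handler=chat_handler,
    n_ctx=4096,  # larger context leaves room for image embeddings
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```

With a handler like this, Qwen2-VL would also work through the OpenAI-compatible server without further code changes, the same way the LLaVA chat formats do now.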