Open SuperPat45 opened 2 months ago
Since yesterday, vLLM has InternVL2 support. :-)
I guess that would already work with llama.cpp GGUF models if/when it gets supported there (see also https://github.com/ggerganov/llama.cpp/issues/9440).
I'd change the focus of this one to be more generic and add support for multimodal with vLLM. Examples:
https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_pixtral.py
https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language_multi_image.py
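Those vLLM examples feed the model OpenAI-style chat messages that mix text and image parts. A minimal sketch of building such a payload (the helper name `build_multimodal_messages` and the example URLs are illustrative, not from vLLM itself):

```python
def build_multimodal_messages(prompt: str, image_urls: list[str]) -> list[dict]:
    """Build an OpenAI-style chat message mixing text and image parts,
    in the shape vLLM's multimodal chat examples use."""
    content = [{"type": "text", "text": prompt}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return [{"role": "user", "content": content}]

# Example payload with two images (placeholder URLs):
messages = build_multimodal_messages(
    "Describe the differences between these images.",
    ["https://example.com/a.jpg", "https://example.com/b.jpg"],
)
```

A messages list like this would then be handed to something like `llm.chat(messages, sampling_params=...)` in the offline-inference examples linked above.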
vLLM already has Llama 3.2 support: https://github.com/vllm-project/vllm/pull/8811
Georgi wrote two weeks ago: "Not much has changed since the issue was created. We need contributions to improve the existing vision code and people to maintain it. There is interest to reintroduce full multimodal support, but there are other things with higher priority that are currently worked upon by the core maintainers of the project." (https://github.com/ggerganov/llama.cpp/issues/8010#issuecomment-2345831496)
BTW: "(Coming very soon) 11B and 90B Vision models
11B and 90B models support image reasoning use cases, such as document-level understanding including charts and graphs and captioning of images."
That would be interesting to see, given upstream (llama.cpp) is still working on it: https://github.com/ggerganov/llama.cpp/issues/9643
It seems they are working on that independently: https://github.com/ollama/ollama/pull/6963
That covers only the Go side of things for fitting the images. The real backend changes seem to be in https://github.com/ollama/ollama/pull/6965
Oh, yes. Wrong link.
Add the new multimodal model from Mistral AI, Pixtral-12B:
https://huggingface.co/mistral-community/pixtral-12b-240910
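For reference, serving Pixtral with vLLM's OpenAI-compatible server looks roughly like this. A hedged sketch: it assumes the official `mistralai/Pixtral-12B-2409` checkpoint (not the transformers-format community repo linked above) and the `--tokenizer-mode mistral` flag from vLLM's Pixtral example; verify the exact flags against current vLLM docs.

```shell
# Sketch: serve Pixtral with vLLM's OpenAI-compatible server
# (model name and flags per vLLM's Pixtral example; confirm against current docs)
vllm serve mistralai/Pixtral-12B-2409 \
  --tokenizer-mode mistral \
  --limit-mm-per-prompt 'image=4'
```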