mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

Add the new multi-modal model from Mistral AI: pixtral-12b #3535

Open · SuperPat45 opened this issue 2 months ago

SuperPat45 commented 2 months ago

Add the new multi-modal model from Mistral AI: pixtral-12b:

https://huggingface.co/mistral-community/pixtral-12b-240910
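
For context, since LocalAI exposes an OpenAI-compatible API, Pixtral support would let clients send standard vision chat requests. A minimal sketch with the OpenAI Python client, assuming a hypothetical `pixtral-12b` model name and a LocalAI instance on its default port:

```python
# Sketch of an OpenAI-style vision request against LocalAI.
# The model name "pixtral-12b" and the endpoint are assumptions;
# adjust them to match your LocalAI deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="pixtral-12b",  # hypothetical name once Pixtral support lands
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```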

AlexM4H commented 2 months ago

Since yesterday, vLLM has InternVL2 support. :-)

https://github.com/vllm-project/vllm/releases/tag/v0.6.1

mudler commented 2 months ago

I guess that would already work with llama.cpp GGUF models if/when it gets supported there (see also https://github.com/ggerganov/llama.cpp/issues/9440).

I'd change the focus of this one to be more generic and add multimodal support with vLLM. Examples:

https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_pixtral.py
https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language_multi_image.py
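
For reference, the linked Pixtral example boils down to something like the sketch below. This is a condensed version; the exact model name, tokenizer mode, and sampling defaults come from the upstream example and may differ across vLLM versions:

```python
# Condensed from vLLM's offline_inference_pixtral.py example;
# details may vary between vLLM versions.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Pixtral-12B-2409", tokenizer_mode="mistral")
sampling_params = SamplingParams(max_tokens=8192)

# OpenAI-style chat messages mixing text and an image URL.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/photo.jpg"},
            },
        ],
    }
]

outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```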

AlexM4H commented 1 month ago

vLLM already has Llama 3.2 support: https://github.com/vllm-project/vllm/pull/8811

Georgi wrote two weeks ago: "Not much has changed since the issue was created. We need contributions to improve the existing vision code and people to maintain it. There is interest to reintroduce full multimodal support, but there are other things with higher priority that are currently worked upon by the core maintainers of the project." (https://github.com/ggerganov/llama.cpp/issues/8010#issuecomment-2345831496)

mudler commented 1 month ago

See also: https://github.com/ggerganov/llama.cpp/issues/9455

AlexM4H commented 1 month ago

BTW: "(Coming very soon) 11B and 90B Vision models

11B and 90B models support image reasoning use cases, such as document-level understanding including charts and graphs and captioning of images."

(https://ollama.com/blog/llama3.2)

mudler commented 1 month ago

BTW: "(Coming very soon) 11B and 90B Vision models

11B and 90B models support image reasoning use cases, such as document-level understanding including charts and graphs and captioning of images."

(https://ollama.com/blog/llama3.2)

That would be interesting to see, given upstream (llama.cpp) is still working on it: https://github.com/ggerganov/llama.cpp/issues/9643

AlexM4H commented 1 month ago

It seems they are working on that independently: https://github.com/ollama/ollama/pull/6963

mudler commented 1 month ago

> It seems they are working on that independently: ollama/ollama#6963

That looks like only the Golang side of things to fit the images. The real backend changes seem to be in https://github.com/ollama/ollama/pull/6965

AlexM4H commented 1 month ago

> > It seems they are working on that independently: ollama/ollama#6963
>
> That looks like only the Golang side of things to fit the images. The real backend changes seem to be in ollama/ollama#6965

Oh, yes. Wrong link.