ggerganov / llama.cpp

LLM inference in C/C++

Support for InternVL #6803

Open chigkim opened 4 months ago

chigkim commented 4 months ago

The new InternVL-Chat-V1.5 just came out; the quality is really great, and the benchmark scores are pretty high too. Possibly the best open-source vision language model yet?

Can we get llama.cpp to support it? @cmp-nct, @cjpais, @danbev, @monatis, have any of you tried it?

Demo: https://internvl.opengvlab.com/

paryska99 commented 4 months ago

Would be great

cjpais commented 4 months ago

I am working on a few projects right now, but if I get a chance I will try to get support in (assuming it doesn't already work). I would also like to get moondream support in

2132660698 commented 4 months ago

+1

cjpais commented 4 months ago

fwiw moondream support was merged in #6899, haven't had a chance to look at/try internvl

sapere-aude-incipe commented 4 months ago

I would really like to get InternVL support in llama.cpp.

I have tested the demo extensively and it is really good, so much so that I feel like it is a game changer in many ways. But running it on consumer hardware is not possible right now.

As noted here: https://github.com/InternLM/lmdeploy/issues/1501#issuecomment-2078558853

Architecture: InternViT-6B-448px-V1-5 + MLP + InternLM2-Chat-20B. I am afraid it cannot fit into an A10 (24G) even though the LLM weights are quantized to 4 bits.

Is it possible to convert the weights to GGUF to allow for multi-GPU splitting, or for splitting layers between CPU RAM and VRAM (see the sketch below)? Adding support for InternVL 1.5 would also (probably) make it easier to support future versions when they come out.
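
For context on the memory math: a 20B LLM at roughly 4 bits per weight is about 10 GB, but InternViT-6B at fp16 adds another ~12 GB, so with the KV cache and activations a 24 GB A10 runs out. Partial offload is exactly what GGUF would buy here: llama.cpp lets you place only as many layers on the GPU as fit, and keeps the rest in CPU RAM. A minimal sketch against the llama.h C API follows; the model filename is hypothetical, since it assumes the InternLM2 half had already been converted to GGUF, which is not supported yet:

```cpp
// Minimal sketch: load a GGUF model with partial GPU offload.
// The filename below is hypothetical -- it assumes an InternVL
// language-model half that has already been converted to GGUF.
#include "llama.h"

#include <cstdio>

int main(int argc, char ** argv) {
    const char * path = argc > 1 ? argv[1] : "internlm2-chat-20b-q4_k_m.gguf";

    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // Offload only as many transformer layers as fit in VRAM;
    // the remaining layers stay in CPU RAM.
    mparams.n_gpu_layers = 24;
    // For multi-GPU setups, llama_model_params also exposes
    // split_mode and tensor_split (per-device proportions).

    llama_model * model = llama_load_model_from_file(path, mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model: %s\n", path);
        return 1;
    }

    // ... create a context, tokenize, decode, etc. ...

    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

The vision tower would still need its own conversion and inference path (the way clip.cpp handles LLaVA), which is presumably the bulk of the porting work.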

Single430 commented 3 months ago

@cjpais Hello, may I ask what the current progress on InternVL support is? We are looking forward to using it in llama.cpp.

cjpais commented 3 months ago

Hey I am quite busy with a few projects, it's on my list but just not very high priority at the moment. It's really only something I can do in my spare/free time

Single430 commented 3 months ago

Hey I am quite busy with a few projects, it's on my list but just not very high priority at the moment. It's really only something I can do in my spare/free time

Thank you for your reply. Thank you for your hard work. Looking forward to your future work.

chigkim commented 3 months ago

Which one would be better to focus on: CogVLM or InternVL?

I wish there were more resources/interest in vision language models among the llama.cpp community. llama.cpp is the only hope for running newer vision language models on Apple Silicon. Especially since the flash-attention Python library is not available for Apple Silicon, you can't even run inference using Torch with MPS support. :(

opisaac9001 commented 3 months ago

Which one would be better to focus on: CogVLM or InternVL?

I wish there were more resources/interest in vision language models among the llama.cpp community. llama.cpp is the only hope for running newer vision language models on Apple Silicon. Especially since the flash-attention Python library is not available for Apple Silicon, you can't even run inference using Torch with MPS support. :(

Please do InternVL. In my tests it works better than CogVLM, especially for stuff like receipts and documents.

fzzylogic commented 3 months ago

InternVL is quite good. Benchmarks, HF, Demo.

DoiiarX commented 2 months ago

How about now? Any updates?

James4Ever0 commented 2 months ago

upvote for this

fzzylogic commented 2 months ago

InternLM-XComposer-2.5-7b is out now, and having only tested the image capabilities, it seems great. HF, Demo.

KOG-Nisse commented 2 months ago

This would be great!

v3ss0n commented 2 months ago

Any status on this? This is currently the highest-performing vision LLM according to user tests on the LocalLLaMA subreddit.

suncloudsmoon commented 1 month ago

Any updates?

CNEA-lw commented 1 month ago

Hey I am quite busy with a few projects, it's on my list but just not very high priority at the moment. It's really only something I can do in my spare/free time

I tested the now-available InternVL2 model and it is indeed a great choice. I hope it can be given a higher priority. Thank you for your hard work.

goto-loop commented 1 month ago

InternVL2 would be great to have! It seems to be SOTA among open-source vision LLMs.

v3ss0n commented 1 month ago

Any thoughts on this? Since vision models vary a lot compared to LLMs, do the maintainers think llama.cpp should focus on supporting them? There are already a lot of LLMs coming out, and the core team is doing tremendous work on those. Does the core team feel VLMs should be supported outside of the llama.cpp project? Maybe an addon/extension architecture would be viable?

Backendmagier commented 1 month ago

This would be a gamechanger! @cjpais

cjpais commented 1 month ago

I'm sorry I don't know when I can do this, I have a huge backlog of projects I'm currently working on! I am very curious to try it but unfortunately it's not very high priority for me right now

nogifeet commented 3 weeks ago

InternVL2 would be great to have! It seems to be SOTA among open-source vision LLMs.

+1

v3ss0n commented 3 weeks ago

I think model builders should contribute their vision model work here.

felixslu commented 3 weeks ago

I think model builders should contribute their vision model work here.

In an ideal situation, it's the model builders' job! But sadly, their work may not focus on on-device inference, or they may have their own self-deploy server framework, such as LMDeploy.

So I really hope a llama.cpp contributor can support this model; it is really good!

ZhongQiyu commented 2 days ago

I think the devs could add their own branches to the llama.cpp repo or huggingface.co? Version 2.5 of InternVL has also been released. I can take a try at the conversion as a helper if needed.

v3ss0n commented 1 day ago

I think model builders should contribute their vision model work here.

In an ideal situation, it's the model builders' job! But sadly, their work may not focus on on-device inference, or they may have their own self-deploy server framework, such as LMDeploy.

If they want their models to be popular and widely used, that would be the case.

LMDeploy is full of buffer-overflow crashes; it is not recommended for any secure deployment.