Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com

[FEAT]: Feature Request 「please support InternLM2.5」 #2631

Closed: boshallen closed this issue 6 days ago

boshallen commented 1 week ago

What would you like to see?

Hi,

I noticed that the repository currently lacks support for the InternLM2.5 models (1.8B, 7B, and 20B), which can leave users without a clear path to running them. It would be beneficial to update the repository with detailed instructions or tooling for integrating InternLM2.5, so the content stays current.

I believe adding this support would significantly improve the project's usability. Some manual adjustments are possible today, but official guidance or toolchain support would be far more efficient, especially for new users.

If possible, example scripts or a demonstration of integration with InternLM2.5-7B would also be a valuable addition.

For further support, please add the InternLM Assistant (WeChat search: InternLM) or join the InternLM Discord (https://discord.com/invite/xa29JuW87d).

timothycarambat commented 6 days ago

You can download quantized GGUF models directly from HuggingFace with our built-in LLM provider as of 1.6.9.

Otherwise, you can also just use an LLM provider that supports InternLM, and it will work. We don't build LLM inference engines; we just connect to them.
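
As a minimal sketch of that second option, the snippet below smoke-tests an OpenAI-compatible endpoint serving an InternLM2.5 model before pointing AnythingLLM at it. The base URL, port, and model name are assumptions (they follow LMDeploy's `api_server` defaults); any provider exposing the standard `/v1/chat/completions` route should behave the same way.

```python
# Sketch: verify an OpenAI-compatible endpoint is serving InternLM2.5.
# Assumed setup (not part of AnythingLLM): an inference server such as
# LMDeploy's api_server running locally, e.g.
#   lmdeploy serve api_server internlm/internlm2_5-7b-chat
# Adjust BASE_URL and MODEL to match whatever provider you run.
import requests

BASE_URL = "http://localhost:23333/v1"   # assumed LMDeploy default port
MODEL = "internlm2_5-7b-chat"            # name your provider registers

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Reply with one short sentence."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this prints a completion, the same base URL and model name should work in AnythingLLM's generic OpenAI-compatible provider settings, since the app simply forwards chat requests to that endpoint.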