Open chymian opened 2 months ago
Thanks for raising @chymian this is a high priority ticket at the moment. We'll keep you posted.
llama2 on ollama works, but it's a little bit dumb (it can't pick the right tool). Working on improving this and adding support for many more models.
@charl3sj priority list for models to implement:
- [ ] vertex https://python.langchain.com/docs/integrations/chat/google_vertex_ai_palm/
- [x] groq https://python.langchain.com/docs/integrations/chat/groq/
- [x] anthropic https://python.langchain.com/docs/integrations/chat/anthropic/
- [ ] huggingface https://python.langchain.com/docs/integrations/chat/huggingface/
- [x] cohere https://python.langchain.com/docs/integrations/chat/cohere/
- [ ] mistral https://python.langchain.com/docs/integrations/chat/mistralai/
Did you have a look at litellm.ai? One solution to rule them all ;)
Is your feature request related to a problem? Please describe. Local LLMs are not sufficiently supported.
Describe the solution you'd like Since most of the local loaders, like ollama, ooba's, etc., support the OpenAI API >= 1.0, we would just need a BASE_URL field to fill in our local API path. The LLMstudio entry (which is not open source) had such a field, but it's gone.
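To illustrate the request: a minimal sketch of what a BASE_URL field would do, assuming the usual OpenAI-style route layout where the chat endpoint lives at `<base_url>/chat/completions` (the `build_chat_request` helper, the port, and the model name are illustrative assumptions, not part of any existing codebase):

```python
# Sketch: any OpenAI-compatible local server (ollama, ooba's, etc.) can be
# reached by swapping the base URL in front of the standard
# /chat/completions path. No network call is made here; we only compose
# the request, which is the part a BASE_URL config field would control.
import json
from urllib.parse import urljoin

def build_chat_request(base_url: str, model: str, prompt: str):
    """Compose the URL and JSON body for an OpenAI-style chat completion."""
    url = urljoin(base_url.rstrip("/") + "/", "chat/completions")
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

# Example: ollama typically serves its OpenAI-compatible API under /v1.
url, body = build_chat_request("http://localhost:11434/v1", "llama2", "hello")
print(url)  # http://localhost:11434/v1/chat/completions
```

With a field like this, pointing the app at a different local loader is just a matter of changing the base URL; the rest of the request shape stays identical.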
Describe alternatives you've considered Build in litellm.ai as an access layer to 100+ LLMs.
Additional context