🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version management, instant delivery, collaboration, troubleshooting, observability and more.
Hello, can this project support other LLM models? For example, Gemini (Vertex AI) or other third-party providers. Are there any plans to add this feature in the future? Thanks :)
Proposal
It would be great to be able to use non-OpenAI APIs. For example, in a real-world setting I could use Llama2 or even FlanT5, host it somewhere, expose an API endpoint, set up the Pezzo proxy to forward requests, and submit custom headers. Right now, similar functionality is implemented in LangChain using custom callbacks; that could be an approach as well.
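To make the idea concrete, here is a minimal sketch of what the "proxy plus custom headers" flow could look like from a client's perspective. Everything here is an assumption for illustration: the proxy URL, the header names (`X-Pezzo-Api-Key`, `X-Target-Model`), and the request body shape are hypothetical, not actual Pezzo settings.

```python
import json

# Hypothetical URL of a Pezzo proxy sitting in front of a
# self-hosted Llama2 / FlanT5 endpoint (assumption, not a real route).
PROXY_URL = "http://localhost:3000/api/proxy/v1/completions"


def build_proxy_request(prompt: str, api_key: str, model: str) -> dict:
    """Assemble a completion request that the proxy would forward
    to the self-hosted model, carrying custom headers."""
    return {
        "url": PROXY_URL,
        "method": "POST",
        "headers": {
            "Content-Type": "application/json",
            # Custom headers the proxy could use for auth/routing
            # (hypothetical header names):
            "X-Pezzo-Api-Key": api_key,
            "X-Target-Model": model,
        },
        "body": json.dumps({"prompt": prompt, "max_tokens": 128}),
    }


req = build_proxy_request("Hello!", "my-key", "llama2-7b-chat")
print(req["headers"]["X-Target-Model"])  # the model the proxy should route to
```

The point is only that the client never talks to OpenAI directly: the proxy reads the custom headers, routes the request to whichever hosted model they name, and can record observability data on the way through.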
Use-Case
No response
Is this a feature you are interested in implementing yourself?
Maybe