wgimperial opened this issue 2 months ago (status: Open)
If your local models expose an API endpoint that is compatible with any of the already integrated providers, you can use them as described here: https://docs.portkey.ai/docs/integrations/llms/byollm. Otherwise, if you have a custom endpoint, you can write your own integration within the gateway (we would love to have your contributions).
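For a concrete picture, here is a minimal sketch of that pattern against a self-hosted gateway. The gateway port (8787), the header names (`x-portkey-provider`, `x-portkey-custom-host`), and the local model URL are assumptions based on my reading of the docs, so double-check them against the link above:

```python
import requests

# Sketch: route a chat request through a locally running Portkey gateway
# to a local OpenAI-compatible server (e.g. one listening on localhost:11434).
# The gateway port, header names, and local model URL below are assumptions.
GATEWAY_URL = "http://localhost:8787/v1/chat/completions"

response = requests.post(
    GATEWAY_URL,
    headers={
        "Content-Type": "application/json",
        "x-portkey-provider": "openai",                         # treat the target as OpenAI-compatible
        "x-portkey-custom-host": "http://localhost:11434/v1",   # your local model server
    },
    json={
        "model": "my-local-model",  # whatever model name your local server expects
        "messages": [{"role": "user", "content": "Hello from the gateway!"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```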
So there is no way to route to the local model's API interface; I had to write my own code to integrate my model's API according to the project's provider architecture?
Hi @wgimperial, please let us know if the doc shared by @narengogi above is useful!
@AI3721 - you can absolutely route to your local model with our Bring Your Own LLM integration, as long as you're able to expose the model over an OpenAI-compatible API like /chat/completions.
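To illustrate what "OpenAI-compatible" means in practice, here is a hypothetical shim (the framework choice and all names are my own, not part of Portkey) that exposes a local model behind the /chat/completions shape:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Message(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    model: str
    messages: list[Message]

# Hypothetical shim: wrap your local model behind the OpenAI chat-completions
# shape so a gateway (or any OpenAI client) can talk to it.
@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    prompt = req.messages[-1].content
    reply = f"echo: {prompt}"  # replace with a call into your local model
    return {
        "id": "chatcmpl-local-1",
        "object": "chat.completion",
        "model": req.model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": reply},
                "finish_reason": "stop",
            }
        ],
    }
```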
So it can't be routed to an API other than that one; for example, the ComfyUI API can't be invoked through this integration?
Correct! If ComfyUI exposes endpoints like /images/generations (from here), though, you could absolutely route to it using Portkey.
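As a sketch of how that could look if such an endpoint existed in front of ComfyUI (everything here is hypothetical: the shim port, the gateway port, and the header names):

```python
import requests

# Hypothetical: a shim in front of ComfyUI exposes an OpenAI-style
# /v1/images/generations endpoint on localhost:9000; the gateway then
# routes to it like any other custom host.
response = requests.post(
    "http://localhost:8787/v1/images/generations",
    headers={
        "x-portkey-provider": "openai",
        "x-portkey-custom-host": "http://localhost:9000/v1",
    },
    json={"prompt": "a watercolor fox", "n": 1, "size": "512x512"},
    timeout=120,
)
print(response.json())
```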
I'll explore Portkey's integration with ComfyUI and see what's possible!
@AI3721 we also have the concept of "Gateway for other APIs" where you can just use Portkey as a proxy to call any target provider you want. Would that be useful?
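A very rough sketch of that proxy idea; the forwarding route and header used here are assumptions rather than the documented API, so please check the "Gateway for other APIs" docs for the real convention:

```python
import requests

# Rough sketch only: send a request to the gateway and have it forwarded
# to an arbitrary target you choose. The path and header below are
# assumptions for illustration, not Portkey's documented proxy API.
resp = requests.post(
    "http://localhost:8787/v1/prompt",                            # hypothetical path on the target
    headers={"x-portkey-custom-host": "http://localhost:8188"},   # e.g. a local ComfyUI server
    json={"prompt": {}},                                          # whatever payload the target expects
    timeout=120,
)
print(resp.status_code)
```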
@AI3721 If your local model has an OpenAI-compliant endpoint, you can use it without writing an integration. Here's an example with Ollama: https://docs.portkey.ai/docs/integrations/llms/ollama
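A minimal sketch based on that doc, assuming the portkey_ai Python SDK and its provider / custom_host parameters (names may vary by SDK version):

```python
from portkey_ai import Portkey

# Sketch following the Ollama integration doc linked above.
# Note: if you use the hosted gateway, the custom host must be reachable
# from it (the doc suggests exposing your local server via a tunnel).
client = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    provider="ollama",
    custom_host="http://localhost:11434",  # local Ollama server
)

completion = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(completion.choices[0].message.content)
```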
Okay, I see. I think I might have to write my own integration. Thank you very much for your answers!
What Happened?
I would like to use AI Gateway to manage a series of locally deployed models, but I'm not sure how to proceed.
What Should Have Happened?
No response
Relevant Code Snippet
No response
Your Twitter/LinkedIn
No response