Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
Unfortunately we need to wrap the Assistants API, which currently lacks logging and token-usage insights. We added this in our fork, but would obviously like to get it into the upstream.
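A minimal sketch of the kind of wrapper we mean, assuming an OpenAI-style response with a `usage` field; all names here (`with_usage_logging`, `fake_run`) are hypothetical and not litellm's actual API:

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("assistants_usage")

def with_usage_logging(fn: Callable[..., dict]) -> Callable[..., dict]:
    """Log token usage from an assistants-style response.

    Assumes the wrapped call returns a dict with an OpenAI-style
    `usage` field: {"prompt_tokens": ..., "completion_tokens": ...,
    "total_tokens": ...}. This is a sketch, not litellm's real hook.
    """
    def wrapper(*args: Any, **kwargs: Any) -> dict:
        response = fn(*args, **kwargs)
        usage = response.get("usage") or {}
        logger.info(
            "prompt_tokens=%s completion_tokens=%s total_tokens=%s",
            usage.get("prompt_tokens"),
            usage.get("completion_tokens"),
            usage.get("total_tokens"),
        )
        return response
    return wrapper

# Hypothetical stand-in for an assistants-API run call.
@with_usage_logging
def fake_run(prompt: str) -> dict:
    return {
        "output": "ok",
        "usage": {"prompt_tokens": 12, "completion_tokens": 5, "total_tokens": 17},
    }

result = fake_run("hello")
```

In our fork the equivalent hook sits around the assistants calls themselves, so usage is captured without callers changing their code.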
The Feature
Motivation, pitch
Twitter / LinkedIn details
No response