-
It should be mentioned in the README that an OpenAI API key is mandatory when using a self-hosted proxy and that it needs to be configured in the secret manager. Additionally, it would also be nice to give …
-
An OpenAI API key and proxy require a subscription.
Is there any alternative you can suggest instead of OpenAI?
-
### Describe the bug
I am using LiteLLM proxy server with custom configured model names and costs. In Langfuse, I utilize normal OpenAI integration like so:
```python
import os
import httpx
from …
-
### What happened?
When using the moderations endpoint from OpenAI, LiteLLM seems to send the prefix "openai/" to the OpenAI API
```javascript
const response = await openai.moderations.create({
input:…
-
I want to customize the OpenAI api_base in order to use a proxy or other sites.
I would very much like to know if you could help implement this feature. Thank you.
-
Description
Hey there 👋 ! I am trying to integrate a model-routing solution into Braintrust Proxy, and I encountered a problem: Perplexity and Mistral models are not working.
Steps to reproduc…
-
### **1) Screenshots:**
**Failed to fetch error:**
---
**Command Worked in terminal in Cursor:**
---
### **2) System Info**
Device:
`macOS 14.5 (23F79), Apple Silicon`
Curs…
-
It would be nice to allow users to specify either a LiteLLM Proxy instance or their own Azure OpenAI instance, in addition to supporting direct OpenAI and AWS Claude.
-
**Is your feature request related to a problem? Please describe.**
Currently this integration does not support Azure OpenAI, which works slightly differently from a standard OpenAI custom server.
…
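The difference is mainly in how requests are addressed and authenticated: Azure OpenAI routes by deployment name with a required `api-version` query parameter and an `api-key` header, while standard OpenAI uses one fixed path, the model in the request body, and a Bearer token. A minimal sketch of the two URL shapes (the endpoint and deployment names are hypothetical):

```python
def openai_url(base: str = "https://api.openai.com/v1") -> str:
    """Standard OpenAI: one fixed path; the model is named in the JSON body,
    and auth uses an 'Authorization: Bearer <key>' header."""
    return f"{base.rstrip('/')}/chat/completions"

def azure_openai_url(endpoint: str, deployment: str,
                     api_version: str = "2024-02-01") -> str:
    """Azure OpenAI: the deployment name is part of the path, the API version
    is a required query parameter, and auth uses an 'api-key' header."""
    return (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")
```

This is why a generic "custom base URL" option is not enough for Azure: the integration also has to inject the deployment segment, the `api-version` parameter, and the different auth header.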
-
Hello,
Thank you for creating [openai-server.py](https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/apps/openai_server.py). It has been very helpful in avoiding the need to use vLLM or other O…