Closed msabramo closed 6 days ago
The latest updates on your projects. Learn more about Vercel for Git ↗️
| Name | Status | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Jun 22, 2024 5:11am |
hey @msabramo
can you update the doc to show this - https://docs.litellm.ai/docs/proxy/logging#redacting-messages-response-content-from-langfuse-logging
thanks!
@msabramo would welcome a PR to enable passing this via the openai client OR as query params in the api key
Reason: make it easy for someone to enable/disable this with the openai sdk
```
api_key="sk-1234?disable_message_redaction=true"
```
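A minimal sketch of how the proxy side could split such a flag out of the key string (the helper name `split_api_key` is hypothetical; litellm's actual key parsing may differ):

```python
from urllib.parse import parse_qs


def split_api_key(raw_key: str):
    """Split 'sk-1234?disable_message_redaction=true' into (key, params).

    Hypothetical helper, not litellm's actual parser: everything before
    the first '?' is the key, the rest is parsed as a query string.
    """
    key, _, query = raw_key.partition("?")
    params = {k: v[0] for k, v in parse_qs(query).items()}
    return key, params


key, params = split_api_key("sk-1234?disable_message_redaction=true")
# key -> "sk-1234", params -> {"disable_message_redaction": "true"}
```

The proxy could then validate `key` as usual and treat `params` as per-request overrides.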
```python
import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={  # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params
        "metadata": {  # 👈 use for logging additional params
            "disable_message_redaction": True
        }
    }
)

print(response)
```
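On the receiving side, a logging callback would need to check that metadata flag before redacting. A sketch of what that check might look like (hypothetical helper `should_redact`, not litellm's actual internals; assumes redaction is on by default):

```python
def should_redact(metadata: dict, default_redact: bool = True) -> bool:
    """Return whether message/response content should be redacted
    for this request.

    Hypothetical helper: a truthy per-request
    'disable_message_redaction' flag overrides the global default.
    Accepts bool or string values since the flag may arrive as JSON
    metadata or as a query-param string.
    """
    flag = str(metadata.get("disable_message_redaction", "")).lower()
    if flag in ("true", "1", "yes"):
        return False
    return default_redact
```

With this shape, `should_redact({"disable_message_redaction": True})` is `False` while an empty metadata dict falls back to the global setting.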
Thinking aloud, the key/query params might be easier to use when calling the proxy via some other client library (e.g. langchain, continue dev, etc.)
Thanks! FYI Iโm OOO until July 8. But for when I am back, are there precedents for these 2 ideas? It would be nice to follow existing precedents and borrow code for them if possible.
## Title

## Relevant issues

## Type

🆕 New Feature / 🐛 Bug Fix / 🧹 Refactoring / 📖 Documentation / 🚄 Infrastructure / ✅ Test

## Changes

## [REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes