BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

Disable message redaction in logs via request header #4352

Closed · msabramo closed this 6 days ago

msabramo commented 1 week ago

Title

Relevant issues

Type

🆕 New Feature 🐛 Bug Fix 🧹 Refactoring 📖 Documentation 🚄 Infrastructure ✅ Test

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If UI changes, send a screenshot/GIF of working UI fixes

krrishdholakia commented 1 week ago

hey @msabramo

can you update the doc to show this - https://docs.litellm.ai/docs/proxy/logging#redacting-messages-response-content-from-langfuse-logging

msabramo commented 1 week ago

> can you update the doc to show this - https://docs.litellm.ai/docs/proxy/logging#redacting-messages-response-content-from-langfuse-logging

Updated here: https://github.com/msabramo/litellm/blob/msabramo/turn-on-message-logging-via-request-header/docs/my-website/docs/proxy/logging.md#redacting-messages-response-content-from-langfuse-logging
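
For reference, here's a sketch of how a client can opt out per request once this lands, using the openai SDK's per-request extra_headers; the header name is taken from the docs section linked above, so treat it as an assumption:

import openai

client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# ask the proxy to skip message redaction for this request only
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "this is a test request, write a short poem"}
    ],
    extra_headers={
        # header name per the docs section linked above (assumed here)
        "LiteLLM-Disable-Message-Redaction": "true"
    }
)

print(response)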

krrishdholakia commented 6 days ago

thanks!

@msabramo would welcome a PR to enable passing this via the openai client OR as query params in the api key

Reason: make it easy for someone to enable/disable this with the openai sdk

VIA 'api key'

api_key="sk-1234?disable_message_redaction=true"

VIA 'extra_body'

import openai
client = openai.OpenAI(
    api_key="anything",
    base_url="http://0.0.0.0:4000"
)

# request sent to model set on litellm proxy, `litellm --model`
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages = [
        {
            "role": "user",
            "content": "this is a test request, write a short poem"
        }
    ],
    extra_body={ # pass in any provider-specific param, if not supported by openai, https://docs.litellm.ai/docs/completion/input#provider-specific-params
        "metadata": { # ๐Ÿ‘ˆ use for logging additional params
            "disable_message_redaction":  true 
        }
    }
)

print(response)

Thinking aloud, the key/query params might be easier to use when calling the proxy via some other client library (e.g. langchain, continue dev, etc.)
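
For example, if the query-param scheme were implemented, a langchain caller could opt out without touching request bodies. A sketch, assuming the langchain-openai package and the proposed key format:

from langchain_openai import ChatOpenAI

# assumes the proposed "?param=value" key format were supported by the proxy
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    api_key="sk-1234?disable_message_redaction=true",
    base_url="http://0.0.0.0:4000"
)

print(llm.invoke("this is a test request, write a short poem"))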

msabramo commented 5 days ago

Thanks! FYI I'm OOO until July 8. But for when I am back, are there precedents for these two ideas? It would be nice to follow existing precedents and borrow code for them if possible.