BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: flag for pii masking for just logging integrations #4580

Closed · krrishdholakia closed this 2 months ago

krrishdholakia commented 2 months ago

The Feature

Expose a flag in our guardrails config that applies PII masking only to the logging logic, not to the actual LLM API call (see the possible config below).

Motivation, pitch

"it would be nice to have the PII masked before ingesting into a long-term analytical store (e.g. langfuse)"

The existing answer for redacting the request/response before logging to Langfuse etc. is to set `turn_off_message_logging`. That would presumably disable the whole input/output message; I just want "sensitive" data disabled. I largely want the inputs/outputs logged.
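For context, here is a minimal SDK sketch of the current all-or-nothing behavior (assumes Langfuse and OpenAI credentials are set in the environment):

```python
import litellm

litellm.success_callback = ["langfuse"]  # ship logs to Langfuse
litellm.turn_off_message_logging = True  # ...but drop ALL message/response content

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "My email is jane@example.com"}],
)
```

Setting this flag removes message content from every logging callback, which is why the request above asks for a per-guardrail `logging_only` option instead.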

Twitter / LinkedIn details

@brian-tecton-ai

krrishdholakia commented 2 months ago

A user asked for this to be settable via the config.

krrishdholakia commented 2 months ago

Possible config

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: sk-xxxxxxx

litellm_settings:
  guardrails:
    - guardrail1:            # your custom name for the guardrail
        callbacks: [presidio, lakera_prompt_injection] # use the litellm presidio callback
        default_on: false    # by default this is off for all requests
        logging_only: true   # run only on logging, not the actual llm call
```
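Conceptually, `logging_only: true` means masking a copy of the messages inside the logging hook while the request sent to the provider stays untouched. Below is a minimal sketch of that idea using the SDK's `CustomLogger` class; the `mask_pii` helper is a hypothetical stand-in for a Presidio call, and this is not litellm's actual implementation:

```python
import re

import litellm
from litellm.integrations.custom_logger import CustomLogger

EMAIL = re.compile(r"[\w.+-]+@[\w.-]+")

def mask_pii(text: str) -> str:
    # Hypothetical stand-in for a real Presidio analyze/anonymize call.
    return EMAIL.sub("<EMAIL_ADDRESS>", text)

class LoggingOnlyMasker(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Mask a *copy* of the input messages (assumes string content);
        # the request already sent to the provider is unaffected.
        masked = [
            {**m, "content": mask_pii(m.get("content") or "")}
            for m in kwargs.get("messages", [])
        ]
        print("shipping to analytics store (masked):", masked)

litellm.callbacks = [LoggingOnlyMasker()]

litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "My email is jane@example.com"}],
)
```

The proposed proxy flag would wire the same idea into the presidio guardrail callback, so the masking cost is paid only on the logging path, not on the LLM call itself.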