Closed by krrishdholakia 2 months ago
A user asked for this to be configurable via the proxy config.

Possible config:
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: sk-xxxxxxx

litellm_settings:
  guardrails:
    - guardrail1:                  # your custom name for the guardrail
        callbacks: [presidio, lakera_prompt_injection]  # use the litellm presidio callback
        default_on: false          # by default this is off for all requests
        logging_only: true         # run only on logging, not the actual LLM call
```
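For illustration, here is a minimal sketch of what `logging_only` semantics could look like, assuming litellm's `CustomLogger` hook interface; the `mask_pii` helper is a hypothetical stand-in for the masking the presidio guardrail would perform:

```python
# Sketch only: mask a *copy* of the request messages in the logging path,
# leaving the actual LLM API call untouched.
import copy
import re

from litellm.integrations.custom_logger import CustomLogger


def mask_pii(text: str) -> str:
    # Hypothetical stand-in for presidio masking: redact email-like strings.
    return re.sub(r"\S+@\S+", "<EMAIL>", text)


class LoggingOnlyMasker(CustomLogger):
    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # By the time this hook fires, the LLM call has already completed
        # with the original, unmasked messages.
        masked = copy.deepcopy(kwargs.get("messages") or [])
        for msg in masked:
            if isinstance(msg.get("content"), str):
                msg["content"] = mask_pii(msg["content"])
        # Ship the masked copy to the long-term store (e.g. langfuse).
        print("masked messages for logging:", masked)
```

Registering the handler would be something like `litellm.callbacks = [LoggingOnlyMasker()]`; the key design point is that masking happens on a deep copy in the logging path, so the provider still sees the original request.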
### The Feature
Expose a flag on the guardrails config for applying PII masking to just the logging logic, not the actual LLM API call.
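For context, the presidio callback referenced in the config above builds on Microsoft Presidio. A self-contained sketch of the masking step itself, which under this proposal would run only on the logging copy:

```python
# Standalone PII masking with Microsoft Presidio
# (pip install presidio-analyzer presidio-anonymizer;
#  the analyzer also needs a spaCy model, e.g. en_core_web_lg).
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "My name is Jane Doe and my email is jane@example.com"
# Detect PII entities, then replace each with a typed placeholder.
results = analyzer.analyze(text=text, language="en")
masked = anonymizer.anonymize(text=text, analyzer_results=results).text
print(masked)  # e.g. "My name is <PERSON> and my email is <EMAIL_ADDRESS>"
```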
### Motivation, pitch
"it would be nice to have the PII masked before ingesting into a long-term analytical store (e.g. langfuse)"
One suggested workaround was:

> to redact request/response before logging to langfuse, etc. just do this: `turn_off_message_logging`

That would presumably disable the whole input/output message. I just want "sensitive" data disabled; I largely want the inputs/outputs logged.
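For comparison, the existing flag is all-or-nothing: it drops message content from logging callbacks entirely rather than masking it. In the SDK that looks roughly like this, assuming current litellm behavior:

```python
import litellm

# Existing behavior: message content is omitted from logging callbacks
# entirely -- no inputs/outputs land in langfuse at all.
litellm.turn_off_message_logging = True
```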
### Twitter / LinkedIn details
@brian-tecton-ai