BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: Even setting local debugging to false, LiteLLM is still logging post requests #3093

Open josh-ashkinaze opened 5 months ago

josh-ashkinaze commented 5 months ago

What happened?

I set litellm.set_verbose = False, and before calling completion I set up a log file to monitor other activity:

logging.basicConfig(filename=f"{os.path.splitext(os.path.basename(__file__))[0]}.log", level=logging.INFO, format='%(asctime)s: %(message)s', filemode='w', datefmt='%Y-%m-%d %H:%M:%S')

But whenever I run a completion, it still writes POST requests to my log file even though I set verbose to False. For example, I see output such as:

curl -X POST \
https://api.openai.com/v1/ \
-d '{'model': 'gpt-4-0613', 'messages': [{'content': 'INSTRUCTIONS\nblah', 'role': 'user'}], 'extra_body': {}}'


2024-04-17 12:45:38: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
2024-04-17 12:45:38: Wrapper: Completed Call, calling success_handler
2024-04-17 12:45:38: 

Relevant log output

No response

Twitter / LinkedIn details

No response

krrishdholakia commented 5 months ago

@josh-ashkinaze this looks like it's coming from the logging library, not from LiteLLM's verbose output.

You likely have logging enabled at the INFO level.

Try running it with this, and let me know if it still shows up:

import logging

# Disable all logging messages
logging.disable(logging.CRITICAL)