BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Nice-to-have] Can I have a log dashboard? #96

Closed by krrishdholakia 1 year ago

krrishdholakia commented 1 year ago

It'd be nice to have an easy way to look at my logs - just so I know things are working / not working. Doesn't need to be fancy. I think Segment also provides something like this.

Our Supabase integration already writes these logs to a DB, so would it be super hard to just view them?
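
A minimal sketch of reading those rows back with supabase-py; the `request_logs` table and `created_at` column are assumed names here, not the integration's actual schema:

```python
# Sketch only: table/column names are hypothetical, not LiteLLM's real schema.
from supabase import create_client

supabase = create_client("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY")

# Fetch the 20 most recent logged requests to eyeball what was sent/received.
rows = (
    supabase.table("request_logs")
    .select("*")
    .order("created_at", desc=True)
    .limit(20)
    .execute()
)
for row in rows.data:
    print(row)
```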

[Screenshot attached: 2023-08-11 at 7:04 AM]
krrishdholakia commented 1 year ago

Caveat: this feels like it could be a distraction from the core I/O / integration work.

ishaan-jaff commented 1 year ago

+1 I felt like I was operating slightly in the blind when using LiteLLM. I needed to actually know what inputs were getting sent. When I set top_p = 0.1, I needed to know LiteLLM actually sent it to VertexAI.

krrishdholakia commented 1 year ago

Why did the basic logging not work for you?

def logging_fn(model_call_dict):
    print(f"model_call_dict: {model_call_dict}")

completion(..., logger_fn=logging_fn)
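
For instance, to verify the top_p case above, a minimal runnable sketch (the model name and message are placeholders, not from the thread):

```python
# Sketch: print the params LiteLLM builds before the provider call.
from litellm import completion

def logging_fn(model_call_dict):
    # model_call_dict holds the request details LiteLLM is about to send.
    print(f"model_call_dict: {model_call_dict}")

response = completion(
    model="gpt-3.5-turbo",  # placeholder model
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    top_p=0.1,  # the param ishaan-jaff wanted to confirm was forwarded
    logger_fn=logging_fn,
)
```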

krrishdholakia commented 1 year ago

Looks like we're logging across providers but not adding the optional params consistently.

I think the solution here is to have a base class doing the calling, and making sure we're doing consistent logging around it.
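
A rough sketch of that shape, with illustrative names only (not the actual refactor):

```python
# Illustrative sketch: one base class owns the call path, so pre/post-call
# logging (including optional params like top_p) happens the same way for
# every provider.
class BaseLLMProvider:
    def completion(self, model, messages, **optional_params):
        self.log_pre_call(model, messages, optional_params)
        response = self._call_api(model, messages, optional_params)
        self.log_post_call(response)
        return response

    def log_pre_call(self, model, messages, optional_params):
        print(f"sending to {model}: {messages} | params: {optional_params}")

    def log_post_call(self, response):
        print(f"received: {response}")

    def _call_api(self, model, messages, optional_params):
        # Each provider subclass implements its own transport here.
        raise NotImplementedError
```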