Closed krrishdholakia closed 1 year ago
Caveat - this feels like it could be a distraction from the core I/O // integration work.
+1 I felt like I was operating slightly in the blind when using liteLLM. I needed to actually know what inputs were getting sent. When I set top_p = 0.1, I needed to know liteLLM actually sent it to VertexAI.
Why did the basic logging not work for you?
```python
def logging_fn(model_call_dict):
    print(f"model_call_dict: {model_call_dict}")

completion(..., logger_fn=logging_fn)
```
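To make this concrete, here is a minimal, self-contained sketch of the pattern — `fake_completion` below is a hypothetical stand-in for `litellm.completion` (not the real library), just to show how a `logger_fn` lets you verify that optional params like `top_p` are actually included in the provider call:

```python
# Hypothetical stand-in for litellm.completion, to illustrate how a
# logger_fn surfaces the exact inputs sent to the provider.
def fake_completion(model, messages, logger_fn=None, **optional_params):
    # Merge required and optional params into the dict passed to the provider
    model_call_dict = {"model": model, "messages": messages, **optional_params}
    if logger_fn:
        logger_fn(model_call_dict)  # caller sees exactly what was sent
    return {"choices": [{"message": {"content": "ok"}}]}

calls = []
fake_completion(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "hi"}],
    logger_fn=calls.append,  # collect every call dict for inspection
    top_p=0.1,
)
assert calls[0]["top_p"] == 0.1  # confirms top_p made it into the call
```

Appending each `model_call_dict` to a list (instead of printing) also makes it easy to assert on the sent params in tests.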
Looks like we're logging across providers but not adding the optional params consistently.
I think the solution here is to have a base class doing the calling, so we guarantee consistent logging around every provider call.
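A rough sketch of that base-class idea, assuming hypothetical names (`BaseLLMProvider`, `_raw_call` are illustrative, not the actual liteLLM internals): every provider subclass implements only the raw call, and the base class logs the merged params in one place.

```python
import logging

logging.basicConfig(level=logging.INFO)

class BaseLLMProvider:
    """Hypothetical base class: every provider call funnels through `call`,
    so inputs (including optional params) are logged consistently."""

    def __init__(self, logger_fn=None):
        self.logger_fn = logger_fn

    def call(self, model, messages, **optional_params):
        # One place where required + optional params are merged and logged
        model_call_dict = {"model": model, "messages": messages, **optional_params}
        logging.info("model_call_dict: %s", model_call_dict)
        if self.logger_fn:
            self.logger_fn(model_call_dict)
        return self._raw_call(model_call_dict)

    def _raw_call(self, model_call_dict):
        raise NotImplementedError  # each provider implements the actual request

class EchoProvider(BaseLLMProvider):
    """Toy provider that just echoes the call dict back."""
    def _raw_call(self, model_call_dict):
        return {"echo": model_call_dict}

calls = []
provider = EchoProvider(logger_fn=calls.append)
provider.call("gemini-pro", [{"role": "user", "content": "hi"}], top_p=0.1)
```

With this shape, adding a new provider can't silently drop the logging, since subclasses never bypass `call`.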
It'd be nice to have an easy way to look at my logs - just so I know things are working / not working. Doesn't need to be fancy. I think Segment also provides something like this.
Our supabase integration already writes this to a db, so would it be super hard to just view the logs?