BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: add support for mocking tool/function completion response #3234

Status: Open · opened 6 months ago by jonasdebeukelaer

jonasdebeukelaer commented 6 months ago

The Feature

Add the ability to mock a completion response when making use of tools/functions, much like the simpler mock response for plain messages.

e.g.

from litellm import completion
from openai.types.chat import ChatCompletionMessageToolCall

model = "gpt-3.5-turbo"
messages = [{"role": "user", "content": "This is a test request"}]

mocked_resp = completion(
    model=model,
    messages=messages,
    mock_tool_calls_response=ChatCompletionMessageToolCall(arguments=...),
)
It would also be nice to have a simpler way to construct the ChatCompletionMessageToolCall, so we don't have to import openai types directly.
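As a rough illustration of what such a helper could look like, here is a hedged sketch that builds an OpenAI-shaped tool call without importing openai types. All names here (`mock_tool_call`, `MockToolCall`, `MockFunctionCall`) are hypothetical, not existing litellm API:

```python
import json
from dataclasses import dataclass

# Hypothetical sketch of a litellm-side helper that mirrors the
# OpenAI tool-call schema (id / type / function.name / function.arguments)
# so callers don't need to import openai types directly.

@dataclass
class MockFunctionCall:
    name: str
    arguments: str  # JSON-encoded arguments, as in the OpenAI schema

@dataclass
class MockToolCall:
    function: MockFunctionCall
    id: str = "call_mock_1"
    type: str = "function"

def mock_tool_call(name: str, **kwargs) -> MockToolCall:
    """Build a mock tool call from a function name and keyword arguments."""
    return MockToolCall(
        function=MockFunctionCall(name=name, arguments=json.dumps(kwargs))
    )

call = mock_tool_call("get_weather", city="London")
```

A caller could then pass something like `mock_tool_calls_response=[mock_tool_call("get_weather", city="London")]` without touching openai imports.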

Motivation, pitch

The mock feature is great, but it is not usable when making use of tools/functions, since the mocked response object would not correspond to the expected response format.

Open to adding this feature if it seems like a reasonable first contribution on here 👍

Twitter / LinkedIn details

https://www.linkedin.com/in/jonasdebeuk/

krrishdholakia commented 6 months ago

This is a great idea! @jonasdebeukelaer

Would welcome a contribution here.

Curious - how do you use the mock feature today?

ishaan-jaff commented 6 months ago

+1, this would have been super helpful - @jonasdebeukelaer are you planning on working on this? Would love it

jonasdebeukelaer commented 6 months ago

@krrishdholakia I'm currently not using the mock feature, as I'm only making use of tools in my project, but it would be used simply for functional tests really.

I don't immediately have capacity to work on this, so it would be a couple of weeks away if I do take it on, sorry!
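For functional tests in the meantime, one interim workaround (not litellm API, just a sketch using the standard library) is to stub the completion call and return an object shaped like an OpenAI tool-call response. The field names below follow the OpenAI chat-completion schema; the helper name `fake_tool_call_response` is an assumption for illustration:

```python
import json
from types import SimpleNamespace
from unittest import mock

def fake_tool_call_response(name: str, arguments: dict) -> SimpleNamespace:
    """Build an object mimicking an OpenAI chat completion whose
    assistant message carries a single tool call."""
    tool_call = SimpleNamespace(
        id="call_mock_1",
        type="function",
        function=SimpleNamespace(name=name, arguments=json.dumps(arguments)),
    )
    message = SimpleNamespace(role="assistant", content=None, tool_calls=[tool_call])
    return SimpleNamespace(
        choices=[SimpleNamespace(message=message, finish_reason="tool_calls")]
    )

# In a functional test, one could patch litellm.completion to return this:
# with mock.patch("litellm.completion",
#                 return_value=fake_tool_call_response("get_weather", {"city": "Paris"})):
#     ...  # code under test calls litellm.completion as usual

resp = fake_tool_call_response("get_weather", {"city": "Paris"})
```

This only fakes the response shape, so it covers tests of the code that parses tool calls, not litellm's own behaviour.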