jamesbraza opened 1 month ago
Hey @jamesbraza can you share a code example of what you expect?
Sure, I recently wrote something like this:
```python
from typing import Any

from litellm import acompletion

# `Tool` and `MalformedMessageError` are defined elsewhere in our codebase
TOOL_CHOICE_REQUIRED = "required"

tool_choice: Tool | str | None = TOOL_CHOICE_REQUIRED
completion_kwargs: dict[str, Any] = {}
# SEE: https://platform.openai.com/docs/guides/function-calling/configuring-function-calling-behavior-using-the-tool_choice-parameter
expected_finish_reason: set[str] = {"tool_calls"}
if isinstance(tool_choice, Tool):
    completion_kwargs["tool_choice"] = {
        "type": "function",
        "function": {"name": tool_choice.info.name},
    }
    expected_finish_reason = {"stop"}  # TODO: should this be .add("stop") too?
elif tool_choice is not None:
    completion_kwargs["tool_choice"] = tool_choice
    if tool_choice == TOOL_CHOICE_REQUIRED:
        # Even though docs say it should be just 'stop',
        # in practice 'tool_calls' shows up too
        expected_finish_reason.add("stop")

model_response = await acompletion(
    "gpt-4o",
    messages=...,
    tools=...,
    **completion_kwargs,
)
if (num_choices := len(model_response.choices)) != 1:
    raise MalformedMessageError(
        f"Expected one choice in LiteLLM model response, got {num_choices}"
        f" choices, full response was {model_response}."
    )
choice = model_response.choices[0]
if choice.finish_reason not in expected_finish_reason:
    raise MalformedMessageError(
        f"Expected a finish reason in {expected_finish_reason} in LiteLLM"
        f" model response, got finish reason {choice.finish_reason!r}, full"
        f" response was {model_response} and tool choice was {tool_choice}."
    )
# Process choice ...
```
Note how it has to:

1. Handle that `tool_choice` can be a `str`. Ideally it can be a `StrEnum` that comes from LiteLLM
2. Based on `tool_choice`, specify the `expected_finish_reason`
3. Validate the `finish_reason` in the response

I would like to upstream at least items 1 and 2 into LiteLLM, mainly because LiteLLM handles almost all of our LLM logic besides this at the moment.
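To make items 1 and 2 concrete, here is a minimal sketch of what an upstreamed enum might look like. `ToolChoice`, its member names, and the `expected_finish_reasons` property are all invented here, not existing LiteLLM API; the mapping follows the OpenAI docs linked in the snippet, plus the observed extra `"tool_calls"` for `"required"`:

```python
from enum import StrEnum  # Python 3.11+


class ToolChoice(StrEnum):
    """Hypothetical LiteLLM-provided enum of str tool_choice values."""

    NONE = "none"
    AUTO = "auto"
    REQUIRED = "required"

    @property
    def expected_finish_reasons(self) -> set[str]:
        """Finish reasons a response can legitimately carry for this choice."""
        if self is ToolChoice.NONE:
            return {"stop"}
        if self is ToolChoice.AUTO:
            return {"stop", "tool_calls"}
        # REQUIRED: docs say just "stop", but "tool_calls" shows up in practice
        return {"stop", "tool_calls"}
```

With something like this, the snippet above could replace its hand-rolled `TOOL_CHOICE_REQUIRED` constant and branching with `ToolChoice(tool_choice).expected_finish_reasons` for the `str` case.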
The Feature
From https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice, `tool_choice` can be:

- `str` values
- A `dict` specifically naming a tool: `{"type": "function", "function": {"name": "my_tool_name"}}`
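For reference, both forms pass straight through LiteLLM's OpenAI-compatible interface. A minimal sketch, where `my_tool_name` and its schema are placeholders and an `OPENAI_API_KEY` is assumed to be set:

```python
import asyncio

from litellm import acompletion

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "my_tool_name",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]


async def main() -> None:
    # str form: "none", "auto", or "required"
    await acompletion("gpt-4o", messages=messages, tools=tools, tool_choice="auto")
    # dict form: force a call to the named tool
    await acompletion(
        "gpt-4o",
        messages=messages,
        tools=tools,
        tool_choice={"type": "function", "function": {"name": "my_tool_name"}},
    )


asyncio.run(main())
```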
From https://platform.openai.com/docs/guides/function-calling/configuring-function-calling-behavior-using-the-tool_choice-parameter, we see the response's `finish_reason` is a function of `tool_choice`.

It would be nice if LiteLLM provided an `Enum` that could handle the logic:

- Valid `str` values of `tool_choice` (the `Enum` itself)
- A way (e.g. a method on the `Enum`) that converts `tool_choice` to the expected `finish_reason`, for response validation

Alternately, perhaps LiteLLM can add an opt-in flag to `acompletion` that validates the `finish_reason` matches the input `tool_choice` and `tools`.
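To make that alternative concrete, here is a rough user-side sketch of the behavior such a flag could opt into. Nothing here is existing LiteLLM API: `acompletion_validated` and `expected_finish_reasons` are invented names, and the mapping again follows the linked OpenAI docs plus the observed `"tool_calls"` for `"required"`:

```python
from typing import Any

from litellm import acompletion


def expected_finish_reasons(tool_choice: str | dict | None) -> set[str]:
    """Map tool_choice to the finish_reason values a response may carry."""
    if isinstance(tool_choice, dict):  # a specific named tool
        return {"stop"}
    if tool_choice == "none":
        return {"stop"}
    # "auto", "required", or unset; for "required" the docs say just
    # "stop", but "tool_calls" shows up in practice
    return {"stop", "tool_calls"}


async def acompletion_validated(model: str, **kwargs: Any):
    """acompletion plus the check the proposed opt-in flag would perform."""
    response = await acompletion(model, **kwargs)
    allowed = expected_finish_reasons(kwargs.get("tool_choice"))
    finish_reason = response.choices[0].finish_reason
    if finish_reason not in allowed:
        raise ValueError(
            f"Expected finish_reason in {allowed}, got {finish_reason!r}."
        )
    return response
```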
Motivation, pitch
Enabling clients to not have to care about calculating the `finish_reason`, but have a validation confirming it's correct.

Twitter / LinkedIn details
No response