BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Feature]: Support OpenAI's `parallel_tool_calls` #4235

Closed: minhduc0711 closed this issue 1 week ago

minhduc0711 commented 1 week ago

The Feature

OpenAI recently added a new `parallel_tool_calls` parameter.

parallel_tool_calls: boolean, Optional, Defaults to true

Whether to enable parallel function calling during tool use.

It would be nice to add the same parameter to the `completion()` function.

Motivation, pitch

This is useful when I want only a single tool call returned, without spending extra completion tokens on redundant parallel tool calls.

Twitter / LinkedIn details

No response

krrishdholakia commented 1 week ago

doesn't this already work? @minhduc0711

minhduc0711 commented 1 week ago

My bad, I wasn't aware that LiteLLM already supports provider-specific params.

From https://docs.litellm.ai/docs/completion/input#provider-specific-params:

Providers might offer params not supported by OpenAI (e.g. top_k). You can pass those in 2 ways:

  • via completion(): We'll pass the non-openai param, straight to the provider as part of the request body. e.g. completion(model="claude-instant-1", top_k=3)
  • via provider-specific config variable (e.g. litellm.OpenAIConfig()).
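To make the passthrough concrete, here is a minimal sketch of the idea described above. Note that `build_request_body` is a hypothetical helper written for illustration, not LiteLLM's actual internals: the point is just that extra kwargs are merged into the outgoing request body unchanged.

```python
# Illustrative sketch (NOT LiteLLM internals): non-OpenAI-standard kwargs
# are forwarded into the provider request body as-is.

def build_request_body(model: str, messages: list, **extra_params) -> dict:
    """Build a chat-completion request body, forwarding extra params untouched."""
    body = {"model": model, "messages": messages}
    body.update(extra_params)  # e.g. parallel_tool_calls, top_k, ...
    return body

# In practice the same kwargs go straight to litellm.completion(), e.g.
#   completion(model="gpt-4o", messages=..., tools=..., parallel_tool_calls=False)
body = build_request_body(
    "gpt-4o",
    [{"role": "user", "content": "What's the weather in Paris?"}],
    parallel_tool_calls=False,  # ask for at most one tool call per response
)
print(body["parallel_tool_calls"])  # → False
```

Because the param is forwarded verbatim, anything the provider accepts (like OpenAI's `parallel_tool_calls`) works without LiteLLM needing explicit support for it.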