andreibondarev opened this issue 1 month ago
It seems Google Gemini does support parallel function calling; see:
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#supported_models
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#parallel-samples
It does, but there's no way to configure whether functions can be called in parallel or not.
**Is your feature request related to a problem? Please describe.**
We'd like better control of tool calling when using `Langchain::Assistant`. Some of the supported LLMs (Anthropic and OpenAI) let you control whether parallel tool calls ("multiple tool calls") can be made. In some use cases the Assistant must call tools sequentially, so we should be able to toggle that option on the Assistant instance; see the sketch of the underlying provider parameters below.
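For reference, both providers expose this as a plain request parameter (the field names below come from the OpenAI and Anthropic API docs; the `messages` and `tools` values are placeholders):

```ruby
# Shared placeholders for both request payloads below.
messages = [{ role: "user", content: "What's the weather in Boston and in Denver?" }]
tools    = [] # tool definitions, in each provider's own schema

# OpenAI Chat Completions: a top-level boolean on the request body.
openai_params = {
  model: "gpt-4o",
  messages: messages,
  tools: tools,
  parallel_tool_calls: false # at most one tool call per assistant turn
}

# Anthropic Messages API: a flag nested inside tool_choice.
anthropic_params = {
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  messages: messages,
  tools: tools,
  tool_choice: { type: "auto", disable_parallel_tool_use: true }
}
```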
**Describe the solution you'd like**
Similar to `tool_choice`, enable the developer to toggle parallel tool calls on or off (see the sketch after the task list).

Tasks:
- [ ] `Langchain::Assistant::LLM::Adapters::Anthropic` support
- [ ] `Langchain::Assistant::LLM::Adapters::OpenAI` support
- [ ] `Langchain::Assistant::LLM::Adapters::GoogleGemini` support (not currently supported)
- [ ] `Langchain::Assistant::LLM::Adapters::MistralAI` support (not currently supported)
- [ ] `Langchain::Assistant::LLM::Adapters::Ollama` support (not currently supported)
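A minimal sketch of one possible Assistant-level API, assuming a new `parallel_tool_calls:` keyword that each adapter translates into its provider's parameter (the keyword name is an assumption for illustration, not a settled design):

```ruby
require "langchain"

# Assumed `parallel_tool_calls:` option, passed through the adapters the
# same way `tool_choice` already is; `false` forces sequential tool calls.
assistant = Langchain::Assistant.new(
  llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]),
  tools: [Langchain::Tool::Calculator.new],
  tool_choice: "auto",
  parallel_tool_calls: false
)

assistant.add_message_and_run(
  content: "What is 2 + 2? Then multiply the result by 10.",
  auto_tool_execution: true
)
```

For OpenAI this would map onto `parallel_tool_calls: false`, and for Anthropic onto `tool_choice: { type: "auto", disable_parallel_tool_use: true }`; the Gemini, Mistral AI, and Ollama adapters could warn or raise when the option is set, since those APIs expose no equivalent knob.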