Essentially, when you create an AgentFlow in Flowise and set up at least 1 supervisor and 2 workers, where each worker has more than about 4 tools, then roughly 20% of the time during normal prompts GPT-4o starts hallucinating a call to an unknown tool named `multi_tool_use.parallel`, which you can see in the Langfuse trace logs with an error like this:

```
Error: 400 litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {'error': {'message': "Invalid 'messages[3].tool_calls[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'messages[3].tool_calls[0].function.name', 'code': 'invalid_value'}}
Received Model Group=gpt-4o
Available Model Group Fallbacks=None
```

To be crystal clear: this is a bug in the GPT-4o model itself, and ultimately something OpenAI needs to fix. HOWEVER, some developers have built workarounds for it, and I'm wondering if you'd be able/open to building a workaround into Flowise as well.
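Incidentally, the 400 above is the API's tool-name validation working as intended: the dot in `multi_tool_use.parallel` can never match the `^[a-zA-Z0-9_-]+$` pattern quoted in the error, so any message replayed with that hallucinated name is guaranteed to be rejected. A quick sanity check of the pattern (the tool names here are just examples):

```python
import re

# Pattern quoted in the BadRequestError above.
TOOL_NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

print(bool(TOOL_NAME_PATTERN.match("get_weather")))             # a normal tool name passes
print(bool(TOOL_NAME_PATTERN.match("multi_tool_use.parallel"))) # the dot fails the pattern
```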
**Describe the bug**

Hey @HenryHengZJ, after experimenting with multiple AgentFlows in Flowise using GPT-4o, I've encountered a well-known upstream GPT-4o bug called the "multi_tool_use.parallel bug", as explained here:

- https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653
- https://community.openai.com/t/the-multi-tool-use-parallel-bug-and-how-to-fix-it/880771
Here is an example of one such workaround: https://community.openai.com/t/model-tries-to-call-unknown-function-multi-tool-use-parallel/490653/34
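For context, the workarounds in that thread generally boil down to the same idea: when the model emits the hallucinated `multi_tool_use.parallel` call, unpack its nested `tool_uses` arguments back into individual calls to the real tools instead of failing the request. A minimal sketch of that idea (the argument shape follows what is commonly reported in the thread; the function and field names here are illustrative, not Flowise internals):

```python
import json

def unpack_tool_call(tool_call: dict) -> list[tuple[str, dict]]:
    """Return a list of (tool_name, arguments) pairs for one tool call.

    If the call is the hallucinated multi_tool_use.parallel, fan its nested
    tool_uses out into individual calls; otherwise pass the call through.
    Assumes the commonly reported argument shape:
      {"tool_uses": [{"recipient_name": "functions.my_tool", "parameters": {...}}]}
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"] or "{}")
    if name != "multi_tool_use.parallel":
        return [(name, args)]
    calls = []
    for use in args.get("tool_uses", []):
        # The model prefixes real tool names with a hallucinated "functions." namespace.
        real_name = use.get("recipient_name", "").split(".")[-1]
        calls.append((real_name, use.get("parameters", {})))
    return calls
```

A supervisor/worker loop could run something like this on every tool call before dispatching, so the hallucinated name never gets written back into the message history (which is what triggers the 400 on the next request).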
Let me know your thoughts here -- this obviously impacts any Flowise user who wants to use AgentFlows with GPT-4o models.
**To Reproduce**

Steps to reproduce the behavior:

1. Create an AgentFlow in Flowise with at least 1 supervisor and 2 workers, where each worker has more than about 4 tools.
2. Send normal prompts to the AgentFlow; roughly 20% of the time you'll see the `multi_tool_use.parallel` bug get thrown.

**Expected behavior**

Ideally, I'd like Flowise to be able to handle and work around this bug.