FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

sequential agents Complex Agent Setup with Multiple Chat Models: Tool Usage and Language Model Integration Issues #2939

Open malek262 opened 1 month ago

malek262 commented 1 month ago

Issue with Multi-Model Agent Setup in Flowise AI

I'm experiencing issues with a complex agent setup in Flowise AI, attempting to use multiple language models for different purposes. Here's a detailed summary of the problem:

I apologize in advance for any linguistic imperfections, as English is not my native language. I'm using ChatGPT to assist me in articulating this issue in English.

I want to clarify that I'm not a professional developer, but rather an enthusiastic beginner eager to learn and experiment with these technologies.

Thank you for your understanding and patience as I navigate these complex systems.

1. Single Model Setup (Working) - "Tools_agent"

Components:

This setup works well with tools but performs poorly with the Arabic language.

When trying to use Gemini Pro in the working single-model setup (Tools_agent), I receive the following error on every message:

[GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:streamGenerateContent?alt=sse: [400 Bad Request] * GenerateContentRequest.tools[0].function_declarations[0].parameters.properties: should be non-empty for OBJECT type
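For context, this is my own reading of the error rather than anything Flowise-specific: the Gemini API rejects a function declaration whose `parameters` schema is of type `OBJECT` but has an empty `properties` map, which is what a tool with no input fields produces. A minimal sketch of the difference, with a hypothetical tool name:

```typescript
// Hypothetical function declarations, only to illustrate the 400 error above.
// Gemini rejects an OBJECT "parameters" schema whose "properties" is empty.

// Rejected: "properties" is an empty object.
const rejectedDeclaration = {
  name: "get_current_time",                 // hypothetical tool name
  description: "Returns the current time",
  parameters: { type: "OBJECT", properties: {} },
};

// Accepted: at least one property is declared.
const acceptedDeclaration = {
  name: "get_current_time",
  description: "Returns the current time",
  parameters: {
    type: "OBJECT",
    properties: {
      timezone: { type: "STRING", description: "IANA timezone, e.g. Asia/Riyadh" },
    },
  },
};

// Tiny check mirroring the rule the API complains about.
function violatesGeminiRule(decl: { parameters: { type: string; properties: object } }): boolean {
  return (
    decl.parameters.type === "OBJECT" &&
    Object.keys(decl.parameters.properties).length === 0
  );
}

console.log(violatesGeminiRule(rejectedDeclaration)); // true  -> would trigger the 400
console.log(violatesGeminiRule(acceptedDeclaration)); // false
```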

2. Multi-Model Setup (Not Working as Expected) - "tools-9" (sequential agents)

Components:

Issue: When a tool is needed, the system indicates it will transfer to the tool agent but fails to do so. The Condition Agent doesn't seem to be properly routing to the tool-using Agent.

When using only GroqChat (using llama-3.1-70b-versatile model) in this flow, I get this error:

Error buildAgentGraph - 400 {"error":{"message":"Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.","type":"invalid_request_error","code":"tool_use_failed","failed_generation":"CONVERSATION"}}
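For what it's worth, the `"failed_generation":"CONVERSATION"` field suggests the model answered with the plain routing word CONVERSATION instead of emitting a valid tool call, which the Groq endpoint then reports as `tool_use_failed`. A hedged sketch of how to see this outside Flowise (the endpoint and response fields are Groq's OpenAI-compatible ones; the tool definition is a hypothetical example, not my actual flow):

```typescript
// Sketch only: call Groq's OpenAI-compatible endpoint directly to see whether
// the model answers with a tool call or with plain text such as "CONVERSATION".
// Assumes GROQ_API_KEY is set in the environment.

const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
  },
  body: JSON.stringify({
    model: "llama-3.1-70b-versatile",
    messages: [{ role: "user", content: "What time is it in Riyadh?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_current_time",            // hypothetical tool
          description: "Returns the current time for a timezone",
          parameters: {
            type: "object",
            properties: { timezone: { type: "string" } },
            required: ["timezone"],
          },
        },
      },
    ],
  }),
});

const data = await response.json();

if (!response.ok) {
  // On a bad generation Groq returns 400 with code "tool_use_failed"
  // and the raw model output in "failed_generation" (e.g. "CONVERSATION").
  console.error(data.error?.code, data.error?.failed_generation);
} else {
  const message = data.choices[0].message;
  console.log(message.tool_calls ?? message.content);
}
```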

Using the model "llama3-groq-8b-8192-tool-use-preview" works without errors. The agent is correctly invoked and tools are used properly. However, the Arabic language output is very weak, containing many errors and random words in other languages.

3. Simplified Setup Observations

Questions

  1. Is there a known compatibility issue between Gemini Pro and certain types of tool declarations in Flowise?
  2. In the multi-model setup (tools-9), what could be preventing the handoff from the conversation model (ChatGoogleGenerativeAI) to the tool-using model (Agent with Groq Llama 3.1)?

Any guidance or suggestions on how to resolve these issues would be greatly appreciated. I'm happy to provide any additional information or logs that might be helpful in diagnosing the problem.

My goal is to be able to use Google Gemini Pro 1.5, as it is one of the few models that can communicate and interact with users in Arabic without linguistic issues or errors. For this reason, I am trying to solve this problem so that I can use this model for regular conversations and chats, and use another model for tool usage, given that I am unable to use the same model for both tasks.

[EDIT]

After conducting numerous experiments, changing models, testing general behavior, and trying to identify the cause of the issue, I noticed something important. Occasionally (it has happened only twice so far), even after clearing the conversation history and starting fresh without any previous chats, the model responds based on previous requests: it uses tools to retrieve information I had asked for in earlier conversations, and not just one of them. The strange part is that those conversations took place more than 9 hours earlier, and I had cleared the chat history.

There might be an issue with the agent memory as well; I honestly don't know the exact reason. I'm just speculating.

[Screenshots attached: "Screenshot from 2024-08-05 21-29-03" and "Screenshot from 2024-08-05 21-26-55"]

HenryHengZJ commented 1 month ago

1.) Is there a known compatibility issue between Gemini Pro and certain types of tool declarations in Flowise? I personally find that Gemini sometimes does not work as well with tools as models like GPT-4 or Claude. The biggest factor I can think of is how we construct the prompt for the tool description. In Flowise we don't do any special handling logic; it's treated the same as for every other model.

2.) In the multi-model setup (tools-9), what could be preventing the handoff from the conversation model (ChatGoogleGenerativeAI) to the tool-using model (Agent with Groq Llama 3.1)? I'd suggest trying JSON Structured Output of the LLM + Condition as an alternative solution: [screenshot attached]
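To sketch what that alternative could look like (the field names and routing labels below are hypothetical, not Flowise's built-in names): the LLM is prompted to return a small JSON object, and the Condition then routes on one of its fields instead of relying on the model making a native tool call.

```typescript
// Minimal sketch of the "JSON Structured Output + Condition" idea.
// The LLM classifies the user's message into a route; a trivial condition
// then decides which branch (conversation vs. tools) runs next.

interface RouteDecision {
  route: "CONVERSATION" | "TOOLS";   // hypothetical labels
  reason: string;
}

// What the structured-output prompt asks the model to return, e.g.:
// { "route": "TOOLS", "reason": "The user asked for live data." }
function parseDecision(raw: string): RouteDecision {
  const parsed = JSON.parse(raw);
  if (parsed.route !== "CONVERSATION" && parsed.route !== "TOOLS") {
    throw new Error(`Unexpected route: ${parsed.route}`);
  }
  return parsed as RouteDecision;
}

// The condition itself stays simple once the output is structured JSON.
function nextNode(decision: RouteDecision): "conversationAgent" | "toolAgent" {
  return decision.route === "TOOLS" ? "toolAgent" : "conversationAgent";
}

// Example usage with a hand-written model output:
const decision = parseDecision('{"route":"TOOLS","reason":"needs a lookup"}');
console.log(nextNode(decision)); // "toolAgent"
```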