FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

[BUG] Agentflow stopped working #3615

Open sebaxakerhtc opened 2 days ago

sebaxakerhtc commented 2 days ago

Describe the bug
To understand the bug, I created a simple Agentflow with ChatOllama, a supervisor, and 2 workers. With the model llama3.1 everything works perfectly. I then changed the model to qwen2.5-coder and it still worked fine. Then I changed the model to qwen2.5-coder:14b-base-q8_0 and everything stopped working. Very strange, because the model llama3.1:8b-instruct-q2_K works too. When I write a message and click send, I see "message stopped". But I can only guess at the cause, because there are no errors or logs about it in Ollama or Flowise.

To Reproduce
Steps to reproduce the behavior:

  1. Go to 'Agentflows'
  2. Create a new Agentflow or use the template with a supervisor and workers
  3. Use the model qwen2.5-coder:14b-base-q8_0
  4. See the green popup "message stopped"

Expected behavior
This should work just like it does with any other model.


Cirr0e commented 2 days ago

Hey, this looks like a known compatibility issue with certain model variants. The green "message stopped" popup with no accompanying logs typically appears when there is a model compatibility problem.

From a similar issue (#2557), this seems to be related to how different model variants handle agent-based workflows. Could you try using the base qwen2.5-coder model without the quantization suffix (14b-base-q8_0)? That version was shown to work in your testing.

If you absolutely need to use the quantized version, you might want to test with llama3.2 since it has better support for agent-based workflows, similar to what was resolved in #3361.

Let me know if either of those options helps!

sebaxakerhtc commented 2 days ago

@Cirr0e Hello. Yes, other models work. But where does this problem come from? Maybe we can turn on additional logging?

sebaxakerhtc commented 2 days ago

@Cirr0e Interesting... On another of my instances I reproduced this bug with llama3.2 and the animal template, as in https://github.com/FlowiseAI/Flowise/issues/3361 Animal Agents.json

Another interesting thing: if I use a template from the marketplace, everything works fine with the old ChatOllama node, but if I upgrade it by clicking the "Sync" button... it stops working.

Cirr0e commented 1 day ago

Hmm, that is interesting. Let's try to get more visibility into what's happening:

  1. First, let's enable debug logging to see what's happening under the hood. Add this to your Docker environment:

    DEBUG=true

    You can do this by either:

    • Adding it to your docker-compose.yml file (a minimal compose sketch follows this list)
    • Or setting it as an environment variable: docker run -e DEBUG=true ...
  2. The silent failure suggests there might be an incompatibility between the quantized model's output format and the agent's message parsing. This typically happens when the model output doesn't match the format the agent executor expects (see the sketch after the code block below).

  3. About the sync button issue: When you sync, it's likely updating the ChatOllama node to a newer version that might have different expectations for model responses. Try this:

    • Before clicking sync, export your working flow
    • After sync fails, compare the ChatOllama node configurations
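
As a reference point for option 1, here is a minimal docker-compose sketch. It assumes the stock flowiseai/flowise image; the service name, port mapping, and tag are illustrative, so adapt them to your own deployment:

version: '3.8'
services:
  flowise:
    image: flowiseai/flowise    # stock image; pin a specific tag in production
    ports:
      - '3000:3000'             # Flowise UI's default port
    environment:
      - DEBUG=true              # verbose component logging
      - LOG_LEVEL=debug         # raise the Flowise logger level as well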

The code shows that agent execution is controlled by this sequence:

const executor = AgentExecutor.fromAgentAndTools({
    agent,
    tools,
    // verbose output is only emitted when DEBUG=true is set
    verbose: process.env.DEBUG === 'true',
    // maxIterations caps the agent loop; undefined falls back to the default
    maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
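
To make the format mismatch in point 2 concrete, here is a hypothetical sketch (not Flowise's actual parser): a supervisor node typically expects the model to emit a structured routing decision, and a strict parse of free-text output throws, which can surface as a silent "message stopped" if the error is swallowed upstream.

// Hypothetical illustration only, not Flowise's real code: the supervisor
// expects something like {"next": "Worker1"}. Base and heavily quantized
// variants often answer in free text instead, and the parse below throws.
interface RouteDecision {
    next: string
}

function parseSupervisorOutput(modelText: string): RouteDecision {
    const parsed = JSON.parse(modelText) // throws on free-text output
    if (typeof parsed.next !== 'string') {
        throw new Error('model output is missing the "next" field')
    }
    return parsed as RouteDecision
}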

Can you try enabling debug logging and share:

  1. What errors/logs you see in the console?
  2. The exact ChatOllama node configuration before and after sync? (the fragment below shows where to find it in an exported flow)
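
For orientation, an exported flow is a JSON file whose nodes carry their version and inputs. The fragment below is reconstructed from memory and the field names may differ slightly in your export, but this is roughly where to diff the two versions:

{
    "nodes": [
        {
            "id": "chatOllama_0",
            "data": {
                "label": "ChatOllama",
                "version": 4,
                "inputs": {
                    "modelName": "qwen2.5-coder:14b-base-q8_0",
                    "temperature": 0.9
                }
            }
        }
    ]
}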

This will help us pinpoint exactly where the communication is breaking down between the model and the agent framework.

Let me know what the debug logs show and we can dig deeper into the specific cause.

sebaxakerhtc commented 1 day ago

I can't reproduce any bug on my second instance after adding DEBUG=true. Placebo effect? :)

sebaxakerhtc commented 1 day ago

looks like this is a known compatibility issue with certain model variants

So, if this is a known bug, is it better to just close the issue?

sebaxakerhtc commented 16 hours ago

DEBUG=true

Added this to my .env file (uncommented)

DEBUG=true
LOG_LEVEL=debug

Still no logs... Any ideas? I tested Q4, Q4_K_M, and Q8 models; nothing works. Model: IlyaGusev/saiga_nemo_12b_gguf