Open sebaxakerhtc opened 2 days ago
Hey, looks like this is a known compatibility issue with certain model variants. The green "message stopped" popup without logs typically happens when there are model compatibility problems.
From a similar issue (#2557), this seems to be related to how different model variants handle agent-based workflows. Could you try using the base qwen2.5-coder
model without the :14b-base-q8_0 quantization suffix? That version was shown to work in your testing.
If you absolutely need to use the quantized version, you might want to test with llama3.2
since it has better support for agent-based workflows, similar to what was resolved in #3361.
Let me know if either of those options helps!
@Cirr0e Hello. Yes - other models work. But where does this problem come from? Maybe we can turn on additional logs?
@Cirr0e Interesting...
On my other instance I reproduced this bug with llama3.2
and an animal template like in https://github.com/FlowiseAI/Flowise/issues/3361
Animal Agents.json
Another interesting thing: if I use a template from the marketplace, everything works fine with the old ChatOllama, but if I upgrade it by clicking the "Sync" button... it stops working.
Hmm, that is interesting. Let's try to get more visibility into what's happening:
First, let's enable debug logging to see what's happening under the hood. Add this to your Docker environment:
DEBUG=true
You can do this by passing the variable when starting the container:
docker run -e DEBUG=true ...
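If you run Flowise via docker-compose instead, the same variable goes in the service's environment section (a sketch; the service name and image are assumptions based on the standard Flowise setup):

```yaml
# docker-compose.yml (sketch; "flowise" service name is an assumption)
services:
  flowise:
    image: flowiseai/flowise
    environment:
      - DEBUG=true
```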
The silent failure suggests there might be an incompatibility between the quantized model's output format and the agent's message parsing. This typically happens when the model output doesn't match the format that the agent executor expects.
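To illustrate the kind of mismatch I mean (a hypothetical sketch, not Flowise's actual parser): agent executors typically expect the model to emit a specific action format, and a model that drifts from it can fail without producing a useful error:

```typescript
// Hypothetical sketch of an agent output parser. If the model's reply doesn't
// match the expected "Action: ... / Action Input: ..." shape, parsing yields
// null, and the run can end with nothing useful to log.
function parseAgentAction(output: string): { tool: string; input: string } | null {
  const match = output.match(/Action:\s*(.+)\nAction Input:\s*(.+)/);
  if (!match) return null; // silent failure mode: no error, no log entry
  return { tool: match[1].trim(), input: match[2].trim() };
}

// A well-formed reply parses into a tool call...
console.log(parseAgentAction("Action: calculator\nAction Input: 2+2"));
// ...while free-form text from an incompatible model does not.
console.log(parseAgentAction("The answer is 4."));
```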
About the sync button issue: when you sync, it likely updates the ChatOllama node to a newer version that may have different expectations for model responses.
The code shows that agent execution is controlled by this sequence:
```javascript
const executor = AgentExecutor.fromAgentAndTools({
    agent,
    tools,
    verbose: process.env.DEBUG === 'true' ? true : false,
    maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
```
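In other words, the verbose flag only flips on when DEBUG is exactly the string 'true'. A minimal sketch of the same gating logic (the `executorOptions` helper is hypothetical, for illustration only):

```typescript
// Sketch of how the options above are derived: verbose becomes true only when
// DEBUG is exactly the string "true" (case-sensitive), and maxIterations stays
// undefined when no value is supplied.
function executorOptions(env: Record<string, string | undefined>, maxIterations?: string) {
  return {
    verbose: env.DEBUG === 'true',
    maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
  };
}

console.log(executorOptions({ DEBUG: 'true' }, '3'));  // { verbose: true, maxIterations: 3 }
console.log(executorOptions({ DEBUG: 'True' }));       // { verbose: false, maxIterations: undefined }
```

Note the case sensitivity: `DEBUG=True` or `DEBUG=1` would leave verbose off.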
Can you try enabling debug logging and share what the logs show?
This will help us pinpoint exactly where the communication is breaking down between the model and the agent framework.
Let me know what the debug logs show and we can dig deeper into the specific cause.
I can't reproduce any bug on my second instance after adding DEBUG=true. Placebo effect? :)
"looks like this is a known compatibility issue with certain model variants"
So, if this is a known BUG, wouldn't it be better to close the issue?
Added this to my .env file (uncommented):
DEBUG=true
LOG_LEVEL=debug
Still no logs...
Any ideas?
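For reference, a generic way to double-check which variables are actually active in an .env file (hypothetical example file, not the real Flowise one; only uncommented assignments count):

```shell
# Hypothetical .env contents for illustration:
cat > /tmp/example.env <<'EOF'
# DEBUG=true
DEBUG=true
LOG_LEVEL=debug
EOF

# List the DEBUG/LOG_LEVEL lines that are actually set (not commented out):
grep -E '^(DEBUG|LOG_LEVEL)=' /tmp/example.env
```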
Testing Q4, Q4_K_M, Q8 models - nothing works.
model: IlyaGusev/saiga_nemo_12b_gguf
Describe the bug
To understand the BUG I've created a simple Agentflow with ChatOllama, a supervisor and 2 workers. Using the model llama3.1
- all works perfect! Then I changed the model to qwen2.5-coder
- again it worked fine. Then I changed the model to qwen2.5-coder:14b-base-q8_0
and everything stopped working. Very strange, because the model llama3.1:8b-instruct-q2_K
works too. When I write a message and click send - I see "message stopped". But I can only guess about it because there are no errors or any logs about it in Ollama or Flowise.
To Reproduce Steps to reproduce the behavior:
qwen2.5-coder:14b-base-q8_0
Expected behavior This should work like with any other model
Screenshots If applicable, add screenshots to help explain your problem.
Flow If applicable, add exported flow in order to help replicating the problem.
Setup
Additional context