olafgeibig opened 3 months ago
I see a lot of this too. From the verbose logs it looks like the Action Input is incorrectly structured, using `co-worker` instead of `coworker`.
For example, this fails:
```
Action: Ask question to co-worker
Action Input: {
    "co-worker": "Analyzer",
    "question": "Can you provide me a summary?",
    "context": ""
}
```
But this works:
```
Action: Ask question to co-worker
Action Input: {
    "coworker": "Analyzer",
    "question": "Can you provide me a summary?",
    "context": ""
}
```
Interesting. `co-worker` was probably a bad choice, because there are two plausible spellings for it.
In which .py file should this be fixed?
I've had the same problem: when the LLM (Mistral 0.3 in my case) returns the action input key as `co-worker` instead of the required `coworker`, it causes the same misleading error about an absent agent. I fixed it in this PR. I've already tested it in my fork, and it works fine.
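For anyone hitting this before the fix lands, the idea can be sketched as a small normalization step. The helper name and alias list below are my own illustration, not crewAI's actual code:

```python
# Illustrative sketch only: normalize misspelled delegation keys before the
# coworker lookup. The helper name and alias list are assumptions, not crewAI code.
def normalize_action_input(action_input: dict) -> dict:
    """Return a copy where common misspellings are mapped to 'coworker'."""
    normalized = dict(action_input)
    for alias in ("co-worker", "co_worker", "Co-Worker"):
        # Only rename if the canonical key isn't already present.
        if alias in normalized and "coworker" not in normalized:
            normalized["coworker"] = normalized.pop(alias)
    return normalized

print(normalize_action_input({"co-worker": "Analyzer", "question": "Can you provide me a summary?"}))
```

Doing this once, just before the agent lookup, keeps the rest of the tool code untouched.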
I'm using your fork (@madmag77) with Ollama and Llama 3 and it's still failing with:

```
Thought: I need to break down this complex task into manageable blah blah..
Action Input: {..
"coworker": "architect   <-- Thought output truncates here?
Error executing tool. coworker mentioned not found, it must to be one of the following options:
```
@imars
> I'm using your fork (@madmag77) with Ollama and llama 3 and it's still failing with:
> "Thought: I need to break down this complex task into manageable blah blah.. Action Input: {.. "coworker": "architect <-- Thought output truncates here? Error executing tool. coworker mentioned not found, it must to be one of the following options:
> - architect
Interesting. I'm using it with LM Studio + Phi3 medium or Mistral Instruct 0.3 and it works fine, especially with Mistral (Phi sometimes fails on other steps in my flow, but delegation works perfectly). When I tried LM Studio + Llama 3 7B it didn't work at all: Llama just tries to do everything in one response, i.e. delegate the work, add observations, add an imaginary answer from the delegate, and finish the work :)
I'll try installing Ollama and running Llama 3 again later on, though; curious how it will work...
> I'm using your fork (@madmag77) with Ollama and llama 3 and it's still failing with:
> "Thought: I need to break down this complex task into manageable blah blah.. Action Input: {.. "coworker": "architect <-- Thought output truncates here? Error executing tool. coworker mentioned not found, it must to be one of the following options:
> - architect
@imars I finally tested Llama 3 with both Ollama and LM Studio and made another fix in my fork; after that it started working. You may want to try it as well to check.
The agent wants to delegate a task to a co-worker. Although the name is correct, the co-worker is not found. Environment: crewAI 0.22.5, macOS, Python 3.11.7, Ollama running adrienbrault/nous-hermes2pro:Q5_K_M
Output:
The Agent definition: