GianNuzzarello opened 1 month ago
Hey @GianNuzzarello!
This is an error with a change we made to tool calling. Super sorry about this bug.
We just realized the error and will be cutting a new version (0.76.3) later tonight.
I'll close out this issue and let you know as soon as the new version is cut.
Just as a heads-up, I tried running your crew on the new version and it started producing the correct answer.
@bhancockio looks like I'm having the same issue in version 0.80.0 when using Ollama llm. It works when I use OpenAI.
Has this issue been addressed? Should I open a new issue?
Me too.
Description
When using process="hierarchical" with a manager LLM set, the crew gives completely wrong answers and does not appear to delegate the query to the other agents.
Steps to Reproduce
Expected behavior
The system should correctly identify that the question is about Futel and delegate the task to the Futel Official Infopoint agent. The Futel Official Infopoint agent will then provide the appropriate answer to the user.
Screenshots/Code snippets
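The original snippet was not attached. A minimal sketch of the kind of setup described above, purely illustrative: the agent roles, goals, task text, and the Ollama model name are assumptions, not the reporter's actual code.

```python
# Hypothetical reconstruction of a hierarchical crew with a manager LLM.
# All names below (roles, model, task wording) are assumptions.
from crewai import LLM, Agent, Crew, Process, Task

# Local Ollama model used as the manager (assumed model name and URL).
manager_llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

infopoint = Agent(
    role="Futel Official Infopoint",
    goal="Answer questions about Futel",
    backstory="Knows everything about Futel.",
)

generalist = Agent(
    role="General Assistant",
    goal="Handle questions unrelated to Futel",
    backstory="A generalist helper.",
)

task = Task(
    description="Answer the user's question: {question}",
    expected_output="A correct, relevant answer to the question.",
)

crew = Crew(
    agents=[infopoint, generalist],
    tasks=[task],
    process=Process.hierarchical,  # manager should delegate to the agents above
    manager_llm=manager_llm,
)

# Expected: the manager routes Futel questions to the infopoint agent.
# result = crew.kickoff(inputs={"question": "What is Futel?"})
```

With this kind of setup, the reported behavior is that the manager answers directly (and incorrectly) instead of delegating, whereas swapping `manager_llm` for an OpenAI model reportedly works.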
Operating System
macOS Sonoma
Python Version
3.12
crewAI Version
0.76.2
crewAI Tools Version
0.13.2
Virtual Environment
Venv
Evidence
This is the answer:
Possible Solution
None
Additional context
Please improve the documentation and examples for the manager LLM, especially for tasks that require collaboration between agents.