crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Error "Co-worker mentioned not found..." when using with local llama3 #620

Closed italovieira closed 5 days ago

italovieira commented 3 months ago

With the example below I get the following error:

Error executing tool. Co-worker mentioned not found, it must to be one of the following options:
- pilot
from crewai import Agent, Task, Crew

from langchain_community.llms import Ollama

import os

os.environ["OPENAI_API_KEY"] = "NA"

llm = Ollama(model="llama3")

# Agents
luke = Agent(
    role="pilot",
    goal="Destroy the Death Star",
    backstory="The young destined-to-be-Jedi pilot, summoned to attack the Death Star.",
    llm=llm,
)

leia = Agent(
    role="strategist",
    goal="Coordinate the attack on the Death Star",
    backstory="The Rebel leader, essential for strategy and communication.",
    llm=llm,
)

# Tasks
coordinate_attack = Task(
    description="""Leia must coordinate the mission,
    maintaining communication and providing strategic support.
    Leia must ensure that everything is in order, providing a safe path for Luke""",
    expected_output="""Successfully coordinated attack, Death Star destroyed. All units informed and aligned.""",
    agent=leia,
    allow_delegation=True,
)

destroy_death_star = Task(
    description="""Luke must pilot his X-Wing and shoot at the Death Star's weak point to destroy it.""",
    expected_output="""Death Star destroyed, mission successful.""",
    agent=luke,
)

# Crews
rebel_alliance = Crew(
    agents=[leia, luke],
    tasks=[coordinate_attack, destroy_death_star],
    verbose=2,
)

rebel_alliance.kickoff()
noggynoggy commented 3 months ago

There are multiple issues with your code.

  1. manager_llm only works with Process.hierarchical
  2. it is generally recommended to use English for prompting and then just instruct the model to "translate" the response
  3. when using Ollama, use the Ollama model class, not OpenAI: `from langchain_community.llms import Ollama`

If I haven't missed anything, the co-worker not being found is caused either by points 1-3 or by llama3 being "too dumb"
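To illustrate point 1, here is a minimal sketch (not a verified fix for this issue) of a crew that actually uses a manager LLM; it assumes a local Ollama server and a crewAI version where `manager_llm` is only honored together with `Process.hierarchical`:

```python
from crewai import Agent, Crew, Process, Task
from langchain_community.llms import Ollama

# Assumes Ollama is running locally with the llama3 model pulled
llm = Ollama(model="llama3")

pilot = Agent(role="pilot", goal="Destroy the Death Star",
              backstory="X-Wing pilot.", llm=llm)
strategist = Agent(role="strategist", goal="Coordinate the attack",
                   backstory="Rebel leader.", llm=llm)

crew = Crew(
    agents=[pilot, strategist],
    tasks=[Task(description="Coordinate and execute the attack.",
                expected_output="Death Star destroyed.",
                agent=strategist)],
    process=Process.hierarchical,  # manager_llm is ignored without this
    manager_llm=llm,
)
```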

noggynoggy commented 3 months ago

Actually this might be related to #602

italovieira commented 3 months ago

There are multiple issues with your code.

1. manager_llm only works with Process.hierarchical

2. it is generally recommended to use English for prompting and then just instruct the model to "translate" the response

3. when using Ollama, use the Ollama model class, not OpenAI: `from langchain_community.llms import Ollama`

I've updated the code in the description with what you indicated.

For point 3, though, I had used ChatOpenAI just like the example in the docs: https://docs.crewai.com/how-to/LLM-Connections/#ollama-integration-ex-for-using-llama-2-locally

Either way, the error still occurs.

italovieira commented 3 months ago

This might be a problem in how Ollama or LangChain outputs the steps for the agents.

But I did a bisect and found out crewAI was able to cope with that before https://github.com/joaomdmoura/crewAI/commit/0b781065d277564077fdaf630d46995c210cc9d1. It was only after this commit that this error started.

Yazington commented 3 months ago

Hey @italovieira, have you been able to fix it? Getting the same issue :(

italovieira commented 3 months ago

Hey @italovieira, have you been able to fix it? Getting the same issue :(

I've opened a PR to fix this issue, but it hasn't been merged yet.

madmag77 commented 3 months ago

Without logs it's hard to figure out the reason. I had the same problem when the LLM (Mistral 0.3 in my case) returned the action input key as `co-worker` instead of the required `coworker`, and this caused the same misleading error about an absent agent. I fixed it in this PR and already tested it in my fork - it works fine. Maybe you can try this fix and see the result.
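The idea behind that fix can be sketched in isolation: tolerate the key spellings some local LLMs emit instead of the `coworker` key the delegation tool expects. The function name and approach here are illustrative, not crewAI's actual internals:

```python
def normalize_coworker_key(action_input: dict) -> dict:
    """Map variants like 'co-worker' or 'co_worker' onto the
    canonical 'coworker' key, leaving other keys untouched."""
    normalized = {}
    for key, value in action_input.items():
        # strip separators and case so 'Co-Worker', 'co_worker', etc. all match
        canonical = key.strip().lower().replace("-", "").replace("_", "")
        if canonical == "coworker":
            normalized["coworker"] = value
        else:
            normalized[key] = value
    return normalized

print(normalize_coworker_key({"co-worker": "pilot", "task": "attack"}))
# → {'coworker': 'pilot', 'task': 'attack'}
```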

psyq0 commented 2 months ago

Sorry to ask, maybe I missed something. Did you solve this issue with a workaround? If yes, can you please let me know how? Thanks in advance!

madmag77 commented 2 months ago

@psyq0 If you are asking about the co-worker problem then yes, I made a fork and raised a PR to this repo, but it's not merged yet. I managed to make an example with tools and delegation work on open-source LLMs - this is the example. It works with LM Studio + Mistral 0.3, Phi3 medium and Llama 3 7B, and also with Ollama + Llama 3 7B.

Hope it helps.

Sisif-eu commented 2 months ago

Sorry to ask, maybe I missed something. Did you solve this issue with a workaround? If yes, can you please let me know how? Thanks in advance!

For me the fix from @madmag77 didn't work. It seems that in my case the reason it is not able to match the co-worker role is that I got an extra `"` in the role name.

Comparing available_agents to agent in agent_tools.py:

"senior research analyst
['senior research analyst', 'tech content strategist', 'french translator']

So as a dirty fix I replaced this line with the following one:

if available_agent.role.casefold().strip() == agent.casefold().strip().replace('"', ''):

And now it works with llama3:8b without issue.

A bit dirty, but I hope it helps while waiting for a proper fix.
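That workaround can be checked on its own: a case-insensitive comparison that also strips whitespace and any stray quote characters the LLM wraps around the role name. The helper name is illustrative; it is not part of crewAI:

```python
def roles_match(available_role: str, requested: str) -> bool:
    """Compare an agent's role to the name an LLM asked to delegate to,
    ignoring case, surrounding whitespace, and stray double quotes."""
    return (available_role.casefold().strip()
            == requested.casefold().strip().replace('"', ''))

# The failing case from above: a leading quote in the requested role
print(roles_match("senior research analyst", '"senior research analyst'))
# → True
```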

For reference, this is how I initialize the agent:

import crewai
from langchain_community.chat_models import ChatOllama

def initialize_agent(agent):
    initialized_agent = crewai.Agent(
        role=agent.role,
        goal=agent.goal,
        backstory=agent.backstory,
        verbose=agent.verbose,
        allow_delegation=agent.allow_delegation,
        llm=ChatOllama(
            model="llama3:8b",
            base_url="http://localhost:11434",
        ),
    )
    return initialized_agent
github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented 5 days ago

This issue was closed because it has been stalled for 5 days with no activity.