crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

[BUG] Setting up a manager agent using local Ollama LLMs. #1466

Open rawzone opened 1 week ago

rawzone commented 1 week ago

Description

Having problems getting a manager agent to work.

Used the crewai create crew command to set up the project. It uses different syntax than all the documentation, which is a bit strange in itself?

Just using the "supplied" agents and tasks while setting up the manager agent, so not many changes to the code besides switching to a local Ollama host and LLM models (see the code in crew.py).

Steps to Reproduce

  1. Set up a new crew with crewai create crew <crew_name>.
  2. Change agents.yaml.
  3. Change crew.py to include the manager (see code).
  4. Run the crew with the crewai run command.
  5. Execution fails with the supplied error.

Expected behavior

Run the crew and have the manager agent manage the other agents.

Screenshots/Code snippets

main.py:

#!/usr/bin/env python
import sys
from mornings.crew import MorningsCrew

def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI LLMs'
    }
    MorningsCrew().crew().kickoff(inputs=inputs)

def train():
    """
    Train the crew for a given number of iterations.
    """
    inputs = {
        "topic": "AI LLMs"
    }
    try:
        MorningsCrew().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)

    except Exception as e:
        raise Exception(f"An error occurred while training the crew: {e}")

def replay():
    """
    Replay the crew execution from a specific task.
    """
    try:
        MorningsCrew().crew().replay(task_id=sys.argv[1])

    except Exception as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")

def test():
    """
    Test the crew execution and return the results.
    """
    inputs = {
        "topic": "AI LLMs"
    }
    try:
        MorningsCrew().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)

    except Exception as e:
        raise Exception(f"An error occurred while testing the crew: {e}")

crew.py:

from crewai import Agent, Crew, Process, Task, LLM
from crewai.project import CrewBase, agent, crew, task

# Setup connection to local Ollama instance
llm_host_model = LLM(
    model="ollama/llama3.2",
    base_url="http://192.168.1.18:11434"
    )

@CrewBase
class MorningsCrew():
    """Mornings crew"""

    @agent
    def project_manager(self) -> Agent:
        return Agent(
            config=self.agents_config['project_manager'],
            llm=llm_host_model,
            allow_delegation=True,
            verbose=True
        )

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=llm_host_model,
            # tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            llm=llm_host_model,
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the Mornings crew"""
        return Crew(
            manager_agent=self.project_manager,
            agents=self.agents,
            tasks=self.tasks,
            # process=Process.sequential,
            process=Process.hierarchical,
            verbose=True,
        )

agents.yaml:

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.

project_manager:
  role: >
    Project Manager
  goal: >
    Efficiently manage the crew and ensure high-quality task completion
  backstory: >
    You're an experienced project manager, skilled in overseeing complex
    projects and guiding teams to success.
    Your role is to coordinate the efforts of the crew members, ensuring that
    each task is completed on time and to the highest standard.

tasks.yaml:

research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is 2024.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst

Operating System

Ubuntu 22.04

Python Version

3.10

crewAI Version

0.70.1

crewAI Tools Version

0.12.1

Virtual Environment

Venv

Evidence

Output from running the crew with crewai run:

Running the Crew
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/username/development/ai/crewai/mornings/src/mornings/main.py", line 17, in run
    MorningsCrew().crew().kickoff(inputs=inputs)
  File "/home/username/development/ai/crewai/.venv/lib/python3.10/site-packages/crewai/project/annotations.py", line 124, in wrapper
    return func(self, *args, **kwargs)
  File "/home/username/development/ai/crewai/mornings/src/mornings/crew.py", line 62, in crew
    return Crew(
  File "/home/username/development/ai/crewai/.venv/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "/home/username/development/ai/crewai/.venv/lib/python3.10/site-packages/crewai/agents/agent_builder/base_agent.py", line 135, in process_model_config
    return process_config(values, cls)
  File "/home/username/development/ai/crewai/.venv/lib/python3.10/site-packages/crewai/utilities/config.py", line 19, in process_config
    config = values.get("config", {})
AttributeError: 'function' object has no attribute 'get'
An error occurred while running the crew: Command '['poetry', 'run', 'run_crew']' returned non-zero exit status 1.

I have also tried to add the manager configuration directly in the crew.py file, like this:

    @agent
    def project_manager(self) -> Agent:
        return Agent(
            # config=self.agents_config['project_manager'],
            role="Project Manager",
            goal="Efficiently manage the crew and ensure high-quality task completion",
            backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
            llm=llm_host_model,
            allow_delegation=True,
            verbose=True
        )

That does not work either.

Possible Solution

None

Additional context

If I run the crew without the manager_agent attribute and just use manager_llm, the crew runs, but without any manager agent configuration, I guess?
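For reference, the manager_llm variant that runs looks roughly like this (a sketch only, not the exact code; llm_host_model is the same LLM object defined at the top of my crew.py):

```python
    @crew
    def crew(self) -> Crew:
        """Hierarchical crew without a dedicated manager agent (sketch)."""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.hierarchical,
            manager_llm=llm_host_model,  # CrewAI builds a default manager from this LLM
            verbose=True,
        )
```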

This results in a lot of "errors" in the communication between the agents, like:

## Tool Output:
Error: the Action Input is not a valid key, value dictionary.
 Error parsing LLM output, agent will retry: I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.

If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:

Thought: I now can give a great answer
Final Answer: my best complete final answer to the task.

And in the end the "final answer" is almost always useless, e.g.:

# Agent: Crew Manager

## Final Answer:
To expand on each topic, consider breaking down complex ideas into smaller, more manageable sections. Use clear and concise language, and include relevant examples or illustrations to support your points. Make sure to provide evidence-based information and cite relevant sources when necessary.
flingjie commented 6 days ago

You cannot directly pass self.project_manager as a parameter. In your case, you can try changing the code

    @crew
    def crew(self) -> Crew:
        """Creates the Mornings crew"""
        return Crew(
            manager_agent=self.project_manager,
            agents=self.agents,
            tasks=self.tasks,
            # process=Process.sequential,
            process=Process.hierarchical,
            verbose=True,
        )

to the following

    @crew
    def crew(self) -> Crew:
        """Creates the Mornings crew"""
        return Crew(
            manager_agent=self.agents[-1],
            agents=self.agents[:-1],
            tasks=self.tasks,
            process=Process.hierarchical,
            verbose=True,
        )