crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
20.9k stars 2.9k forks

Default LLM problem , is it the issue of v0.5.0 ? #222

Closed Adamchanadam closed 9 months ago

Adamchanadam commented 9 months ago

Why is my endpoint still connecting to the OpenAI API as the default LLM alongside the Mistral AI API, even though I've already assigned llm=mistral_client to every Agent? May I know what's wrong in my code below? Many thanks. šŸ™šŸ™šŸ™ (It burned through a lot of GPT-4 tokens $$$ and exceeded my quota limit....) šŸ„²

I'm using crewAI v0.5.0 (just upgraded today; I could use my Mistral AI API correctly in the previous version with a similar code structure).

import os
import requests
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.tools import tool
from crewai.tasks.task_output import TaskOutput
from langchain_mistralai.chat_models import ChatMistralAI

mistral_api_key = os.environ.get("MISTRAL_API_KEY")
model="mistral-small"
mistral_client = ChatMistralAI(api_key=mistral_api_key, model=model, temperature=0.5, top_p=0.1, max_retries=5, max_tokens=None)
print("LLM model for this project:", model)

search_tool = DuckDuckGoSearchRun()

# Define the topic of interest
topic = 'Apple Vision Pro.'

# Define the manager agent
manager = Agent(
    role='Project Manager',
    goal='Coordinate the project to ensure a seamless integration of research findings into compelling narratives',
    verbose=True,
    backstory="""With a strategic mindset and a knack for leadership, you excel at guiding teams towards their goals, ensuring projects not only meet but exceed expectations.""",
    allow_delegation=True,
    max_iter=10,
    max_rpm=20,
    llm=mistral_client
)

# Define the senior researcher agent
researcher = Agent(
    role='Senior Researcher',
    goal=f'Uncover groundbreaking technologies around {topic}.',
    verbose=True,
    allow_delegation=False,
    backstory="""Driven by curiosity, you're at the forefront of innovation, eager to explore and share knowledge that could change the world.""",
    llm=mistral_client
)

# Define the writer agent
writer = Agent(
    role='Writer',
    goal=f'Narrate compelling tech stories around {topic}',
    verbose=True,
    allow_delegation=False,
    backstory="""With a flair for simplifying complex topics, you craft engaging narratives that captivate and educate, bringing new discoveries to light in an accessible manner.""",
    llm=mistral_client
)

# Define the asynchronous research tasks
list_ideas = Task(
    description=f'List of 5 interesting ideas to explore for an article about {topic}.',
    expected_output="Bullet point list of 5 ideas for an article.",
    tools=[search_tool], 
    agent=researcher,
    async_execution=True
)

list_important_history = Task(
    description=f'Research the history of {topic} and identify the 5 most important events. NEVER use information from before 2023.',
    expected_output="Bullet point list of 5 important events.",
    tools=[search_tool],
    agent=researcher,
    async_execution=True
)

# Define the writing task that waits for the outputs of the two research tasks
write_article = Task(
    description=f"Compose an insightful article on {topic}, including its history and the latest interesting ideas.",
    expected_output=f"A 4 paragraph article about {topic} in the market.",
    tools=[search_tool],  #, ContentTools().read_content
    agent=writer,
    context=[list_ideas, list_important_history],  # Depends on the completion of the two asynchronous tasks
    #callback=callback_function
)

# Define the manager's coordination task
manager_task = Task(
    description=f"""Oversee the integration of research findings and narrative development to produce a final comprehensive report on {topic}. Ensure the research is accurately represented and the narrative is engaging and informative.""",
    expected_output=f'A final comprehensive report that combines the research findings and narrative on {topic}.',
    agent=manager
)

# Forming the crew with a hierarchical process including the manager
crew = Crew(
    agents=[manager, researcher, writer],
    tasks=[list_ideas, list_important_history, write_article, manager_task],
    process=Process.hierarchical
)

# Kick off the crew's work
results = crew.kickoff()

# Print the results
print("Crew Work Results:")
print(results)

Part of the code executed outcome in Terminal :


File "C:\Users\adam\anaconda3\envs\crewai_env\Lib\site-packages\openai\_base_client.py", line 947, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

mariusgarmhausen commented 9 months ago

I don't have the solution, but it only happens in combination with the hierarchical setting, Crew(...., process=Process.hierarchical). Commenting out the process attribute lets it run, but obviously not as intended.

Gunther987 commented 9 months ago

It appears that in crew.py on line 184, the manager agent is not picking up the specified llm as expected; it defaults to the predefined OpenAI model.
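To illustrate the failure mode being described, here is a minimal plain-Python sketch (not the real crewAI code; class and function names are illustrative): the hierarchical process constructs its own internal manager Agent with no llm argument, so the field's default factory fires and produces an OpenAI client, regardless of what llm the user set on their own agents.

```python
from dataclasses import dataclass, field

def default_openai():
    # Stand-in for the ChatOpenAI() default_factory in agent.py;
    # in v0.5.0 this ran even when the user never wanted OpenAI.
    return "ChatOpenAI(gpt-4)"

@dataclass
class Agent:
    role: str
    llm: str = field(default_factory=default_openai)

@dataclass
class Crew:
    agents: list

    def _run_hierarchical_process(self):
        # v0.5.0 behaviour: an internal manager Agent is built with
        # no llm argument, so the default factory kicks in.
        manager = Agent(role="Crew Manager")
        return manager.llm

user_agent = Agent(role="Researcher", llm="ChatMistralAI(mistral-small)")
crew = Crew(agents=[user_agent])
print(crew._run_hierarchical_process())  # → ChatOpenAI(gpt-4)
```

The user's agent keeps its Mistral client, but the internally created manager silently uses the default, which matches the GPT-4 token usage reported above.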

Wwilcz2 commented 9 months ago

Can confirm - defaults to OpenAI regardless of what is passed in the "llm" parameter.

clearsitedesigns commented 9 months ago

Can confirm this presents itself whenever you add a custom task or try a different llm. It happens with LM Studio and with Ollama, not just Mistral.

ciaotesla commented 9 months ago

Hello, I am getting these errors with process=Process.hierarchical, which look like the same ones you are getting:

results = crew.kickoff()
          ^^^^^^^^^^^^^^
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/crew.py", line 162, in kickoff
    return self._run_hierarchical_process()
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/crew.py", line 198, in _run_hierarchical_process
    manager = Agent(
File "/home/pi4/.local/lib/python3.11/site-packages/pydantic/main.py", line 171, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/agent.py", line 95, in <lambda>
    default_factory=lambda: ChatOpenAI(
File "/home/pi4/.local/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    super().__init__(**kwargs)
File "/home/pi4/.local/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
__root__
  Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)

Also noticed that in Crew(), passing manager_llm=llm_ollama still points to OpenAI.

Adamchanadam commented 9 months ago

Thanks a lot! šŸ„³šŸ„³šŸ„³ Problem solved by upgrading to v0.5.3 and assigning my Mistral client to manager_llm.

Adding:

crew = Crew(
    agents=[manager, researcher, writer],
    tasks=[list_ideas, list_important_history, write_article, manager_task],
    process=Process.hierarchical,
    manager_llm=mistral_client #new in v.0.5.3
)
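As a defensive check (my own suggestion, not part of crewAI): since the bug silently billed GPT-4 tokens, you can remove the OpenAI key from the process environment before kicking off the crew, so any component that still falls back to ChatOpenAI raises immediately instead of quietly spending money.

```python
import os

# Drop the OpenAI key from this process only (the shell variable is
# untouched), so a hidden ChatOpenAI fallback fails fast with a
# missing-key error rather than silently consuming GPT-4 quota.
os.environ.pop("OPENAI_API_KEY", None)
```

Run the crew after this line; if anything still tries to reach OpenAI, you will see the "Did not find openai_api_key" validation error instead of a surprise bill.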

ciaotesla commented 9 months ago

I can confirm it's working; I am using TinyLlama on a Pi 4.

joaomdmoura commented 9 months ago

Thanks everyone for helping answer this. It was a bug in 0.5.0, but we fixed it, and 0.5.3 should now work just fine if you specify the manager_llm attribute <3. Sorry, and thanks for bearing with me.

gdagitrep commented 7 months ago

@joaomdmoura Shouldn't we instead pass a manager agent? I see @Adamchanadam created a manager agent and provided an llm to it too, but it seems like CrewAI is creating a manager agent by itself using a predefined prompt.
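The behaviour being requested could be sketched like this (plain Python, not the real crewAI API; resolve_manager and its parameters are hypothetical names): if the caller supplies their own manager agent, the hierarchical process should use it wholesale, and only fall back to an internally built manager, with manager_llm overriding the default, when no agent is given.

```python
def resolve_manager(user_manager, manager_llm, default_llm="ChatOpenAI"):
    """Hypothetical precedence: user agent > manager_llm > built-in default."""
    if user_manager is not None:
        # Honour the caller's agent wholesale: role, backstory, llm and all.
        return user_manager
    # Fall back to an internal manager built from a predefined prompt,
    # with manager_llm (if given) replacing the default OpenAI client.
    return {"role": "Crew Manager", "llm": manager_llm or default_llm}

print(resolve_manager(None, "ChatMistralAI"))
# → {'role': 'Crew Manager', 'llm': 'ChatMistralAI'}
```

This would let a user-defined manager like the one in the original post actually drive the hierarchical process, instead of only swapping its LLM.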