Closed: Adamchanadam closed this issue 9 months ago.
I don't have a solution, but it only happens in combination with the hierarchical setting in Crew(..., process=Process.hierarchical). Commenting out the process attribute lets it run, but obviously not as intended.
It appears that in crew.py on line 184, the manager agent is not retrieving the specified llm as expected. It defaults to using the predefined OpenAI model.
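To make the repro concrete, here is a minimal sketch of the failing setup on 0.5.0, assuming an Ollama-served model via langchain_community (the agent and task names are just for illustration):

from crewai import Agent, Crew, Process, Task
from langchain_community.llms import Ollama

# Local model that every agent is explicitly told to use.
llm_ollama = Ollama(model="mistral")

researcher = Agent(
    role="Researcher",
    goal="Collect background material",
    backstory="An analyst who digs up sources.",
    llm=llm_ollama,  # explicitly set on the agent
)

research_task = Task(
    description="Summarize recent findings on the topic.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.hierarchical,  # triggers the internally created manager agent
)

# On 0.5.0 the auto-created manager falls back to ChatOpenAI, so this call
# hits the OpenAI API (or raises if OPENAI_API_KEY is not set).
results = crew.kickoff()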
Can confirm - defaults to OpenAI regardless of what is passed in the "llm" parameter.
Can confirm this presents itself whenever you add a custom task or try to use a different LLM. It happens with LM Studio and with Ollama, not just Mistral.
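For anyone reproducing with LM Studio: since it exposes an OpenAI-compatible server, a sketch like this should stand in for the custom LLM (the port and model name are LM Studio defaults and may differ on your machine):

from langchain_openai import ChatOpenAI

# LM Studio serves an OpenAI-compatible API, typically on port 1234.
llm_local = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # any non-empty string; the local server ignores it
    model="local-model",  # whatever model identifier your server reports
)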
Hello, I am getting this error with process=Process.hierarchical, which looks like the same one you are getting:
results = crew.kickoff()
          ^^^^^^^^^^^^^^
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/crew.py", line 162, in kickoff
    return self._run_hierarchical_process()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/crew.py", line 198, in _run_hierarchical_process
    manager = Agent(
              ^^^^^^
File "/home/pi4/.local/lib/python3.11/site-packages/pydantic/main.py", line 171, in __init__
    self.__pydantic_validator__.validate_python(data, self_instance=self)
File "/home/pi4/.local/lib/python3.11/site-packages/crewai/agent.py", line 95, in <lambda>
    default_factory=lambda: ChatOpenAI(
                            ^^^^^^^^^^^
File "/home/pi4/.local/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 107, in __init__
    super().__init__(**kwargs)
File "/home/pi4/.local/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatOpenAI
__root__
  Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
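Until a fix lands, one workaround for the missing-key crash (though not for unwanted OpenAI usage in general) is to give the auto-created ChatOpenAI placeholder credentials and point it at a local OpenAI-compatible server. A sketch, assuming Ollama's OpenAI-compatible endpoint:

import os

# Placeholder key: the pydantic validator only checks that a key exists.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"
# Redirect the default client to a local OpenAI-compatible server
# (Ollama shown here; substitute your own base URL). Note the manager's
# default model name may still need to exist on that server.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"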
Also noticed that in Crew(), manager_llm=llm_ollama still points to OpenAI.
Thanks a lot! Problem solved by upgrading to v0.5.3 and assigning my Mistral client to manager_llm.
Adding:

crew = Crew(
    agents=[manager, researcher, writer],
    tasks=[list_ideas, list_important_history, write_article, manager_task],
    process=Process.hierarchical,
    manager_llm=mistral_client  # new in v0.5.3
)
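For completeness, one way to build the mistral_client used above, assuming the langchain_mistralai integration and a MISTRAL_API_KEY in your environment (the model name is illustrative):

from langchain_mistralai.chat_models import ChatMistralAI

# Reads MISTRAL_API_KEY from the environment by default.
mistral_client = ChatMistralAI(model="mistral-small")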
I can confirm it's working; I am using TinyLlama on a Pi 4.
Thanks everyone for helping answer this. It was a bug in 0.5.0, but we fixed it, and 0.5.3 should work just fine if you specify the manager_llm attribute <3 Sorry, and thanks for bearing with me.
@joaomdmoura Shouldn't we instead pass a manager agent? I see @Adamchanadam created a manager agent and provided an llm to it too, but it seems like CrewAI is creating a manager agent by itself using a predefined prompt.
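For illustration, this is roughly what passing a custom manager would look like; note that manager_agent is an assumption here, not a Crew parameter that existed at the time of this thread (agent and task names follow the earlier snippet):

manager = Agent(
    role="Project Manager",
    goal="Coordinate the crew and delegate work to the right agent",
    backstory="An experienced coordinator.",
    llm=mistral_client,      # the manager would inherit the custom LLM
    allow_delegation=True,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[list_ideas, write_article],
    process=Process.hierarchical,
    manager_agent=manager,   # hypothetical here; not a parameter at the time
)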
How come my endpoint is still connecting to the OpenAI API as the default LLM as well as the Mistral AI API, even though I've assigned llm=mistral_client to every agent already? May I know what's wrong in my code below? Many thanks. (It cost me a lot of GPT-4 tokens $$$ and exceeded my quota limit....)
I'm using crewAI v0.5.0 (just upgraded today; I could use my Mistral AI API correctly in the previous version with a similar code structure).
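One quick way to see where the GPT-4 traffic comes from is to print the LLM actually bound to each agent before kickoff; the internally created hierarchical manager will not appear in this list, which is exactly the blind spot (agent names here follow the earlier snippet):

# Sanity check: confirm which chat model each agent will actually call.
for agent in [manager, researcher, writer]:
    print(agent.role, "->", type(agent.llm).__name__)
# Expected: ChatMistralAI for every agent. Any ChatOpenAI here means the
# custom llm was not applied. The hierarchical manager's LLM is created
# internally and cannot be inspected this way on 0.5.0.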
Part of the execution output in the terminal:
File "C:\Users\adam\anaconda3\envs\crewai_env\Lib\site-packages\openai\_base_client.py", line 947, in _request raise self._make_status_error_from_response(err.response) from None openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}