joaomdmoura / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License

Separating each Agent's LLM from the manager LLM when using the hierarchical process, so as to fix a quota issue affecting Crew results #692

Open Yembot31013 opened 1 month ago

Yembot31013 commented 1 month ago

I am using the Google Gemini model on the free plan.

I want to avoid errors like:

2024-05-26 18:26:30,636 - 14200 - before_sleep.py-before_sleep:65 - WARNING: Retrying langchain_google_genai.chat_models._chat_with_retry.<locals>._chat_with_retry in 8.0 seconds as it raised ResourceExhausted: 429 Resource has been exhausted (e.g. check quota)..

I suggest allowing us to set an isolated LLM for each Agent, and for the manager agent as well, when using the hierarchical process. As it stands, I cannot achieve anything with the hierarchical process.

noggynoggy commented 1 month ago

You have been able to define your own manager agent since #474; with that you should be able to specify separate LLMs.
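
For example (a minimal sketch, assuming the manager_agent parameter on Crew and a LangChain Gemini client; the roles, keys, and task are placeholders):

from crewai import Agent, Crew, Process, Task
from langchain_google_genai import ChatGoogleGenerativeAI

# Separate Gemini clients, each with its own API key, to spread quota usage.
manager_llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="KEY_1")
worker_llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="KEY_2")

manager = Agent(
    role="Project Manager",
    goal="Coordinate the crew and delegate tasks",
    backstory="...",
    llm=manager_llm,          # the manager gets its own LLM
    allow_delegation=True,
)

analyst = Agent(
    role="Data Analyst",
    goal="Extract actionable insights",
    backstory="...",
    llm=worker_llm,           # each worker agent gets its own LLM
)

task = Task(
    description="Summarize last week's sales data.",
    expected_output="A short summary.",
    agent=analyst,
)

crew = Crew(
    agents=[analyst],
    tasks=[task],
    process=Process.hierarchical,
    manager_agent=manager,    # custom manager instead of a shared manager_llm
)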

Yembot31013 commented 1 month ago

A manager agent? How? I want each Agent to have a separate LLM, not a single LLM working for all of them. I know I can assign a manager LLM to my Crew when using the hierarchical process, but my concern is this: when the manager LLM delegates a task to an Agent, will it use the LLM defined in that Agent's class, or will it use the manager LLM to answer the question and only use the Agent's tools?

If the answer is no, I suggest making them separable, since I am hitting ResourceExhausted errors with Gemini and want to use a different API key for each of my Agents and for the manager LLM. If the answer is yes, I suggest adding new functionality: when a specific, recognizable exception is raised, the crew should wait a given number of seconds before retrying, or perhaps exit the whole execution, since some results may be necessary for the quality of the final Crew result. This could be made dynamic with an argument like error_handler_config: List[dict], where each dict contains:

{
    "exception": [SomeException, ...],  # list of exception classes to catch
    "callback": some_handler,  # a built-in callback that waits a custom/default number of seconds, exits the program, runs a custom function, etc. (whatever you want)
}

Something like this. Thanks.
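
For illustration, here is a rough sketch of the behavior I am proposing (error_handler_config is not an existing CrewAI argument, and the wrapper and helper names here are hypothetical):

import time
from google.api_core.exceptions import ResourceExhausted  # the 429 quota error raised by Gemini

def wait_then_retry(seconds=60):
    # Built-in style callback: pause before the next attempt.
    time.sleep(seconds)

error_handler_config = [
    {"exception": [ResourceExhausted], "callback": wait_then_retry},
]

def run_with_handlers(fn, config, max_attempts=5):
    # Sketch of how the crew could apply the handlers around an LLM call.
    catchable = tuple(exc for entry in config for exc in entry["exception"])
    for _ in range(max_attempts):
        try:
            return fn()
        except catchable as err:
            # Find the matching entry and invoke its callback before retrying.
            for entry in config:
                if isinstance(err, tuple(entry["exception"])):
                    entry["callback"]()
                    break
    raise RuntimeError("retries exhausted")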

noggynoggy commented 1 month ago
  1. Please use commas.
  2. There is no "they". CrewAI is a community project, so it's "we".
  3. If you want something changed, you should try to implement it yourself if you can, and then create a PR.

Every Agent object, including the manager, already has an llm parameter that you can set independently.

https://docs.crewai.com/core-concepts/Agents/

from crewai import Agent
from langchain_google_genai import ChatGoogleGenerativeAI

# Any LangChain chat model works here; Gemini is shown to match the issue.
my_llm = ChatGoogleGenerativeAI(model="gemini-pro", google_api_key="...")

agent = Agent(
  role='Data Analyst',
  goal='Extract actionable insights',
  backstory="...",
  llm=my_llm,  # each Agent can carry its own LLM (and its own API key)
)

Depending on what you want to achieve, you might also want to look into asynchronous task execution:

https://docs.crewai.com/core-concepts/Tasks/#asynchronous-execution
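
It is enabled per task, roughly like this (a minimal sketch; the description and expected output are placeholders):

from crewai import Task

research_task = Task(
    description="Gather background data on the target market.",
    expected_output="A short research summary.",
    agent=agent,
    async_execution=True,  # run concurrently instead of blocking the next task
)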

settur1409 commented 1 month ago

Hello everyone, just curious to understand: is it possible to implement an agent without an LLM using CrewAI? From the documentation: [screenshot of the Agent attributes]

In other words, if I want to create an agent that purely queries data without an LLM, is this possible with CrewAI?

Kindly correct me if I am missing something.

greg80303 commented 3 weeks ago

@settur1409 If you don't provide an LLM, it will use OpenAI's gpt-4 by default (subject to the OPENAI_MODEL_NAME env var):

llm: Any = Field(
    default_factory=lambda: ChatOpenAI(
        model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4")
    ),
    description="Language model that will run the agent.",
)
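
So every agent runs an LLM; the default can be redirected but not removed. For example (a sketch, assuming the variable is set before the Agent's default LLM is constructed):

import os

# Must be set before the Agent is created, since the default factory reads it.
os.environ["OPENAI_MODEL_NAME"] = "gpt-3.5-turbo"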