crewAIInc / crewAI

Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
https://crewai.com
MIT License
18.5k stars 2.54k forks

Issue running main example code #193

Open jmbenedetto opened 6 months ago

jmbenedetto commented 6 months ago

Hello. I hope you are doing fine.

I tried to run the main example code from readme.md but got an error message: Cannot run the event loop while another loop is running.

I'm using a conda environment created for this experiment with Python 3.9 in VSCode.

Please see the entire error message below:


[DEBUG]: Working Agent: Senior Research Analyst

[INFO]: Starting Task: Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.
  Your final answer MUST be a full analysis report

> Entering new CrewAgentExecutor chain...
Thought: To conduct a comprehensive analysis of the latest advancements in AI in 2024, I need to search for the most recent and relevant information available. Therefore, I need to use a tool.

Action: duckduckgo_search
Action Input: Latest advancements in AI in 2024

{
    "name": "RuntimeError",
    "message": "Cannot run the event loop while another loop is running",
    "stack": "---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_community/utilities/duckduckgo_search.py:57, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
     56     if ddgs_gen:
---> 57         return [r for r in ddgs_gen]
     58 return []

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_community/utilities/duckduckgo_search.py:57, in <listcomp>(.0)
     56     if ddgs_gen:
---> 57         return [r for r in ddgs_gen]
     58 return []

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py:27, in DDGS._iter_over_async(self, ait)
     26 try:
---> 27     obj = self._loop.run_until_complete(get_next())
     28     yield obj

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/asyncio/base_events.py:625, in BaseEventLoop.run_until_complete(self, future)
    624 self._check_closed()
--> 625 self._check_running()
    627 new_task = not futures.isfuture(future)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/asyncio/base_events.py:586, in BaseEventLoop._check_running(self)
    585 if events._get_running_loop() is not None:
--> 586     raise RuntimeError(
    587         'Cannot run the event loop while another loop is running')

RuntimeError: Cannot run the event loop while another loop is running

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
Cell In[7], line 72
     65 crew = Crew(
     66   agents=[researcher, writer],
     67   tasks=[task1, task2],
     68   verbose=2, # You can set it to 1 or 2 to different logging levels
     69 )
     71 # Get your crew to work!
---> 72 result = crew.kickoff()
     74 print(\"######################\")
     75 print(result)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/crew.py:127, in Crew.kickoff(self)
    124     agent.i18n = I18N(language=self.language)
    126 if self.process == Process.sequential:
--> 127     return self._sequential_loop()

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/crew.py:134, in Crew._sequential_loop(self)
    132 for task in self.tasks:
    133     self._prepare_and_execute_task(task)
--> 134     task_output = task.execute(task_output)
    135     self._logger.log(
    136         \"debug\", f\"[{task.agent.role}] Task output: {task_output}\\n\\n\"
    137     )
    139 if self.max_rpm:

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/task.py:56, in Task.execute(self, context)
     52 if not self.agent:
     53     raise Exception(
     54         f\"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, either consensual or hierarchical.\"
     55     )
---> 56 result = self.agent.execute_task(
     57     task=self.description, context=context, tools=self.tools
     58 )
     60 self.output = TaskOutput(description=self.description, result=result)
     61 return result

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/agent.py:146, in Agent.execute_task(self, task, context, tools)
    143 tools = tools or self.tools
    144 self.agent_executor.tools = tools
--> 146 result = self.agent_executor.invoke(
    147     {
    148         \"input\": task,
    149         \"tool_names\": self.__tools_names(tools),
    150         \"tools\": render_text_description(tools),
    151     },
    152     RunnableConfig(callbacks=[self.tools_handler]),
    153 )[\"output\"]
    155 if self.max_rpm:
    156     self._rpm_controller.stop_rpm_counter()

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)
--> 162     raise e
    163 run_manager.on_chain_end(outputs)
    164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    149 run_manager = callback_manager.on_chain_start(
    150     dumpd(self),
    151     inputs,
    152     name=run_name,
    153 )
    154 try:
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/agents/executor.py:59, in CrewAgentExecutor._call(self, inputs, run_manager)
     57 while self._should_continue(self.iterations, time_elapsed):
     58     if not self.request_within_rpm_limit or self.request_within_rpm_limit():
---> 59         next_step_output = self._take_next_step(
     60             name_to_tool_map,
     61             color_mapping,
     62             inputs,
     63             intermediate_steps,
     64             run_manager=run_manager,
     65         )
     66         if isinstance(next_step_output, AgentFinish):
     67             return self._return(
     68                 next_step_output, intermediate_steps, run_manager=run_manager
     69             )

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1088 def _take_next_step(
   1089     self,
   1090     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1094     run_manager: Optional[CallbackManagerForChainRun] = None,
   1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1096     return self._consume_next_step(
-> 1097         [
   1098             a
   1099             for a in self._iter_next_step(
   1100                 name_to_tool_map,
   1101                 color_mapping,
   1102                 inputs,
   1103                 intermediate_steps,
   1104                 run_manager,
   1105             )
   1106         ]
   1107     )

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
   1088 def _take_next_step(
   1089     self,
   1090     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1094     run_manager: Optional[CallbackManagerForChainRun] = None,
   1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1096     return self._consume_next_step(
-> 1097         [
   1098             a
   1099             for a in self._iter_next_step(
   1100                 name_to_tool_map,
   1101                 color_mapping,
   1102                 inputs,
   1103                 intermediate_steps,
   1104                 run_manager,
   1105             )
   1106         ]
   1107     )

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/crewai/agents/executor.py:191, in CrewAgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    189         tool_run_kwargs[\"llm_prefix\"] = \"\"
    190     # We then call the tool on the tool input to get an observation
--> 191     observation = tool.run(
    192         agent_action.tool_input,
    193         verbose=self.verbose,
    194         color=color,
    195         callbacks=run_manager.get_child() if run_manager else None,
    196         **tool_run_kwargs,
    197     )
    198 else:
    199     tool_run_kwargs = self.agent.tool_run_logging_kwargs()

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_core/tools.py:373, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    371 except (Exception, KeyboardInterrupt) as e:
    372     run_manager.on_tool_error(e)
--> 373     raise e
    374 else:
    375     run_manager.on_tool_end(
    376         str(observation), color=color, name=self.name, **kwargs
    377     )

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_core/tools.py:345, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    342     parsed_input = self._parse_input(tool_input)
    343     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    344     observation = (
--> 345         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    346         if new_arg_supported
    347         else self._run(*tool_args, **tool_kwargs)
    348     )
    349 except ToolException as e:
    350     if not self.handle_tool_error:

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_community/tools/ddg_search/tool.py:39, in DuckDuckGoSearchRun._run(self, query, run_manager)
     33 def _run(
     34     self,
     35     query: str,
     36     run_manager: Optional[CallbackManagerForToolRun] = None,
     37 ) -> str:
     38     \"\"\"Use the tool.\"\"\"
---> 39     return self.api_wrapper.run(query)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_community/utilities/duckduckgo_search.py:81, in DuckDuckGoSearchAPIWrapper.run(self, query)
     79 \"\"\"Run query through DuckDuckGo and return concatenated results.\"\"\"
     80 if self.source == \"text\":
---> 81     results = self._ddgs_text(query)
     82 elif self.source == \"news\":
     83     results = self._ddgs_news(query)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/langchain_community/utilities/duckduckgo_search.py:47, in DuckDuckGoSearchAPIWrapper._ddgs_text(self, query, max_results)
     44 \"\"\"Run query through DuckDuckGo text search and return results.\"\"\"
     45 from duckduckgo_search import DDGS
---> 47 with DDGS() as ddgs:
     48     ddgs_gen = ddgs.text(
     49         query,
     50         region=self.region,
   (...)
     54         backend=self.backend,
     55     )
     56     if ddgs_gen:

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py:19, in DDGS.__exit__(self, exc_type, exc_val, exc_tb)
     18 def __exit__(self, exc_type, exc_val, exc_tb) -> None:
---> 19     self._loop.run_until_complete(self.__aexit__(exc_type, exc_val, exc_tb))

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/asyncio/base_events.py:625, in BaseEventLoop.run_until_complete(self, future)
    614 \"\"\"Run until the Future is done.
    615 
    616 If the argument is a coroutine, it is wrapped in a Task.
   (...)
    622 Return the Future's result, or raise its exception.
    623 \"\"\"
    624 self._check_closed()
--> 625 self._check_running()
    627 new_task = not futures.isfuture(future)
    628 future = tasks.ensure_future(future, loop=self)

File /opt/homebrew/Caskroom/mambaforge/base/envs/llm_env/lib/python3.10/asyncio/base_events.py:586, in BaseEventLoop._check_running(self)
    584     raise RuntimeError('This event loop is already running')
    585 if events._get_running_loop() is not None:
--> 586     raise RuntimeError(
    587         'Cannot run the event loop while another loop is running')

RuntimeError: Cannot run the event loop while another loop is running"
}

Could you please advise?

Thanks a lot.

thomasrnp commented 6 months ago

I have the very same issue. I am using Python 3.10.13.

tdj28 commented 6 months ago

Same problem [EDIT: not with the raw example from the README, which I was unable to replicate, but when I call the example as an async call]. A quick fix is to pin your duckduckgo-search version to 4.2; it looks like either duckduckgo-search introduced a bug in 4.3 (released within the past 24 hours of this message), or crewAI/langchain needs to absorb the changes introduced in 4.3.

duckduckgo-search==4.2
tdj28 commented 6 months ago

This appears to be an issue with how users use async with langchain/crew:

Error does not occur here:

from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()

print(search.run("Obama's first name?"))

The error does occur here:

from langchain_community.tools import DuckDuckGoSearchRun
import asyncio

async def main():
    search = DuckDuckGoSearchRun()

    print(search.run("Obama's first name?"))

asyncio.run(main())
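The mechanism behind the failure: the library's synchronous wrapper drives its own private event loop via run_until_complete, which asyncio forbids while another loop is already running. A stripped-down sketch (hypothetical code, no CrewAI or langchain involved) that triggers the identical RuntimeError:

```python
import asyncio

def sync_wrapper():
    # Mimics a sync API that drives its own private event loop,
    # roughly what duckduckgo-search 4.3 did under the hood.
    loop = asyncio.new_event_loop()
    coro = asyncio.sleep(0, result="ok")
    try:
        return loop.run_until_complete(coro)
    except RuntimeError:
        coro.close()  # avoid a "coroutine was never awaited" warning
        raise
    finally:
        loop.close()

async def main():
    try:
        # Called while main()'s own loop is running -> RuntimeError
        return sync_wrapper()
    except RuntimeError as e:
        return str(e)

print(asyncio.run(main()))
# → Cannot run the event loop while another loop is running
```

Outside of any running loop, sync_wrapper() returns "ok" just fine, which matches the observation that the plain script works and only the async variant fails.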

If async is needed, this works instead:

from langchain_community.tools import DuckDuckGoSearchRun
import asyncio

async def main():
    search = DuckDuckGoSearchRun()

    result = await asyncio.to_thread(search.run, "Obama's first name?")
    print(result)

asyncio.run(main())
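Note that asyncio.to_thread requires Python 3.9+. On older interpreters the same offloading can be written with loop.run_in_executor; a sketch under that assumption, with a hypothetical blocking_search standing in for search.run:

```python
import asyncio

def blocking_search(query: str) -> str:
    # Stand-in for search.run(query); any blocking call works here.
    return f"results for: {query}"

async def main() -> str:
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; this is essentially
    # what asyncio.to_thread does for you on 3.9+.
    return await loop.run_in_executor(None, blocking_search, "Obama's first name?")

print(asyncio.run(main()))
# → results for: Obama's first name?
```

Either way the blocking call runs on a worker thread, so it never tries to start a second event loop inside the running one.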
tdj28 commented 6 months ago

Here is a crewai example that works with duckduckgo-search >= 4.3, where crew.kickoff is offloaded via await asyncio.to_thread:

import os
import asyncio
from crewai import Agent, Task, Crew, Process

#os.environ["OPENAI_API_KEY"] = "YOUR KEY"

# You can choose to use a local model through Ollama for example.
#
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")

# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search

from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()

async def main():

    # Define your agents with roles and goals
    researcher = Agent(
      role='Senior Research Analyst',
      goal='Uncover cutting-edge developments in AI and data science',
      backstory="""You work at a leading tech think tank.
      Your expertise lies in identifying emerging trends.
      You have a knack for dissecting complex data and presenting
      actionable insights.""",
      verbose=True,
      allow_delegation=False,
      tools=[search_tool]
    )
    writer = Agent(
      role='Tech Content Strategist',
      goal='Craft compelling content on tech advancements',
      backstory="""You are a renowned Content Strategist, known for
      your insightful and engaging articles.
      You transform complex concepts into compelling narratives.""",
      verbose=True,
      allow_delegation=True,
      # (optional) llm=ollama_llm
    )

    # Create tasks for your agents
    task1 = Task(
      description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
      Identify key trends, breakthrough technologies, and potential industry impacts.
      Your final answer MUST be a full analysis report""",
      agent=researcher
    )

    task2 = Task(
      description="""Using the insights provided, develop an engaging blog
      post that highlights the most significant AI advancements.
      Your post should be informative yet accessible, catering to a tech-savvy audience.
      Make it sound cool, avoid complex words so it doesn't sound like AI.
      Your final answer MUST be the full blog post of at least 4 paragraphs.""",
      agent=writer
    )

    # Instantiate your crew with a sequential process
    crew = Crew(
      agents=[researcher, writer],
      tasks=[task1, task2],
      verbose=2, # You can set it to 1 or 2 for different logging levels
    )

    # Get your crew to work!
    result = await asyncio.to_thread(crew.kickoff)

    print("######################")
    print(result)

asyncio.run(main())
deedy5 commented 6 months ago

Fixed in duckduckgo_search v4.4: pip install -U duckduckgo_search

thomasrnp commented 6 months ago

@tdj28 + @deedy5: thanks for picking up the issue. I have updated to 4.4. I still get warnings (e.g. duckduckgo_search.py:47: UserWarning: DDGS running in an async loop. This may cause errors. Use AsyncDDGS instead. with DDGS() as ddgs), but otherwise the demo script now runs.
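If that UserWarning is just noise while the upstream fix settles, it can be filtered with the stdlib warnings module. A sketch assuming the warning text above, with a hypothetical emit() standing in for the library call that triggers it:

```python
import warnings

# The message argument is a regex matched against the start of the
# warning text, so a prefix of the observed message is enough.
warnings.filterwarnings(
    "ignore",
    message="DDGS running in an async loop",
    category=UserWarning,
)

def emit() -> str:
    # Stand-in for the duckduckgo-search call that raises the warning
    warnings.warn(
        "DDGS running in an async loop. This may cause errors. "
        "Use AsyncDDGS instead.",
        UserWarning,
    )
    return "search ran"

print(emit())
# → search ran
```

This only hides the symptom; the cleaner long-term route is whatever async-safe API the library itself recommends.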

github-actions[bot] commented 2 days ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.