run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: "Event loop is closed" when using LATS agent #16336

Open ArthurDelannoyazerty opened 1 month ago

ArthurDelannoyazerty commented 1 month ago

Bug Description

Following the LATS agent tutorial (https://docs.llamaindex.ai/en/stable/examples/agent/lats_agent/) results in RuntimeError: Event loop is closed.

Version

0.11.12

Steps to Reproduce

Here is the relevant code:


from llama_index.core import Settings
from llama_index.core.llms import ChatMessage, MessageRole
from llama_index.core.tools import FunctionTool
from llama_index.llms.ollama import Ollama
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.agent.lats import LATSAgentWorker

# ollama_url and system_prompt are defined elsewhere in the script
model_name = "mistral-nemo:12b-instruct-2407-q6_K"

llm = Ollama(model=model_name, base_url=ollama_url, temperature=0)

embed_model = OllamaEmbedding(
    model_name=model_name,
    base_url=ollama_url,
    ollama_additional_kwargs={"mirostat": 0},
)

Settings.embed_model = embed_model

def add_numbers(a: int, b: int) -> int:
    return a + b

def multiply_numbers(a: int, b: int) -> int:
    return a * b

def capitalize_text(text: str) -> str:
    return text.upper()

function_tools = [
    FunctionTool.from_defaults(fn=add_numbers),
    FunctionTool.from_defaults(fn=multiply_numbers),
    FunctionTool.from_defaults(fn=capitalize_text),
]

message = "What is 3*5*8*(9+5) ?"

agent_worker = LATSAgentWorker.from_tools(
    tools=function_tools,
    chat_history=[ChatMessage(role=MessageRole.SYSTEM, content=system_prompt)],
    llm=llm,
    verbose=True,
)
agent = agent_worker.as_agent()
response = agent.chat(message)

Relevant Logs/Tracebacks

> Selecting node to expand: Observation: What is 3*5*8*(9+5) ?
> Got candidates: ['Calculate the expression step by step.', 'Use the order of operations (PEMDAS/BODMAS).']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
> Generated new reasoning step: Thought: The user has asked for a calculation, which involves multiplication and addition. I'll use the `multiply_numbers` tool to perform these operations.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has asked for a calculation that involves multiplication, addition, and parentheses. I'll use the multiply_numbers tool to handle the multiplication part first, then use add_numbers for the addition inside the parentheses.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning='The conversation is almost correct. The user asked for a calculation and the assistant started to calculate it step by step. However, the answer is not found yet because only the first multiplication was performed.'

> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning="The model has correctly identified that multiplication and addition are involved in the calculation. It has also correctly applied the order of operations by performing the multiplication first, using the multiply_numbers tool with the correct inputs (3 and 5). However, it hasn't yet handled the addition inside the parentheses or multiplied the result by 8."

> Got final response: I am still thinking.
> Selecting node to expand: Observation: 15
Traceback (most recent call last):
  File "/data/adelannoy/llm-battle/src/sandbox/llm__text_to_api_stac.py", line 132, in <module>
    response = agent.chat(message)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 265, in wrapper
    result = func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 647, in chat
    chat_response = self._chat(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 265, in wrapper
    result = func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 579, in _chat
    cur_step_output = self._run_step(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 265, in wrapper
    result = func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 412, in _run_step
    cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 265, in wrapper
    result = func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 41, in wrapper
    return func(self, *args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/custom/simple.py", line 210, in run_step
    agent_response, is_done = self._run_step(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 265, in wrapper
    result = func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 222, in _run_step
    return asyncio.run(self._arun_step(state, task, input))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 241, in _arun_step
    new_candidates = await self._get_next_candidates(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 175, in _get_next_candidates
    candidates = await self.llm.astructured_predict(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/llm.py", line 417, in astructured_predict
    result = await program.acall(**prompt_args)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/program/function_program.py", line 223, in acall
    agent_response = await self._llm.apredict_and_call(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/function_calling.py", line 239, in apredict_and_call
    response = await self.achat_with_tools(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/function_calling.py", line 74, in achat_with_tools
    response = await self.achat(**chat_kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/callbacks.py", line 75, in wrapped_async_llm_chat
    f_return_val = await f(_self, messages, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/llms/ollama/base.py", line 401, in achat
    response = await self.async_client.chat(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/ollama/_client.py", line 654, in chat
    return await self._request_stream(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/ollama/_client.py", line 518, in _request_stream
    response = await self._request(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/ollama/_client.py", line 483, in _request
    response = await self._client.request(method, url, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_client.py", line 1585, in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_client.py", line 1674, in send
    response = await self._send_handling_auth(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_client.py", line 1702, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_client.py", line 1776, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpx/_transports/default.py", line 377, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/connection.py", line 101, in handle_async_request
    return await self._connection.handle_async_request(request)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/http11.py", line 142, in handle_async_request
    await self._response_closed()
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/http11.py", line 257, in _response_closed
    await self.aclose()
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_async/http11.py", line 265, in aclose
    await self._network_stream.aclose()
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/httpcore/_backends/anyio.py", line 55, in aclose
    await self._stream.aclose()
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 1208, in aclose
    self._transport.close()
  File "/usr/lib/python3.8/asyncio/selector_events.py", line 692, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 719, in call_soon
    self._check_closed()
  File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
dosubot[bot] commented 1 month ago

Hey @ArthurDelannoyazerty! I'm here to assist you with any bugs, questions, or contributions you might have. Let's tackle this issue together!

The "Event loop is closed" error typically occurs when trying to run asynchronous code after the event loop has been terminated. This can happen if the event loop is not properly managed or if it is closed prematurely by the application or framework you are using.

To address this issue, you can ensure that the event loop is correctly initialized and managed. Here are some steps you can take:

  1. Check Event Loop Initialization: Make sure that the event loop is properly initialized before running any asynchronous tasks. You can create a new event loop and set it as the current event loop using the following code:

    import asyncio
    
    # Create a new event loop
    loop = asyncio.new_event_loop()
    
    # Set the event loop as the current event loop
    asyncio.set_event_loop(loop)
  2. Use asyncio.run(): If you are running a main asynchronous function, consider using asyncio.run() to ensure that the event loop is properly managed. This function handles the creation and closure of the event loop automatically; a minimal sketch follows below.

  3. Avoid Nested Event Loops: If you are working in an environment like Jupyter notebooks, where an event loop might already be running, you can use nest_asyncio.apply() to allow nested event loops:

    import nest_asyncio
    nest_asyncio.apply()
  4. Ensure Proper Closure: Make sure that the event loop is not being closed prematurely by any part of your code or by external libraries.

By following these steps, you should be able to manage the event loop lifecycle more effectively and avoid the "Event loop is closed" error.
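
As a minimal sketch of step 2 (assuming the agent and message objects from the reproduction above, and the async achat method that LlamaIndex agent runners expose), wrap the whole interaction in one coroutine so that asyncio.run() owns the event loop for its full lifetime:

import asyncio

async def main():
    # asyncio.run() creates the loop, runs this coroutine, and closes the
    # loop exactly once, so nothing later finds it already closed.
    return await agent.achat(message)

response = asyncio.run(main())
print(response)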

To continue talking to Dosu, mention @dosu.

logan-markewich commented 1 month ago

Usually this error means some other exception happened and asyncio swallowed it (as it tends to do).

Probably your LLM output something that wasn't parsable, which is pretty common with open-source LLMs.
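
One way to check that is to drive the agent through its async entry point directly, so the real exception surfaces instead of being masked by the loop-closure error. A minimal debugging sketch, assuming the agent and message objects from the reproduction above:

import asyncio
import logging

logging.basicConfig(level=logging.INFO)

async def debug_chat():
    try:
        return await agent.achat(message)
    except Exception:
        # Log the original failure instead of letting a nested asyncio.run()
        # hide it behind "Event loop is closed".
        logging.exception("Underlying failure inside the LATS agent")
        raise

response = asyncio.run(debug_chat())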

ArthurDelannoyazerty commented 1 month ago

Weird, because this always happens at the start of the second step. If the LLM I use returned something that could not be parsed, it could happen at any time (the first thought, or the third), not always the second.

I found a way to work around this problem by replacing

response = agent.chat(message)

with:

import asyncio
async def chat_async(message):
    response = await agent.achat(message)
    return response
response = asyncio.run(chat_async(message))

Or:

import asyncio
async def chat_async(message):
    response = await agent.achat(message)
    return response
loop = asyncio.get_event_loop()
response = loop.run_until_complete(chat_async(message))

Which gives the following logs:

> Selecting node to expand: Observation: What is 3*5*8*(9+5) ?
> Got candidates: ['Calculate the expression step by step.', 'Use the order of operations (PEMDAS/BODMAS).']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
> Generated new reasoning step: Thought: The user has asked for a calculation, which involves multiplication and addition. I'll use the `multiply_numbers` tool to perform these operations.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has asked for a calculation using multiplication, addition, and parentheses. I'll use the multiply_numbers tool to handle the multiplication part first.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning='The conversation is mostly correct so far. The user asked for a calculation and the assistant started performing it correctly by multiplying 3 and 5. However, the conversation is not complete yet as all parts of the expression have not been calculated.'

> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning='The conversation has started correctly by identifying the operation to be performed and applying PEMDAS/BODMAS rule. The action taken is correct as multiplication should be performed first. However, the answer is not yet found as only the first part of the calculation (3*5) has been completed. The score is 8 out of 10 because while the approach is correct, the answer is not yet complete.'

> Got final response: I am still thinking.
> Selecting node to expand: Observation: 15
> Got candidates: ['Action: multiply_numbers', 'Action: add_numbers']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 15, "b": 2}
=== Function Output ===
30
> Generated new reasoning step: Thought: The user has asked for a calculation, which involves multiplication. I'll use the `multiply_numbers` tool to perform this operation.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has given a number, but I need to perform an operation with it. Since there are no other numbers provided, I'll assume they want to know if this number is even or odd.
Action: multiply_numbers
Action Input: {'a': 15, 'b': 2}
Observation: 30
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning="The model has correctly identified the task as a calculation and started performing the operations step by step. However, it repeated the multiplication of 3 and 5 instead of moving to the next operation. This is why I've given an 8 for correctness. The answer is not found yet, so completeness is False."

> Evaluation for input What is 3*5*8*(9+5) ?
: score=7 is_done=False reasoning='The conversation so far has been mostly correct. The user asked for a calculation and the assistant started performing it correctly by multiplying 3 and 5. However, the correctness score is not 10 because the assistant made an error in the order of operations. Instead of continuing with the multiplication (multiplying 8 with the result of 3*5), the assistant multiplied 15 with 2. This is incorrect as it does not follow the correct mathematical precedence. The completeness is false as the final answer has not been calculated yet.'

> Got final response: I am still thinking.
> Selecting node to expand: Observation: 15
> Got candidates: ['Action: multiply_numbers', 'Action: add_numbers']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: add_numbers with args: {"a": 1, "b": 4}
=== Function Output ===
5
> Generated new reasoning step: Thought: The user has asked for a multiplication operation. I'll use the multiply_numbers tool to handle this.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has asked for a calculation using addition. I'll use the add_numbers tool to handle this.
Action: add_numbers
Action Input: {'a': 1, 'b': 4}
Observation: 5
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning="The conversation has started with a clear query about a mathematical calculation involving multiplication and addition. The assistant correctly identified the need to use the order of operations (PEMDAS/BODMAS) and initiated the process by performing the first multiplication operation using the 'multiply_numbers' tool. However, the action was repeated without moving on to the next step in the calculation, which is incorrect. Therefore, I've given a score of 8 out of 10 for correctness. The answer is not found yet, so completeness is False."

> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning="The conversation so far is almost correct. The order of operations (PEMDAS/BODMAS) was correctly applied by first performing the multiplication inside the parentheses, and then proceeding with addition. However, there seems to be a mistake in the action input for 'add_numbers'. Instead of {'a': 1, 'b': 4}, it should have been {'a': 15, 'b': 4} to reflect the correct result from the previous multiplication step. This error led to an incorrect final observation of 5 instead of the expected 60. Despite this mistake, the thought process and approach towards solving the problem were largely correct."

> Got final response: I am still thinking.
> Selecting node to expand: Observation: 5
> Got candidates: ['multiply_numbers(15, 8)', 'add_numbers(15, 36)']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 15, "b": 8}
=== Function Output ===
120
=== Calling Function ===
Calling function: add_numbers with args: {"a": 15, "b": 36}
=== Function Output ===
51
> Generated new reasoning step: Thought: The user has asked for a calculation using multiplication. I'll use the multiply_numbers tool to handle this.
Action: multiply_numbers
Action Input: {'a': 15, 'b': 8}
Observation: 120
> Generated new reasoning step: Thought: The user has asked for a calculation using addition. I'll use the add_numbers tool to handle this.
Action: add_numbers
Action Input: {'a': 15, 'b': 36}
Observation: 51
> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning='The conversation so far is almost correct. The order of operations was correctly applied and the calculations were performed accurately. However, the final answer has not been found yet as the addition inside the parentheses was not completed before multiplying by 8. This is a minor oversight in the sequence of actions. Therefore, the score is 9 out of 10.'

> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning="The conversation so far is almost correct. The order of operations was correctly applied and the multiplication part was handled accurately. However, the addition part seems to be incorrect as it should be (9+5) not 1+4. This error might have occurred due to misunderstanding or misinterpretation of the user's query. Therefore, the correctness score is 9 out of 10."

> Got final response: I am still thinking.
> Selecting node to expand: Observation: 120
Traceback (most recent call last):
  File "/data/adelannoy/llm-battle/src/sandbox/llm__text_to_api_stac.py", line 140, in <module>
    loop.run_until_complete(chat_async(message))
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/data/adelannoy/llm-battle/src/sandbox/llm__text_to_api_stac.py", line 137, in chat_async
    response = await agent.achat(message)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 56, in async_wrapper
    return await func(self, *args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 672, in achat
    chat_response = await self._achat(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 614, in _achat
    cur_step_output = await self._arun_step(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 453, in _arun_step
    cur_step_output = await self.agent_worker.arun_step(step, task, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 56, in async_wrapper
    return await func(self, *args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/custom/simple.py", line 223, in arun_step
    agent_response, is_done = await self._arun_step(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 241, in _arun_step
    new_candidates = await self._get_next_candidates(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 175, in _get_next_candidates
    candidates = await self.llm.astructured_predict(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/llm.py", line 417, in astructured_predict
    result = await program.acall(**prompt_args)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/program/function_program.py", line 223, in acall
    agent_response = await self._llm.apredict_and_call(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
    result = await func(*args, **kwargs)
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/function_calling.py", line 248, in apredict_and_call
    tool_calls = self.get_tool_calls_from_response(
  File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/llms/ollama/base.py", line 254, in get_tool_calls_from_response
    raise ValueError(
ValueError: Expected at least one tool call, but got 0 tool calls.

The model hallucinates numbers, but it's still better than before. Also, I now get this error: ValueError: Expected at least one tool call, but got 0 tool calls. @dosu ?
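
For now I am considering a crude retry around the whole chat whenever the model returns no tool call. Just a rough sketch (the retry count is arbitrary), not a fix for the underlying parsing problem:

import asyncio

async def chat_with_retries(agent, message, max_attempts=3):
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return await agent.achat(message)
        except ValueError as exc:
            # "Expected at least one tool call" surfaces as a ValueError.
            last_error = exc
            print(f"Attempt {attempt} failed: {exc}")
    raise last_error

response = asyncio.run(chat_with_retries(agent, message))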