Open ArthurDelannoyazerty opened 1 month ago
Hey @ArthurDelannoyazerty! I'm here to assist you with any bugs, questions, or contributions you might have. Let's tackle this issue together!
The "Event loop is closed" error typically occurs when trying to run asynchronous code after the event loop has been terminated. This can happen if the event loop is not properly managed or if it is closed prematurely by the application or framework you are using.
To address this issue, you can ensure that the event loop is correctly initialized and managed. Here are some steps you can take:
Check Event Loop Initialization: Make sure that the event loop is properly initialized before running any asynchronous tasks. You can create a new event loop and set it as the current event loop using the following code:
import asyncio
# Create a new event loop
loop = asyncio.new_event_loop()
# Set the event loop as the current event loop
asyncio.set_event_loop(loop)
Use asyncio.run(): If you are running a main asynchronous function, consider using asyncio.run() to ensure that the event loop is properly managed. This function handles the creation and closure of the event loop automatically.
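A minimal sketch, assuming an `agent` object that exposes the async `achat` method used later in this thread:
import asyncio

async def main():
    # `agent` is assumed to be the LATS agent built in the tutorial
    response = await agent.achat("What is 3*5*8*(9+5) ?")
    print(response)

# asyncio.run() creates the event loop, runs main(), and closes the loop when it returns
asyncio.run(main())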
Avoid Nested Event Loops: If you are working in an environment like Jupyter notebooks, where an event loop might already be running, you can use nest_asyncio.apply() to allow nested event loops:
import nest_asyncio
nest_asyncio.apply()
Ensure Proper Closure: Make sure that the event loop is not being closed prematurely by any part of your code or by external libraries.
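As a rough guard (assuming you manage the loop yourself rather than using asyncio.run()), you can check whether the current loop has already been closed before scheduling new work:
import asyncio

loop = asyncio.get_event_loop()
if loop.is_closed():
    # the previous loop was closed (e.g. by an earlier asyncio.run() call),
    # so create and register a fresh one before running anything else
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)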
By following these steps, you should be able to manage the event loop lifecycle more effectively and avoid the "Event loop is closed" error.
To continue talking to Dosu, mention @dosu.
Usually this error is because some other exception happened, and asyncio swallowed it (as it tends to do)
Probably your LLM output something that wasn't parsable, which is pretty common with open-source LLMs
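One way to surface the swallowed exception is to enable asyncio's debug mode and install an exception handler before running the agent; a rough sketch (the logging setup and handler name are illustrative, not part of LlamaIndex):
import asyncio
import logging

logging.basicConfig(level=logging.DEBUG)

def log_loop_exception(loop, context):
    # log the underlying exception instead of letting asyncio discard it
    logging.error("Unhandled exception in event loop: %s",
                  context.get("exception") or context["message"])

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
loop.set_debug(True)  # report exceptions and slow callbacks that would otherwise stay hidden
loop.set_exception_handler(log_loop_exception)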
Weird, because this always happens at the beginning of the second step. If the LLM I use returned something that cannot be parsed, it could happen at any time (the first thought, or the third), not always the second.
I have a way to solve this problem by replacing
response = agent.chat(message)
with:
import asyncio

async def chat_async(message):
    response = await agent.achat(message)
    return response

response = asyncio.run(chat_async(message))
Or:
import asyncio

async def chat_async(message):
    response = await agent.achat(message)
    return response

loop = asyncio.get_event_loop()
loop.run_until_complete(chat_async(message))
Which gives the following logs:
> Selecting node to expand: Observation: What is 3*5*8*(9+5) ?
> Got candidates: ['Calculate the expression step by step.', 'Use the order of operations (PEMDAS/BODMAS).']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
> Generated new reasoning step: Thought: The user has asked for a calculation, which involves multiplication and addition. I'll use the `multiply_numbers` tool to perform these operations.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has asked for a calculation using multiplication, addition, and parentheses. I'll use the multiply_numbers tool to handle the multiplication part first.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning='The conversation is mostly correct so far. The user asked for a calculation and the assistant started performing it correctly by multiplying 3 and 5. However, the conversation is not complete yet as all parts of the expression have not been calculated.'
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning='The conversation has started correctly by identifying the operation to be performed and applying PEMDAS/BODMAS rule. The action taken is correct as multiplication should be performed first. However, the answer is not yet found as only the first part of the calculation (3*5) has been completed. The score is 8 out of 10 because while the approach is correct, the answer is not yet complete.'
> Got final response: I am still thinking.
> Selecting node to expand: Observation: 15
> Got candidates: ['Action: multiply_numbers', 'Action: add_numbers']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 15, "b": 2}
=== Function Output ===
30
> Generated new reasoning step: Thought: The user has asked for a calculation, which involves multiplication. I'll use the `multiply_numbers` tool to perform this operation.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has given a number, but I need to perform an operation with it. Since there are no other numbers provided, I'll assume they want to know if this number is even or odd.
Action: multiply_numbers
Action Input: {'a': 15, 'b': 2}
Observation: 30
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning="The model has correctly identified the task as a calculation and started performing the operations step by step. However, it repeated the multiplication of 3 and 5 instead of moving to the next operation. This is why I've given an 8 for correctness. The answer is not found yet, so completeness is False."
> Evaluation for input What is 3*5*8*(9+5) ?
: score=7 is_done=False reasoning='The conversation so far has been mostly correct. The user asked for a calculation and the assistant started performing it correctly by multiplying 3 and 5. However, the correctness score is not 10 because the assistant made an error in the order of operations. Instead of continuing with the multiplication (multiplying 8 with the result of 3*5), the assistant multiplied 15 with 2. This is incorrect as it does not follow the correct mathematical precedence. The completeness is false as the final answer has not been calculated yet.'
> Got final response: I am still thinking.
> Selecting node to expand: Observation: 15
> Got candidates: ['Action: multiply_numbers', 'Action: add_numbers']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 3, "b": 5}
=== Function Output ===
15
=== Calling Function ===
Calling function: add_numbers with args: {"a": 1, "b": 4}
=== Function Output ===
5
> Generated new reasoning step: Thought: The user has asked for a multiplication operation. I'll use the multiply_numbers tool to handle this.
Action: multiply_numbers
Action Input: {'a': 3, 'b': 5}
Observation: 15
> Generated new reasoning step: Thought: The user has asked for a calculation using addition. I'll use the add_numbers tool to handle this.
Action: add_numbers
Action Input: {'a': 1, 'b': 4}
Observation: 5
> Evaluation for input What is 3*5*8*(9+5) ?
: score=8 is_done=False reasoning="The conversation has started with a clear query about a mathematical calculation involving multiplication and addition. The assistant correctly identified the need to use the order of operations (PEMDAS/BODMAS) and initiated the process by performing the first multiplication operation using the 'multiply_numbers' tool. However, the action was repeated without moving on to the next step in the calculation, which is incorrect. Therefore, I've given a score of 8 out of 10 for correctness. The answer is not found yet, so completeness is False."
> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning="The conversation so far is almost correct. The order of operations (PEMDAS/BODMAS) was correctly applied by first performing the multiplication inside the parentheses, and then proceeding with addition. However, there seems to be a mistake in the action input for 'add_numbers'. Instead of {'a': 1, 'b': 4}, it should have been {'a': 15, 'b': 4} to reflect the correct result from the previous multiplication step. This error led to an incorrect final observation of 5 instead of the expected 60. Despite this mistake, the thought process and approach towards solving the problem were largely correct."
> Got final response: I am still thinking.
> Selecting node to expand: Observation: 5
> Got candidates: ['multiply_numbers(15, 8)', 'add_numbers(15, 36)']
=== Calling Function ===
Calling function: multiply_numbers with args: {"a": 15, "b": 8}
=== Function Output ===
120
=== Calling Function ===
Calling function: add_numbers with args: {"a": 15, "b": 36}
=== Function Output ===
51
> Generated new reasoning step: Thought: The user has asked for a calculation using multiplication. I'll use the multiply_numbers tool to handle this.
Action: multiply_numbers
Action Input: {'a': 15, 'b': 8}
Observation: 120
> Generated new reasoning step: Thought: The user has asked for a calculation using addition. I'll use the add_numbers tool to handle this.
Action: add_numbers
Action Input: {'a': 15, 'b': 36}
Observation: 51
> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning='The conversation so far is almost correct. The order of operations was correctly applied and the calculations were performed accurately. However, the final answer has not been found yet as the addition inside the parentheses was not completed before multiplying by 8. This is a minor oversight in the sequence of actions. Therefore, the score is 9 out of 10.'
> Evaluation for input What is 3*5*8*(9+5) ?
: score=9 is_done=False reasoning="The conversation so far is almost correct. The order of operations was correctly applied and the multiplication part was handled accurately. However, the addition part seems to be incorrect as it should be (9+5) not 1+4. This error might have occurred due to misunderstanding or misinterpretation of the user's query. Therefore, the correctness score is 9 out of 10."
> Got final response: I am still thinking.
> Selecting node to expand: Observation: 120
Traceback (most recent call last):
File "/data/adelannoy/llm-battle/src/sandbox/llm__text_to_api_stac.py", line 140, in <module>
loop.run_until_complete(chat_async(message))
File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/data/adelannoy/llm-battle/src/sandbox/llm__text_to_api_stac.py", line 137, in chat_async
response = await agent.achat(message)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 56, in async_wrapper
return await func(self, *args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 672, in achat
chat_response = await self._achat(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 614, in _achat
cur_step_output = await self._arun_step(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/runner/base.py", line 453, in _arun_step
cur_step_output = await self.agent_worker.arun_step(step, task, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/callbacks/utils.py", line 56, in async_wrapper
return await func(self, *args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/agent/custom/simple.py", line 223, in arun_step
agent_response, is_done = await self._arun_step(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 241, in _arun_step
new_candidates = await self._get_next_candidates(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/agent/lats/step.py", line 175, in _get_next_candidates
candidates = await self.llm.astructured_predict(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/llm.py", line 417, in astructured_predict
result = await program.acall(**prompt_args)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/program/function_program.py", line 223, in acall
agent_response = await self._llm.apredict_and_call(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/instrumentation/dispatcher.py", line 297, in async_wrapper
result = await func(*args, **kwargs)
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/core/llms/function_calling.py", line 248, in apredict_and_call
tool_calls = self.get_tool_calls_from_response(
File "/data/adelannoy/llm-battle/environment/lib/python3.8/site-packages/llama_index/llms/ollama/base.py", line 254, in get_tool_calls_from_response
raise ValueError(
ValueError: Expected at least one tool call, but got 0 tool calls.
The model hallucinates numbers, but it's still better than before.
Also, I now have this error: ValueError: Expected at least one tool call, but got 0 tool calls.
@dosu ?
Bug Description
Following the tutorial on the LATS agent (https://docs.llamaindex.ai/en/stable/examples/agent/lats_agent/) results in the RuntimeError: Event loop is closed
Version
0.11.12
Steps to Reproduce
Here is the relevant code:
Relevant Logs/Tracebacks