run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Bug]: "Event Loop Closed" error in Evaluator calls - only affects MistralAI (llama-index-llms-mistralai) #13827

Closed fariazz closed 1 month ago

fariazz commented 1 month ago

Bug Description

I'm using the Evaluator within a Flask app.

This works fine with all the other LLMs I have tried, but fails with MistralAI: llama-index-llms-mistralai

Version

llama-index-core==0.10.39, llama-index-llms-mistralai==0.1.15

Steps to Reproduce

This is the code I use in my Flask app, where the issue happens:

evaluator.evaluate(
    query=query_escaped,
    response=response_escaped
)

Also the async version fails with the same error:

eval_result_obj = asyncio.run(evaluator.aevaluate(
    query=query_escaped,
    response=response_escaped
))

Relevant Logs/Tracebacks

File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 1488, in __call__
    return self.wsgi_app(environ, start_response)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 1466, in wsgi_app
    response = self.handle_exception(e)
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 1463, in wsgi_app
    response = self.full_dispatch_request()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 872, in full_dispatch_request
    rv = self.handle_user_exception(e)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 870, in full_dispatch_request
    rv = self.dispatch_request()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\flask\app.py", line 855, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\projects\zva-course-support-chatbot\app.py", line 225, in chat
    eval_result_obj = asyncio.run(evaluator.aevaluate(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\evaluation\guideline.py", line 112, in aevaluate
    eval_response = await self._llm.apredict(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 307, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\llms\llm.py", line 526, in apredict
    chat_response = await self.achat(messages)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\instrumentation\dispatcher.py", line 307, in async_wrapper
    result = await func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\core\llms\callbacks.py", line 73, in wrapped_async_llm_chat
    f_return_val = await f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\llama_index\llms\mistralai\base.py", line 299, in achat
    response = await self._aclient.chat(messages=messages, **all_kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\mistralai\async_client.py", line 206, in chat
    async for response in single_response:
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\mistralai\async_client.py", line 130, in _request
    response = await self._client.request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1530, in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1617, in send
    response = await self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1645, in _send_handling_auth
    response = await self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1682, in _send_handling_redirects
    response = await self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_client.py", line 1719, in _send_single_request
    response = await transport.handle_async_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpx\_transports\default.py", line 366, in handle_async_request
    resp = await self._pool.handle_async_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_async\connection_pool.py", line 234, in handle_async_request
    await self._close_expired_connections()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_async\connection_pool.py", line 195, in _close_expired_connections
    await connection.aclose()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_async\connection.py", line 173, in aclose
    await self._connection.aclose()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_async\http11.py", line 253, in aclose
    await self._network_stream.aclose()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\httpcore\_backends\anyio.py", line 54, in aclose
    await self._stream.aclose()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\anyio\streams\tls.py", line 202, in aclose
    await self.transport_stream.aclose()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\anyio\_backends\_asyncio.py", line 1181, in aclose
    self._transport.close()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\proactor_events.py", line 109, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 793, in call_soon
    self._check_closed()
  File "c:\Users\MYUSER\AppData\Local\Programs\Python\Python312\Lib\asyncio\base_events.py", line 540, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
dosubot[bot] commented 1 month ago

To resolve the "RuntimeError: Event loop is closed" error when using the Evaluator with MistralAI in your Flask app, you need to ensure that the event loop is properly managed. This error typically occurs when the event loop is closed before the asynchronous tasks are completed.

One common solution is to use the asyncio.run function to manage the event loop. Here is an example of how you can modify your Flask app to handle this:

import asyncio
from flask import Flask, request, jsonify
from llama_index.llms.mistralai import MistralAI

app = Flask(__name__)

# Initialize MistralAI with your API key
mistral_ai = MistralAI(api_key="your_api_key_here")

@app.route('/evaluate', methods=['POST'])
def evaluate():
    data = request.json
    query_escaped = data['query']
    response_escaped = data['response']

    # Use asyncio.run to manage the event loop
    eval_result_obj = asyncio.run(evaluator.aevaluate(
        query=query_escaped,
        response=response_escaped
    ))

    return jsonify(eval_result_obj)

if __name__ == '__main__':
    app.run()

In this example, the asyncio.run function is used to run the asynchronous aevaluate function, ensuring that the event loop is properly managed and preventing the "RuntimeError: Event loop is closed" error [1].

To continue talking to Dosu, mention @dosu.

fariazz commented 1 month ago

(quotes Dosu's previous response in full)

As per my bug report, the issue also occurs when using asyncio.run. This is a poor-quality AI response.

fariazz commented 1 month ago

Also, the main question is why it works like a charm with all the other LLMs except llama-index-llms-mistralai, which happens to be the model we are trying to use.

dosubot[bot] commented 1 month ago

The "RuntimeError: Event loop is closed" error occurs specifically with the llama-index-llms-mistralai model because it uses asynchronous operations with the MistralAsyncClient. This error typically happens when the event loop is closed before the asynchronous operations complete. Other LLMs you mentioned might not use asynchronous operations in the same way, which is why they don't encounter this issue.

To resolve this issue, ensure that the event loop is properly managed and not closed prematurely. You can use the asyncio.run() function to run your asynchronous code, which ensures that the event loop is properly handled.

Here is an example of how you can modify your code to use asyncio.run():

import asyncio
from flask import Flask, request, jsonify
from llama_index.llms.mistralai import MistralAI

app = Flask(__name__)

# Initialize MistralAI with your API key
mistral_ai = MistralAI(api_key="your_api_key_here")

@app.route('/evaluate', methods=['POST'])
def evaluate():
    data = request.json
    query_escaped = data['query']
    response_escaped = data['response']

    # Use asyncio.run to manage the event loop
    eval_result_obj = asyncio.run(evaluator.aevaluate(
        query=query_escaped,
        response=response_escaped
    ))

    return jsonify(eval_result_obj)

if __name__ == '__main__':
    app.run()

This ensures that the event loop is properly managed and prevents the "RuntimeError: Event loop is closed" error. Additionally, make sure you have the mistralai package installed, as it is required for the MistralAI class to function correctly. You can install it using:

pip install mistralai

The key difference with the llama-index-llms-mistralai model is its reliance on asynchronous operations, which requires careful event loop management [1][2].

@logan-markewich, I've hit a roadblock with this one; your insights would be highly valuable.

To continue talking to Dosu, mention @dosu.

fariazz commented 1 month ago

@dosu the base evaluate method is already wrapped in asyncio_run(), so using or not using asyncio.run doesn't make a difference in my code (I've already tested both). For reference:

https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/evaluation/base.py#L49-L69
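
That wrapper is roughly the following (paraphrased from the linked lines; asyncio_run is LlamaIndex's own event-loop helper):

def evaluate(
    self,
    query=None,
    response=None,
    contexts=None,
    **kwargs,
):
    # Synchronous entry point: delegates to the async version and runs it
    # on an event loop managed by asyncio_run.
    return asyncio_run(
        self.aevaluate(
            query=query,
            response=response,
            contexts=contexts,
            **kwargs,
        )
    )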

dosubot[bot] commented 1 month ago

The "RuntimeError: Event loop is closed" issue in your Flask app when using the llama-index-llms-mistralai model is likely due to the improper handling of asynchronous code within a synchronous context. The evaluate method in the BaseEvaluator class uses asyncio_run to run the asynchronous aevaluate method. If the event loop is not managed correctly, it can lead to the "Event loop is closed" error.

To resolve this issue, ensure that the event loop is properly managed. One way to do this is to explicitly create and run a new event loop if the current one is closed. Here is an example of how you can modify the evaluate method to handle this:

import asyncio

class BaseEvaluator(PromptMixin):
    # ... other methods ...

    def evaluate(
        self,
        query: Optional[str] = None,
        response: Optional[str] = None,
        contexts: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> EvaluationResult:
        """Run evaluation with query string, retrieved contexts,
        and generated response string.

        Subclasses can override this method to provide custom evaluation logic and
        take in additional arguments.
        """
        try:
            return asyncio_run(
                self.aevaluate(
                    query=query,
                    response=response,
                    contexts=contexts,
                    **kwargs,
                )
            )
        except RuntimeError as e:
            if str(e) == "Event loop is closed":
                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)
                return loop.run_until_complete(
                    self.aevaluate(
                        query=query,
                        response=response,
                        contexts=contexts,
                        **kwargs,
                    )
                )
            else:
                raise e

Additionally, ensure that the MistralAI class is properly initialized and used. The MistralAI class from the llama-index-llms-mistralai package requires proper setup, including setting the API key and managing asynchronous methods correctly. Here is an example of how to initialize and use the MistralAI class:

from llama_index.llms.mistralai import MistralAI

# Initialize the MistralAI model
llm = MistralAI(api_key="<your_api_key>")

# Example usage of the complete method
resp = llm.complete("Paul Graham is ")
print(resp)

Ensure that you handle asynchronous methods like achat and astream_chat properly within your Flask app to avoid event loop issues [1][2].
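
One way to keep the event loop from being closed between requests in Flask is to run all async work on a single long-lived loop in a background thread instead of calling asyncio.run per request. A minimal sketch of that pattern (not specific to LlamaIndex; evaluator is assumed to be configured elsewhere):

import asyncio
import threading

# One event loop for the whole process, running on a daemon thread.
_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

def run_async(coro):
    # Submit a coroutine to the shared loop from synchronous Flask code
    # and block until its result is available.
    return asyncio.run_coroutine_threadsafe(coro, _loop).result()

# Inside a Flask view:
# eval_result_obj = run_async(
#     evaluator.aevaluate(query=query_escaped, response=response_escaped)
# )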

To continue talking to Dosu, mention @dosu.

fariazz commented 1 month ago

What you are proposing is modifying the LlamaIndex library itself. I think that's a bit out of scope, since the issue only affects MistralAI.

logan-markewich commented 1 month ago

"Event loop is closed" usually means there is some other error that caused the event loop to close in the first place.

This can be somewhat difficult to debug because of how asyncio can swallow errors like this. It's likely related to the LLM call, which you could test yourself easily with await llm.acomplete("test") surrounded by a try/except to catch any obvious errors.
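
For example, a minimal standalone sketch of that check (the API key is a placeholder):

import asyncio
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(api_key="<your_api_key>")

async def probe():
    try:
        # Any underlying API or client error surfaces here directly,
        # instead of being masked later by "Event loop is closed".
        resp = await llm.acomplete("test")
        print(resp)
    except Exception as exc:
        print(f"MistralAI call failed: {exc!r}")

asyncio.run(probe())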

logan-markewich commented 1 month ago

Or, don't use Flask and use FastAPI (I'm not 100% sure how well Flask even supports async).
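
A minimal FastAPI sketch of what that could look like (the route name and evaluator wiring are assumptions):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EvalRequest(BaseModel):
    query: str
    response: str

@app.post("/evaluate")
async def evaluate(req: EvalRequest):
    # FastAPI runs this coroutine on its own event loop, so aevaluate can
    # simply be awaited; evaluator is assumed to be configured elsewhere.
    result = await evaluator.aevaluate(query=req.query, response=req.response)
    return {"passing": result.passing, "feedback": result.feedback}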

fariazz commented 1 month ago

Thanks @logan-markewich, we decided to switch to llama-index-llms-openailike with another LLM provider instead, since the issue only occurs with llama-index-llms-mistralai. I understand that Flask is not the ideal place for async calls, but that setup works for our use case and seems fine in production so far.

For the record, it was just the evaluator that was giving trouble. Normal chat engine calls with MistralAI were working like a charm.

Anyway, feel free to close the issue since we are using Flask, which is not the ideal framework for async APIs (and thanks for the FastAPI suggestion; we'll probably port over to it if we run into more issues).