fariazz closed this issue 5 months ago.
To resolve the "RuntimeError: Event loop is closed" error when using the Evaluator with MistralAI in your Flask app, you need to ensure that the event loop is properly managed. This error typically occurs when the event loop is closed before the asynchronous tasks are completed.
One common solution is to use the asyncio.run function to manage the event loop. Here is an example of how you can modify your Flask app to handle this:
import asyncio

from flask import Flask, request, jsonify
from llama_index.legacy.llms.mistral import MistralAI

app = Flask(__name__)

# Initialize MistralAI with your API key
mistral_ai = MistralAI(api_key="your_api_key_here")

# NOTE: `evaluator` is assumed to be an already-initialized evaluator
# instance (e.g. one constructed with the MistralAI LLM above).

@app.route('/evaluate', methods=['POST'])
def evaluate():
    data = request.json
    query_escaped = data['query']
    response_escaped = data['response']
    # Use asyncio.run to manage the event loop
    eval_result_obj = asyncio.run(evaluator.aevaluate(
        query=query_escaped,
        response=response_escaped
    ))
    return jsonify(eval_result_obj)

if __name__ == '__main__':
    app.run()
In this example, the asyncio.run function is used to run the asynchronous aevaluate function, ensuring that the event loop is properly managed and preventing the "RuntimeError: Event loop is closed" error [1].
As per my bug report, the issue also takes place when using asyncio. This is a poor quality AI response.
Also, the main question is why it works like a charm with all other LLMs except llama-index-llms-mistralai, which happens to be the model we are trying to use.
The "RuntimeError: Event loop is closed" error occurs specifically with the llama-index-llms-mistralai
model because it uses asynchronous operations with the MistralAsyncClient
. This error typically happens when the event loop is closed before the asynchronous operations complete. Other LLMs you mentioned might not use asynchronous operations in the same way, which is why they don't encounter this issue.
To resolve this issue, ensure that the event loop is properly managed and not closed prematurely. You can use the asyncio.run() function to run your asynchronous code, which ensures that the event loop is properly handled. Here is an example of how you can modify your code to use asyncio.run():
import asyncio

from flask import Flask, request, jsonify
from llama_index.legacy.llms.mistral import MistralAI

app = Flask(__name__)

# Initialize MistralAI with your API key
mistral_ai = MistralAI(api_key="your_api_key_here")

# NOTE: `evaluator` is assumed to be an already-initialized evaluator
# instance (e.g. one constructed with the MistralAI LLM above).

@app.route('/evaluate', methods=['POST'])
def evaluate():
    data = request.json
    query_escaped = data['query']
    response_escaped = data['response']
    # Use asyncio.run to manage the event loop
    eval_result_obj = asyncio.run(evaluator.aevaluate(
        query=query_escaped,
        response=response_escaped
    ))
    return jsonify(eval_result_obj)

if __name__ == '__main__':
    app.run()
This ensures that the event loop is properly managed and prevents the "RuntimeError: Event loop is closed" error. Additionally, make sure you have the mistralai package installed, as it is required for the MistralAI class to function correctly. You can install it using:

pip install mistralai

The key difference with the llama-index-llms-mistralai model is its reliance on asynchronous operations, which requires careful event loop management [1][2].
@logan-markewich, I've hit a roadblock with this one; your insights would be highly valuable.
@dosu the base evaluate method is already wrapped in asyncio_run(), so using or not using asyncio.run doesn't make a difference in my code (I've already tested both). For reference:
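The wrapper in llama-index-core looks roughly like this (a paraphrased sketch, not the verbatim library source; the exact import path for asyncio_run may vary by version):

# Sketch: BaseEvaluator.evaluate already delegates to the async aevaluate
# through llama-index's asyncio_run helper, so wrapping the call in
# asyncio.run again changes nothing.
from llama_index.core.async_utils import asyncio_run

def evaluate(self, query=None, response=None, contexts=None, **kwargs):
    return asyncio_run(
        self.aevaluate(query=query, response=response, contexts=contexts, **kwargs)
    )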
The "RuntimeError: Event loop is closed" issue in your Flask app when using the llama-index-llms-mistralai
model is likely due to the improper handling of asynchronous code within a synchronous context. The evaluate
method in the BaseEvaluator
class uses asyncio_run
to run the asynchronous aevaluate
method. If the event loop is not managed correctly, it can lead to the "Event loop is closed" error.
To resolve this issue, ensure that the event loop is properly managed. One way to do this is to explicitly create and run a new event loop if the current one is closed. Here is an example of how you can modify the evaluate
method to handle this:
import asyncio
from typing import Any, Optional, Sequence

# PromptMixin, EvaluationResult, and asyncio_run come from the surrounding
# llama-index module this snippet is excerpted from.

class BaseEvaluator(PromptMixin):
    # ... other methods ...

    def evaluate(
        self,
        query: Optional[str] = None,
        response: Optional[str] = None,
        contexts: Optional[Sequence[str]] = None,
        **kwargs: Any,
    ) -> EvaluationResult:
        """Run evaluation with query string, retrieved contexts,
        and generated response string.

        Subclasses can override this method to provide custom evaluation
        logic and take in additional arguments.
        """
        try:
            return asyncio_run(
                self.aevaluate(
                    query=query,
                    response=response,
                    contexts=contexts,
                    **kwargs,
                )
            )
        except RuntimeError as e:
            if str(e) == "Event loop is closed":
                # Create and install a fresh event loop, then retry the call
                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)
                return loop.run_until_complete(
                    self.aevaluate(
                        query=query,
                        response=response,
                        contexts=contexts,
                        **kwargs,
                    )
                )
            else:
                raise e
Additionally, ensure that the MistralAI class is properly initialized and used. The MistralAI class from the llama-index-llms-mistralai package requires proper setup, including setting the API key and managing asynchronous methods correctly. Here is an example of how to initialize and use the MistralAI class:
from llama_index.llms.mistralai import MistralAI
# Initialize the MistralAI model
llm = MistralAI(api_key="<your_api_key>")
# Example usage of the complete method
resp = llm.complete("Paul Graham is ")
print(resp)
Ensure that you handle asynchronous methods like achat and astream_chat properly within your Flask app to avoid event loop issues [1][2].
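For illustration, a minimal sketch of calling the async chat API (assuming the llama-index-llms-mistralai package and a valid API key; the prompt text is a placeholder):

import asyncio

from llama_index.core.llms import ChatMessage
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(api_key="<your_api_key>")

async def main() -> None:
    # achat is the async counterpart of chat and must be awaited
    # inside a running event loop.
    resp = await llm.achat([ChatMessage(role="user", content="Say hello")])
    print(resp)

asyncio.run(main())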
What you are proposing is modifying the LlamaIndex library itself. I think that's a bit out of scope, since the issue only affects Mistral.
"event loop closed" usually means there is some other error causing the event loop to close in the first place.

This can be somewhat difficult to debug because of how asyncio can swallow errors like this. It's likely related to the LLM call, which you could test yourself easily with await llm.acomplete("test") surrounded by a try/catch to catch any obvious errors.
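A minimal sketch of that debugging step (assuming the llama-index-llms-mistralai package and a valid API key; probe is an illustrative name):

import asyncio
import traceback

from llama_index.llms.mistralai import MistralAI

llm = MistralAI(api_key="<your_api_key>")

async def probe() -> None:
    try:
        resp = await llm.acomplete("test")
        print(resp)
    except Exception:
        # Surface the underlying error that asyncio might otherwise swallow
        traceback.print_exc()

asyncio.run(probe())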
Or, don't use Flask and use FastAPI (I'm not 100% sure how well Flask even supports async).
Thanks @logan-markewich, we decided to switch to using llama-index-llms-openailike instead, with another LLM provider, since the issue only occurs with llama-index-llms-mistralai. I understand that Flask is not the ideal place to do async calls, but that setup works for our use case and seems fine in production so far.
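For anyone landing here, the switch looks roughly like this (a sketch; the model name, base URL, and key are placeholders for whatever OpenAI-compatible provider you use):

from llama_index.llms.openai_like import OpenAILike

# Point the OpenAI-compatible client at your provider's endpoint;
# all three values below are placeholders.
llm = OpenAILike(
    model="<model-name>",
    api_base="<https://your-provider/v1>",
    api_key="<your_api_key>",
    is_chat_model=True,
)

resp = llm.complete("Paul Graham is ")
print(resp)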
For the record, it was just the evaluator that was giving trouble. Normal engine chat calls with MistralAI were working like a charm.
Anyway, feel free to close the issue since we are using Flask, which is not the ideal framework for async APIs (and thanks for the FastAPI suggestion, will probably port it over to that if we have more issues).
Bug Description
I'm using the Evaluator within a Flask app.
This works fine with all these LLMs:
But fails with MistralAI: llama-index-llms-mistralai
Version
llama-index-core==0.10.39, llama-index-llms-mistralai==0.1.15
Steps to Reproduce
This is the code I use in my Flask app and the issue happens:
Also the async version fails with the same error:
Relevant Logs/Tracebacks