Closed — prabham17 closed this issue 2 months ago
@prabham17 I'm not sure what the issue could be since we didn't release anything recently. Could you share the stack trace if possible?
I am facing a similar problem. Here is the description: I am using "mistralai/Mixtral-8x22B-Instruct-v0.1" served by vLLM behind an OpenAI-compatible server, running inference on a cluster of 4 A100 GPUs. The model is not able to evaluate the following ragas metrics successfully and raises the ExceptionInRunner error:
faithfulness,        # works for small inputs; for long inputs, ExceptionInRunner
answer_correctness,  # works for small inputs; for long inputs, ExceptionInRunner
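For reference, a setup like the one described might look roughly as follows. This is a sketch, not the poster's actual code: the endpoint URL, API key, and the single-row dataset are placeholders, and the exact `evaluate` signature varies across ragas versions.

```python
from datasets import Dataset
from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.metrics import faithfulness, answer_correctness

# Point a LangChain chat client at the vLLM OpenAI-compatible server
# (base_url and api_key are placeholders for the local deployment).
llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="mistralai/Mixtral-8x22B-Instruct-v0.1",
)

# Minimal illustrative dataset in the column layout ragas expects.
dataset = Dataset.from_dict({
    "question": ["What engine serves the model?"],
    "answer": ["The model is served by vLLM."],
    "contexts": [["vLLM is an inference and serving engine for LLMs."]],
    "ground_truth": ["vLLM serves the model."],
})

results = evaluate(
    dataset,
    metrics=[faithfulness, answer_correctness],
    llm=llm,
)
print(results)
```

With long inputs and 150 samples, each metric makes many slow generation calls against Mixtral, which is where the per-call timeouts discussed below start to bite.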
The error appears as given below:
I also tried a slight change, setting is_async=True, but got the same result:
It's worth mentioning that the aforementioned metrics run successfully for a smaller set of input samples but fail with the cited error at a bigger sample size. For example, when I experimented earlier with only 5 samples, all three metrics worked just fine; however, when I ran them over 150 samples of data, they failed.
Setting raise_exceptions=False may help.
I did try that, but it did not work. My current fix is to increase the timeout to 600 in ragas/metrics/base.py (setting a high value). It works for the most part, but I still hit the ExceptionInRunner problem occasionally.
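The sketch below illustrates, with plain asyncio, why raising the timeout helps: the runner awaits each metric call with a deadline, and slow generations on long inputs can exceed it, surfacing as a runner exception. This is a generic simulation with sleeps, not ragas's actual runner code. (If your ragas version supports it, passing a run config with a larger timeout to `evaluate` may avoid editing base.py directly.)

```python
import asyncio

# Simulated metric call: a generation that takes `duration` seconds.
async def slow_metric(duration: float) -> float:
    await asyncio.sleep(duration)
    return 1.0

async def run_with_timeout(duration: float, timeout: float):
    """Await the metric with a deadline, as a runner would."""
    try:
        return await asyncio.wait_for(slow_metric(duration), timeout=timeout)
    except asyncio.TimeoutError:
        # In ragas this kind of failure is what surfaces as ExceptionInRunner.
        return None

# A call slower than the deadline fails; the same call with a generous
# deadline (the spirit of bumping the timeout to 600) succeeds.
timed_out = asyncio.run(run_with_timeout(0.2, timeout=0.05))
succeeded = asyncio.run(run_with_timeout(0.2, timeout=1.0))
print(timed_out, succeeded)  # → None 1.0
```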
Hope this was fixed by the PR above.
Let us know if you face any more issues! Sorry about this one.
I am using the ragas evaluate method to run evaluation on a test dataset of 35 samples with ground truth. It completes the evaluation but fails at the last step with this error. I have also added raise_exceptions=False, and I still get the same issue. It was working a week ago; did an update happen recently? I am running on Databricks.