explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

The runner thread which was running the jobs raised an exception. Read the traceback above to debug it. You can also pass `raise_exceptions=False` in case you want to show only a warning message instead (For Ragas Version 0.1.10) #1105

Open mb16biswas opened 3 months ago

mb16biswas commented 3 months ago

Description:

Hi,

I am currently using the latest version of Ragas (0.1.10) and am encountering an error when running it with local LLMs from Ollama. Here is my code:

from datasets import load_dataset
import ragas

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)
from langchain_community.chat_models import ChatOllama
from ragas import evaluate
from langchain_community.embeddings import OllamaEmbeddings

# Load the V2 Amnesty QA dataset and select a small subset for evaluation
amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2", trust_remote_code=True)
amnesty_subset = amnesty_qa["eval"].select(range(2))

df = amnesty_subset.to_pandas()

# Use a local Ollama model for both the judge LLM and the embeddings
langchain_llm = ChatOllama(model="moondream")
langchain_embeddings = OllamaEmbeddings(model="moondream")

# Run the evaluation with four standard RAG metrics
result = evaluate(
    amnesty_subset,
    metrics=[context_precision, faithfulness, answer_relevancy, context_recall],
    llm=langchain_llm,
    embeddings=langchain_embeddings,
)

The Error Message: ExceptionInRunner: The runner thread which was running the jobs raised an exception. Read the traceback above to debug it. You can also pass raise_exceptions=False in case you want to show only a warning message instead.

Current Solution

However, if I downgrade Ragas from 0.1.10 to 0.1.7, the error goes away, so I am reporting it here as a possible regression.

Thank you for your assistance.

dosubot[bot] commented 3 months ago

Hi @mb16biswas! I'm here to help you with any bugs or questions you may have. I can also assist you in becoming a contributor. Let's work on solving this issue together.

To resolve the "ExceptionInRunner" error when using Ragas version 0.1.10 with local LLMs from Ollama, you can pass the parameter raise_exceptions=False to show only a warning message instead of raising an exception. This can help you debug the issue by providing a warning message without stopping the execution.

Here is an example of how you might modify your code to include this parameter:

from datasets import load_dataset
import ragas

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)
from langchain_community.chat_models import ChatOllama
from ragas import evaluate
from langchain_community.embeddings import OllamaEmbeddings

# Loading the V2 dataset
amnesty_qa = load_dataset("explodinggradients/amnesty_qa", "english_v2", trust_remote_code=True)
amnesty_subset = amnesty_qa["eval"].select(range(2))

df = amnesty_subset.to_pandas()

langchain_llm = ChatOllama(model="moondream")
langchain_embeddings = OllamaEmbeddings(model="moondream")

try:
    result = evaluate(
        amnesty_subset,
        metrics=[context_precision, faithfulness, answer_relevancy, context_recall],
        llm=langchain_llm,
        embeddings=langchain_embeddings,
        raise_exceptions=False
    )
except Exception as e:
    print(e)
    # Additional debugging or logging can be done here

This approach will allow you to see a warning message and continue execution, which can be useful for debugging the underlying issue [1][2][3].
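One way to locate the failing rows after such a run (a minimal sketch, assuming the result object supports to_pandas() as shown in the Ragas 0.1.x docs; with raise_exceptions=False, failed samples typically surface as NaN scores rather than exceptions):

scores_df = result.to_pandas()
metric_cols = ["context_precision", "faithfulness", "answer_relevancy", "context_recall"]
# Rows where any metric score is NaN are the ones whose jobs failed
failed_rows = scores_df[scores_df[metric_cols].isna().any(axis=1)]
print(failed_rows)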

To continue talking to Dosu, mention @dosu.

devblend360 commented 3 months ago

Add the following lines to your code; it worked for me:

import nest_asyncio
nest_asyncio.apply()
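For context, nest_asyncio patches the already-running event loop (e.g. inside Jupyter) so that Ragas can schedule its async jobs within it. A minimal sketch of where the patch fits, reusing the setup from the original report:

import nest_asyncio
nest_asyncio.apply()  # must run before evaluate() spawns its async jobs

result = evaluate(
    amnesty_subset,
    metrics=[context_precision, faithfulness, answer_relevancy, context_recall],
    llm=langchain_llm,
    embeddings=langchain_embeddings,
)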

jjmachan commented 3 months ago

@mb16biswas can you try out @devblend360's method, or update to v0.1.11, which basically does the same thing behind the scenes, and let us know if that fixes it?

thanks for the heads-up 🙂!
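For anyone landing here later, a quick way to confirm the installed version before retrying (assuming the package exposes __version__, which the 0.1.x releases do):

import ragas

# The nest_asyncio patch ships in v0.1.11 and later
print(ragas.__version__)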