explodinggradients / ragas

Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
https://docs.ragas.io
Apache License 2.0

run_config missing #1021

Closed: Mtuthuko closed this issue 1 day ago

Mtuthuko commented 4 weeks ago

[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
I got this error when trying to calculate the ragas scores: TypeError: MetricWithLLM.init() missing 1 required positional argument: 'run_config'

Ragas version: 0.1.9
Python version: 3.11

Code to Reproduce

from ragas import evaluate
from langchain_openai.chat_models import AzureChatOpenAI
from langchain_openai.embeddings import AzureOpenAIEmbeddings
from ragas.llms import LangchainLLMWrapper

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    ContextRelevancy,
)

azure_model = AzureChatOpenAI(
    deployment_name="xxxxxx",
    model="xxxxxx",
    api_version="xxxxxx",
    openai_api_type="azure"
)

ragas_azure_model = LangchainLLMWrapper(azure_model)

azure_embeddings = AzureOpenAIEmbeddings(
    deployment="xxxxxx",
    model="xxxxxx",
    api_version="xxxxxx",
    openai_api_type="azure"
)

metrics = [
    faithfulness,
    answer_relevancy,
    ContextRelevancy
]

# attach the Azure LLM wrapper to each metric
for m in metrics:
    setattr(m, 'llm', ragas_azure_model)

# df is a pandas DataFrame with Query, Answers and Context columns
questions = df.Query.values
rag_answers = df.Answers.values
contexts = df.Context.values

from datasets import Dataset

data = {
    "question": questions,
    "answer": rag_answers,
    "contexts": contexts
}

dataset = Dataset.from_dict(data)

result = evaluate(dataset=dataset,
                  metrics=metrics,
                  llm=ragas_azure_model,
                  embeddings=azure_embeddings)

Error trace

(error trace attached as screenshots)

Expected behavior For the code to calculate the ragas scores.

Sudhakar17 commented 2 weeks ago

@Mtuthuko did you resolve this error? I am also facing the exact same problem.

jjmachan commented 1 week ago

hey @Mtuthuko were you able to fix this?

it seems like a small bug in the code - you passed the ContextRelevancy class instead of the pre-instantiated context_relevancy metric, which is why init() gets called without its run_config

from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    # ContextRelevancy,  # old: this is the class
    context_relevancy
)

metrics = [
    faithfulness,
    answer_relevancy,
    context_relevancy
]

this should fix it. let me know if it's still an issue
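For reference, a minimal sketch of how the corrected call could look end to end, assuming the same dataset, ragas_azure_model and azure_embeddings built earlier in this thread (deployment names there are placeholders, not verified values):

from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness, context_relevancy

# use the pre-instantiated metric objects, not the metric classes
metrics = [faithfulness, answer_relevancy, context_relevancy]

# evaluate() assigns the llm/embeddings to metrics that don't already have one,
# so the per-metric setattr loop from the original snippet shouldn't be needed
result = evaluate(
    dataset=dataset,
    metrics=metrics,
    llm=ragas_azure_model,
    embeddings=azure_embeddings,
)
print(result)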

@Sudhakar17 could you share your code snippet and error msg? I'll check it

github-actions[bot] commented 1 day ago

Closing after 8 days of waiting for the additional info requested.