explodinggradients / ragas

Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
https://docs.ragas.io
Apache License 2.0

Why is Context Relevancy removed from official documentation? #1330

Open linzzzzzz opened 1 day ago

linzzzzzz commented 1 day ago

[ ] I checked the documentation and related resources and couldn't find an answer to my question.

Your Question: Why is Context Relevancy removed from official documentation?

I can see it in the older version of doc. https://docs.ragas.io/en/v0.1.0/concepts/metrics/context_relevancy.html

But not in the latest doc. https://docs.ragas.io/en/stable/concepts/metrics/context_relevancy.html

Why is that?

Code Examples: This community speaks code. Share your code snippets to help us understand your question better.

Additional context: Anything else you want to share with us?

dosubot[bot] commented 1 day ago

Hello @linzzzzzz! I'm here to help you with any bugs, questions, or contributions you have. Let's dive into the issue you're experiencing!

The "Context Relevancy" metric was removed from the latest version of the Ragas documentation because it has been deprecated. The ContextPrecision class, which was previously used for context relevancy, is now marked as deprecated with the following decorator:

@deprecated(
    since="0.2", removal="0.3", alternative="LLMContextPrecisionWithReference"
)

This indicates that ContextPrecision has been deprecated since version 0.2 and will be removed in version 0.3, with LLMContextPrecisionWithReference suggested as the alternative [1].
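For anyone migrating from the old Context Relevancy page, here is a minimal usage sketch of the suggested alternative. It assumes the ragas 0.2 single-turn API (SingleTurnSample, single_turn_ascore) and a LangChain OpenAI judge model wrapped with LangchainLLMWrapper; the model name and example data are placeholders, not taken from this thread.

import asyncio

from langchain_openai import ChatOpenAI
from ragas.dataset_schema import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMContextPrecisionWithReference

# Judge LLM used to score the retrieved contexts (model choice is arbitrary).
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

# The replacement metric judges each retrieved chunk against a reference answer.
metric = LLMContextPrecisionWithReference(llm=evaluator_llm)

sample = SingleTurnSample(
    user_input="Where is the Eiffel Tower located?",
    reference="The Eiffel Tower is located in Paris.",
    retrieved_contexts=["The Eiffel Tower is located in Paris."],
)

# single_turn_ascore is async, so run it inside an event loop.
score = asyncio.run(metric.single_turn_ascore(sample))
print(score)  # a value between 0 and 1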

To continue talking to Dosu, mention @dosu.

linzzzzzz commented 1 day ago

What you said doesn't explain the "why".

dosubot[bot] commented 1 day ago

The specific reasons for removing the "Context Relevancy" metric from the documentation are not explicitly stated in the available resources. However, the deprecation of the ContextPrecision metric, which was used for context relevancy, suggests a shift towards more advanced metrics like LLMContextPrecisionWithReference. The new metric adds a reference answer for more accurate evaluation, supports prompt customization, and offers reproducibility control, making it a more robust and reliable choice for evaluating context precision [1].
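As a rough batch-evaluation sketch under the same assumptions as above (ragas 0.2 entry points, placeholder data, an arbitrary OpenAI judge model), the practical difference from the old context relevancy metric is that each evaluated sample now carries a ground-truth reference answer:

from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.dataset_schema import EvaluationDataset, SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import LLMContextPrecisionWithReference

evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

# Each sample now includes a reference field, which the judge uses when
# deciding whether a retrieved chunk was actually relevant.
dataset = EvaluationDataset(
    samples=[
        SingleTurnSample(
            user_input="Where is the Eiffel Tower located?",
            retrieved_contexts=["The Eiffel Tower is located in Paris."],
            response="It is in Paris.",
            reference="The Eiffel Tower is located in Paris.",
        )
    ]
)

result = evaluate(dataset, metrics=[LLMContextPrecisionWithReference(llm=evaluator_llm)])
print(result)  # aggregate score; result.to_pandas() breaks it down per sample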

To continue talking to Dosu, mention @dosu.