Open rcruzgar opened 5 days ago
"response" is the generated from your model not the test generator, same for "retrieved_contexts". Your evaluation dataset should have both the fields from test generator user_input | reference_contexts | reference
and added to them values returned from your model: response | retrieved_contexts
It is not a bug, even here in the documentation:https://docs.ragas.io/en/stable/howtos/integrations/_llamaindex/ You have clearly stated the output of the test_set generator. You can easily transform test_set to the expected format, but you need to remember about answers from your model which are necessary to have correct evaluation :)
Hi! I am trying to generate a test set using the following code:
Ragas version: 0.2.3 Python version: 3.11.9
My documents variable is a list of _langchaincore.documents.base.Document. each of them containing a page_content and a metadata={'title'} keys.
However I only obtain the following output columns:
I suppose "reference_contexts" is the "retrieved_contexts" needed by the metrics later on (so I changed the column name), but I don't get the "response" field, needed for example for FactualCorrectness and SemanticSimilarity metrics calculation.
Is there a way to get "response" and not just "reference"? Otherwise I will get a semantic similarity of 100% by duplicating "reference" as "response".
Best regards, Rubén.
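To make the suggested transformation concrete, here is a minimal sketch of combining the test generator's columns with your model's outputs. The `run_rag_pipeline` function and the sample rows are hypothetical placeholders: substitute your actual RAG application and the testset you generated.

```python
# Sketch: build an evaluation dataset from testset-generator output.
# `run_rag_pipeline` is a hypothetical stand-in for your RAG application.

def run_rag_pipeline(question: str) -> dict:
    """Placeholder for your RAG app: returns its answer and retrieved chunks."""
    return {
        "response": f"Model answer to: {question}",
        "retrieved_contexts": [f"Chunk retrieved for: {question}"],
    }

# Columns produced by the test generator: user_input | reference_contexts | reference.
# (Illustrative row, not real generator output.)
testset_rows = [
    {
        "user_input": "What is the capital of France?",
        "reference_contexts": ["Paris is the capital of France."],
        "reference": "Paris",
    },
]

# Evaluation rows = generator fields plus the values your model returns.
eval_rows = []
for row in testset_rows:
    model_out = run_rag_pipeline(row["user_input"])
    eval_rows.append({**row, **model_out})

print(sorted(eval_rows[0].keys()))
# → ['reference', 'reference_contexts', 'response', 'retrieved_contexts', 'user_input']
```

The key point is that "reference" stays as the ground truth from the generator, while "response" comes from actually running each "user_input" through your pipeline; duplicating "reference" into "response" would make answer-comparison metrics trivially perfect.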