Closed WanFeng123313 closed 4 weeks ago
Hi author, I tested your LightRAG on some of my documents and I am very satisfied with the results. However, I see that your evaluation is performed on specific datasets. How can I evaluate it on my own documents to produce multiple evaluation metrics?

There are a few options. First, if you have ground truth, you can use it directly for evaluation. Second, you can use an LLM to generate corresponding queries and ground truth for your documents (though generating ground truth can be challenging for queries that need to consider the entire dataset). Third, if building ground truth is difficult, you can follow the steps in the reproduce section to generate queries for your data and then perform one-to-one comparisons against other models.
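As a rough illustration of the second and third options, here is a minimal Python sketch (not part of LightRAG). It assumes the OpenAI Python SDK with an `OPENAI_API_KEY` set; the model name, prompts, and the example judging criteria (comprehensiveness, diversity, empowerment) are placeholders you would adapt to your own setup. It generates evaluation queries from a description of your corpus and then runs a one-to-one LLM-as-judge comparison between two systems' answers:

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; substitute whatever LLM you use for generation/judging


def generate_queries(corpus_description: str, n: int = 5) -> list[str]:
    """Ask an LLM to propose evaluation queries for a corpus, similar in spirit
    to the query-generation step in the reproduce section."""
    prompt = (
        f"Given the following description of a document collection, write {n} "
        "high-level questions that require understanding the collection as a whole.\n\n"
        f"{corpus_description}\n\nReturn one question per line."
    )
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return [q.strip() for q in resp.choices[0].message.content.splitlines() if q.strip()]


def judge_pair(query: str, answer_a: str, answer_b: str) -> dict:
    """One-to-one LLM-as-judge comparison of two systems' answers on a few
    example criteria; returns a dict of per-criterion winners."""
    prompt = (
        "You are an impartial judge. Compare the two answers to the query below on "
        "comprehensiveness, diversity, and empowerment, then pick an overall winner.\n"
        f"Query: {query}\n\nAnswer A:\n{answer_a}\n\nAnswer B:\n{answer_b}\n\n"
        'Respond as JSON: {"comprehensiveness": "A|B", "diversity": "A|B", '
        '"empowerment": "A|B", "overall": "A|B"}'
    )
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # keep the judge output parseable
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    # Hypothetical usage: plug in answers produced by LightRAG and a baseline system.
    queries = generate_queries("A collection of internal engineering design documents.")
    verdict = judge_pair(queries[0], "answer from LightRAG", "answer from baseline")
    print(queries[0], verdict)
```

Aggregating the per-criterion winners over all generated queries gives you win rates against the baseline; if you do have ground truth (option one), you can compute answer-accuracy metrics against it directly instead.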
Thank you for your answer!