MLGroupJLU / LLM-eval-survey

The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models" (https://arxiv.org/abs/2307.03109).

Add a new paper. #3

Closed · Wangpeiyi9979 closed this issue 1 year ago

Wangpeiyi9979 commented 1 year ago

Thank you for your nice survey.

Please consider adding our recent work, Large Language Models are not Fair Evaluators (https://arxiv.org/abs/2305.17926), to the list.

Our research identifies the biases that arise when using an LLM as an evaluator, and we propose two strategies to mitigate them (a minimal sketch of the general idea follows below).

Thanks.😊
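
For readers unfamiliar with the issue, here is a minimal sketch of one mitigation idea in this spirit: score the response pair in both presentation orders and average the results, so that a consistent positional preference cancels out. The `judge` function below is a hypothetical stand-in for an LLM-as-evaluator call, not an API from the paper or any specific library:

```python
from statistics import mean

def judge(question: str, first: str, second: str) -> tuple[float, float]:
    """Hypothetical LLM-as-evaluator call: returns (score_first, score_second)
    on some fixed scale. Replace with a real model call."""
    raise NotImplementedError

def balanced_evaluate(question: str, resp_a: str, resp_b: str) -> tuple[float, float]:
    # Pass 1: A is shown first, B second.
    a1, b1 = judge(question, resp_a, resp_b)
    # Pass 2: order swapped; map the scores back to A and B.
    b2, a2 = judge(question, resp_b, resp_a)
    # Averaging across both orders neutralizes a consistent position bias.
    return mean([a1, a2]), mean([b1, b2])
```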

cyp-jlu-ai commented 1 year ago

> Thank you for your nice survey.
>
> Please consider adding our recent work, Large Language Models are not Fair Evaluators (https://arxiv.org/abs/2305.17926), to the list.
>
> Our research identifies the biases that arise when using an LLM as an evaluator, and we propose two strategies to mitigate them.
>
> Thanks.😊

Thank you for bringing your recent work to our attention. "Large Language Models are not Fair Evaluators" aligns well with the theme of our survey, and we agree that fairness issues in LLM-based evaluation are important to address. We will consider adding it to the list of references and look forward to incorporating your contribution. Thanks again for the suggestion.