RUCAIBox / LLMSurvey

The official GitHub page for the survey paper "A Survey of Large Language Models".
https://arxiv.org/abs/2303.18223
9.64k stars · 745 forks

Add a new paper about LLMs as evaluators #63

Closed: Wangpeiyi9979 closed this issue 10 months ago

Wangpeiyi9979 commented 10 months ago

Thank you for your excellent survey.

Please consider adding our work: Large Language Models are not Fair Evaluators (https://arxiv.org/abs/2305.17926), to the list.

Our research identified a positional bias when using LLMs as evaluators, and we proposed three strategies to alleviate it.
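For context, a common mitigation for this kind of positional bias, in the spirit of the calibration strategies the paper proposes, is to query the judge with both answer orderings and average the resulting scores, so a fixed first-position bonus cancels out. A minimal sketch, where `judge` is a hypothetical placeholder for the actual LLM call:

```python
def judge(question, answer_a, answer_b):
    # Hypothetical stand-in for an LLM judge that scores two answers.
    # It is a deterministic placeholder biased toward the first position,
    # mimicking the positional bias reported in the paper.
    base = {"good": 8.0, "ok": 6.0}
    return base[answer_a] + 0.5, base[answer_b]  # +0.5 = first-position bonus

def balanced_eval(question, answer_1, answer_2):
    """Score each answer in both orders and average, canceling position bias."""
    s1a, s2a = judge(question, answer_1, answer_2)  # order (1, 2)
    s2b, s1b = judge(question, answer_2, answer_1)  # swapped order (2, 1)
    return (s1a + s1b) / 2, (s2a + s2b) / 2

scores = balanced_eval("Q", "good", "ok")
print(scores)  # the first-position bonus is split evenly across both answers
```

With the placeholder judge above, the averaged scores differ by exactly the unbiased gap (2.0), while a single-order query would inflate whichever answer happens to come first.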

Thanks.😊

wxl1999 commented 10 months ago

Thanks for suggesting your paper. We will consider adding it in a future version. For the GitHub update, you may directly create a pull request to the appropriate section.

BTW, we have read your paper and found that it also falls within the scope of LLMs. We suggest you discuss or cite our survey paper as a reference for a general introduction to LLMs.

Wangpeiyi9979 commented 10 months ago

Thank you, we will ensure its inclusion in our upcoming revision.😊