MLGroupJLU / LLM-eval-survey

The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
https://arxiv.org/abs/2307.03109

Suggestion to add an evaluation paper on LLMs in science #9

Closed: taichengguo closed this issue 1 year ago

taichengguo commented 1 year ago

Thanks for your interesting and comprehensive survey.

If possible, please consider adding our evaluation work on LLMs in chemistry, "What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks" (https://arxiv.org/abs/2305.18365), to the list.

Our work establishes a comprehensive benchmark of eight practical chemistry tasks to evaluate LLMs (GPT-4, GPT-3.5, and Davinci-003) on each task in zero-shot and few-shot in-context learning settings. We aim to address the lack of a comprehensive assessment of LLMs in the field of chemistry.

Thanks! 😊

MLGroupJLU commented 1 year ago

Thank you for your interest in our paper.
Your research is valuable and well done, and we plan to include it in our survey.
We will upload the updated version of the paper to arXiv soon; we look forward to your continued attention.

taichengguo commented 1 year ago

Thanks!