OpenLMLab / LEval

[ACL'24 Oral] Data and code for L-Eval, a comprehensive evaluation benchmark for long-context language models

Where are the open-ended task datasets used for GPT-4/GPT-3.5 evaluation? #12

Closed · altctrl00 closed this issue 3 months ago

altctrl00 commented 3 months ago

Hi, where can I find the "96-question" and "85+96-question" sets mentioned in Table 4 of your paper?

altctrl00 commented 3 months ago

Are they the samples in LEval-data/Open-ended-tasks whose "evaluation" key is set to "LLM"?

ChenxinAn-fdu commented 3 months ago

Yes!
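
For readers landing on this thread: below is a minimal sketch of how one might collect those samples. The directory name `LEval-data/Open-ended-tasks` and the `"evaluation"` key/value come from the exchange above; the `.jsonl` extension and the one-record-per-line layout are assumptions about how the files are stored.

```python
import json
from pathlib import Path

# Sketch: gather open-ended samples that are judged by an LLM,
# assuming each file under LEval-data/Open-ended-tasks is JSON Lines
# and each record carries an "evaluation" field (per the thread above).
data_dir = Path("LEval-data/Open-ended-tasks")

llm_judged = []
for path in sorted(data_dir.glob("*.jsonl")):
    with path.open(encoding="utf-8") as f:
        for line in f:
            sample = json.loads(line)
            if sample.get("evaluation") == "LLM":
                llm_judged.append(sample)

print(f"Collected {len(llm_judged)} LLM-judged samples")
```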