acl-org / acl-anthology

Data and software for building the ACL Anthology.
https://aclanthology.org
Apache License 2.0

Updating Paper Metadata for 2024.eacl-demo.23 #3177

Closed by firojalam 3 weeks ago

firojalam commented 6 months ago

Confirm that this is a metadata correction

Anthology ID

2024.eacl-demo.23

Type of Paper Metadata Correction

Correction to Author Name(s)

Correction to Paper Title

No response

Correction to Paper Abstract

No response

Correction to Author Name(s)

@inproceedings{dalvi-etal-2024-llmebench,
    title = "{LLM}e{B}ench: A Flexible Framework for Accelerating {LLM}s Benchmarking",
    author = "Dalvi, Fahim and
      Hasanain, Maram and
      Boughorbel, Sabri and
      Mousi, Basel and
      Abdaljalil, Samir and
      Nazar, Nizi and
      Abdelali, Ahmed and
      Chowdhury, Shammur Absar and
      Mubarak, Hamdy and
      Ali, Ahmed and
      Hawasly, Majd and
      Durrani, Nadir and
      Alam, Firoj",
    editor = "Aletras, Nikolaos and
      De Clercq, Orphee",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = mar,
    year = "2024",
    address = "St. Julians, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-demo.23",
    pages = "214--222",
    abstract = "The recent development and success of Large Language Models (LLMs) necessitate an evaluation of their performance across diverse NLP tasks in different languages. Although several frameworks have been developed and made publicly available, their customization capabilities for specific tasks and datasets are often complex for different users. In this study, we introduce the LLMeBench framework, which can be seamlessly customized to evaluate LLMs for any NLP task, regardless of language. The framework features generic dataset loaders, several model providers, and pre-implements most standard evaluation metrics. It supports in-context learning with zero- and few-shot settings. A specific dataset and task can be evaluated for a given LLM in less than 20 lines of code while allowing full flexibility to extend the framework for custom datasets, models, or tasks. The framework has been tested on 31 unique NLP tasks using 53 publicly available datasets within 90 experimental setups, involving approximately 296K data points. We open-sourced LLMeBench for the community (https://github.com/qcri/LLMeBench/) and a video demonstrating the framework is available online (https://youtu.be/9cC2m{_}abk3A).",
}
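
For reference, one way to sanity-check an author field like the one above is to parse the entry and print the names one per line, so the record can be diffed against the camera-ready author list. A minimal sketch using the bibtexparser package (its v1 API; the filename is an assumption, not part of this report):

import re

import bibtexparser  # pip install bibtexparser

# Parse the BibTeX entry saved from the Anthology page
# (the filename here is hypothetical).
with open("2024.eacl-demo.23.bib") as f:
    db = bibtexparser.load(f)

entry = db.entries[0]
# BibTeX separates author names with "and"; split on the separator,
# tolerating the line breaks around it.
for author in re.split(r"\s+and\s+", entry["author"]):
    print(author.strip())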

anthology-assist commented 5 months ago

@firojalam Could you highlight the missing or incorrect author names?

firojalam commented 5 months ago

@anthology-assist, the following author names are missing:

Hawasly, Majd and Durrani, Nadir and Alam, Firoj

Thank you

mjpost commented 3 months ago

This correction was made to the wrong paper. Can we please carefully check the author lists for the following two papers?
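
A reproducible way to pull the current author lists is via the acl-anthology Python package; the sketch below assumes its documented API (Anthology.from_repo(), get(), and the authors attribute) and is not part of the original thread:

from acl_anthology import Anthology  # pip install acl-anthology-py

# Build the Anthology from the official data repository;
# this clones acl-org/acl-anthology on first use.
anthology = Anthology.from_repo()

# Print the author list currently on record; repeat with the ID
# of the other affected paper.
paper = anthology.get("2024.eacl-demo.23")
for spec in paper.authors:
    # Each item is a name specification whose Name carries the
    # first/last fields from the Anthology XML.
    print(f"{spec.name.last}, {spec.name.first}")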

firojalam commented 3 months ago

Dear Matt, thanks for your email. Unfortunately, the bib entries are now incorrect for both of our papers.

Regards,
Firoj


anthology-assist commented 3 weeks ago

This issue has been resolved.