Closed lkra closed 1 year ago
Thanks for submitting a task to GenBench. Please be aware that you're submitting the data files of the task in your PR. For the final submission, you will need to host the dataset files somewhere else (preferably as a HuggingFace dataset).
Also, as a side note, the preparation_strategies field in the config.jsonnet is left empty, which will break the framework. This needs to be resolved in the final submission as well.
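To illustrate, a minimal sketch of what a filled-in field could look like, assuming the prompt_based_testing strategy and prompt_builder keys from the genbench-cbt example configs (the field names and prompt strings below are assumptions for illustration, not this task's final config):

```jsonnet
{
  // ... other task fields (name, description, authors, data_source, etc.) ...

  // Assumed layout, following the genbench-cbt example configs:
  preparation_strategies: {
    prompt_based_testing: {
      prompt_builder: {
        instruction_zero_shot: 'Answer the following question.',
        input_prefix: 'Q: ',
        output_prefix: 'A: ',
      },
    },
  },
}
```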
Hello!
We are getting quite close to the deadline (September 1, 11:59PM anywhere on earth), which is why I wanted to remind you that your PR still needs some attention: please double-check the automated checks that failed, and take the message that Amir posted last week into account.
Please don't forget to submit your accompanying paper to Openreview via https://openreview.net/group?id=GenBench.org/2023/Workshop by September 1.
Good luck finalising your PR and paper, and feel free to tag us if you have questions.
Cheers, Verna, on behalf of the GenBench team
Cross-lingual Local QA
Popular QA datasets (e.g., TriviaQA) predominantly probe for Anglo-specific knowledge. Our Cross-lingual Local Question Answering (QA) task is therefore designed to measure the presence of local and culture-specific knowledge in Large Language Models (LLMs), as well as its generalisation across languages. For this purpose, we created a hand-crafted dataset containing question templates locally adapted to seven localities: Ethiopia, The Netherlands, UK, Germany, India, Mexico, and Spain. These questions were then translated into the corresponding languages ('am-ET', 'nl-NL', 'en-GB', 'de-DE', 'hi-IN', 'es-MX', 'es-ES'), resulting in 49 QA pairs per general template (7 localities × 7 languages). The effort extends beyond the traditional Anglo-centric focus, aiming to offer a broader and more inclusive examination of LLMs' ability to handle localised information from various cultural contexts.
Authors
Checklist:
- I have tested the task with the `genbench-cli test-task` tool.