neulab / explainaboard_web


Benchmark containing undefined datasets cannot open #553

Open qjiang002 opened 1 year ago

qjiang002 commented 1 year ago

Issue from PR #540

Load this benchmark: https://dev.explainaboard.inspiredco.ai/benchmark?id=globalbench_ner

I got this error when loading this benchmark:

```
[0]   File "/Users/jiangqi/Desktop/Capstone/explainaboard_web/backend/src/gen/explainaboard_web/impl/db_utils/benchmark_db_utils.py", line 187, in <setcomp>
[0]     (x.dataset.dataset_name, x.dataset.sub_dataset, x.dataset.split)
[0] AttributeError: 'NoneType' object has no attribute 'dataset_name'
```

This happens because the benchmark tries to find all NER systems with `'system_query': {'task_name': 'named-entity-recognition'}`, but some of those systems use an undefined/custom dataset, so their `dataset` field is `None`. One way to deal with this is to ignore systems with undefined datasets when building the benchmark, as sketched below.
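For illustration, here is a minimal sketch of how the failing set comprehension could guard against `None` datasets. The surrounding variable names (`systems`, `dataset_ids`) are assumptions for the example, not the actual code in `benchmark_db_utils.py`:

```python
# Hypothetical sketch: collect (dataset_name, sub_dataset, split) tuples,
# skipping systems whose dataset is undefined/custom (i.e. dataset is None),
# so the comprehension never dereferences a None dataset.
dataset_ids = {
    (x.dataset.dataset_name, x.dataset.sub_dataset, x.dataset.split)
    for x in systems
    if x.dataset is not None  # ignore undefined/custom datasets
}
```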

neubig commented 1 year ago

Yes, custom datasets should be ignored in benchmarks.