Hi,
When creating a custom task.py, the test_task.py script fails unless the task_type and evaluation_metrics fields in the config file are set to options that already exist in the framework.
The task_type issue can be solved by adding a new enum value to the TaskType class in api.py.
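For illustration, here is a minimal sketch of that workaround, assuming TaskType is a standard Python Enum; the existing member names below are placeholders (see api.py for the real ones), and CUSTOM_TASK is the hypothetical new value:

```python
from enum import Enum

class TaskType(Enum):
    # Existing members (names here are illustrative placeholders).
    FREE_FORM = "free_form"
    MULTIPLE_CHOICE = "multiple_choice"

    # Workaround: register a new value so the config's task_type
    # field can reference it.
    CUSTOM_TASK = "custom_task"
```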
The evaluation_metrics issue is harder: EvaluationMetricConfig raises an AssertionError if 'hf_id' or 'best_score' is None. So if a custom task has no corresponding HuggingFace metric (which is the case in our submission), there is currently no way to add a new metric.
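To make the failure concrete, the check behaves roughly like the sketch below. This is my reconstruction of the validation, not the exact source; only the field names hf_id and best_score come from the config schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvaluationMetricConfig:
    hf_id: Optional[str] = None         # HuggingFace metric id, e.g. "exact_match"
    best_score: Optional[float] = None  # best attainable value of the metric

    def __post_init__(self):
        # These are the checks a custom metric trips over:
        # with no HuggingFace counterpart, hf_id stays None.
        assert self.hf_id is not None, "hf_id must not be None"
        assert self.best_score is not None, "best_score must not be None"

# A custom metric with no HuggingFace id fails immediately:
# EvaluationMetricConfig(hf_id=None, best_score=1.0)  # -> AssertionError
```

Relaxing these assertions, or adding a path for locally defined metric implementations, would make custom metrics possible.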