Closed: qjiang002 closed this issue 1 year ago.
This is actually a latent problem revealed by recent changes: the final report carries no explicit mapping between analysis levels and their performances.
I think it would be better to give every metric a unique name:
```python
default_metric_configs: dict[str, MetricConfig] = {
    "example_foo": FooConfig(...),
    "block_foo": FooConfig(...),
}
```
A specific analysis level name is then used to choose a set of metrics:
```python
level_to_metrics: dict[str, list[str]] = {
    "example": ["example_foo", ...],
    "block": ["block_foo", ...],
}

return {k: default_metric_configs[k] for k in level_to_metrics[level]}
```
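For concreteness, here is a minimal self-contained sketch of that scheme; `MetricConfig` and `FooConfig` below are simplified stand-ins rather than the real classes, and `metric_configs_for_level` is a hypothetical wrapper around the `return` above:

```python
from dataclasses import dataclass


@dataclass
class MetricConfig:
    """Hypothetical stand-in for the real MetricConfig base class."""
    name: str


@dataclass
class FooConfig(MetricConfig):
    """Hypothetical concrete metric config."""


# Every metric config gets a globally unique name, prefixed by its level.
default_metric_configs: dict[str, MetricConfig] = {
    "example_foo": FooConfig(name="example_foo"),
    "block_foo": FooConfig(name="block_foo"),
}

# Each analysis level selects its metrics by their unique names.
level_to_metrics: dict[str, list[str]] = {
    "example": ["example_foo"],
    "block": ["block_foo"],
}


def metric_configs_for_level(level: str) -> dict[str, MetricConfig]:
    return {k: default_metric_configs[k] for k in level_to_metrics[level]}


print(metric_configs_for_level("example"))
# {'example_foo': FooConfig(name='example_foo')}
```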
Anyway, I will fix this by changing `Result.overall` to a dict.
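Roughly, the change looks like this (a sketch with made-up classes and values, not the actual `Result` code):

```python
from dataclasses import dataclass


@dataclass
class Performance:
    """Hypothetical stand-in for the real Performance class."""
    value: float


# Before: an order-dependent list in which two levels reusing the name "F1"
# are distinguishable only by position.
overall_list: list[tuple[str, Performance]] = [
    ("F1", Performance(0.92)),  # example-level F1
    ("F1", Performance(0.85)),  # block-level F1
]

# After: a nested dict keyed by analysis level and then by metric name, so
# the level-to-performance mapping is explicit in the report.
overall_dict: dict[str, dict[str, Performance]] = {
    "example": {"F1": Performance(0.92)},
    "block": {"F1": Performance(0.85)},
}
```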
I found that some of the meta-analysis code cannot be fixed quickly, since it relies heavily on the order of the original list.
I think the `meta_analyses` directory is not tested appropriately at all, and it does not work at the moment.
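A contrived illustration of that order dependence (not the actual meta-analysis code): pairing results across systems by position breaks as soon as the list orders diverge:

```python
# Meta-analysis-style code that pairs metrics across systems by position.
sys1_overall = [("F1", 0.92), ("Accuracy", 0.88)]
sys2_overall = [("Accuracy", 0.90), ("F1", 0.85)]  # same metrics, new order

# Zipping by position silently compares F1 against Accuracy here.
for (name1, v1), (name2, v2) in zip(sys1_overall, sys2_overall):
    print(name1, v1, "vs", name2, v2)
```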
@neubig
I proposed #534, which doesn't include fixes for `meta_analysis`.
Some tasks may use the same metric at different analysis levels. Although the metric functions at the different levels are different, they share the same name, which leads to duplicate metric names in the report's overall performance. NER and argument pair extraction (APE) are such tasks.
This may cause problems when sorting systems by metric score. When changing `list[(name, thing)]` to `dict[name] = thing` in issue #491, analysis levels should sit one level above metric names.
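As an illustration of how the nested structure disambiguates sorting (the `systems` data and its shape here are assumptions for this sketch):

```python
# Each system's report maps analysis level -> metric name -> score.
systems: dict[str, dict[str, dict[str, float]]] = {
    "system_a": {"example": {"F1": 0.92}, "block": {"F1": 0.80}},
    "system_b": {"example": {"F1": 0.88}, "block": {"F1": 0.85}},
}

# Sort systems by example-level F1; duplicate metric names across levels
# are no longer ambiguous because the level is part of the key path.
ranking = sorted(
    systems.items(),
    key=lambda item: item[1]["example"]["F1"],
    reverse=True,
)
print([name for name, _ in ranking])  # ['system_a', 'system_b']
```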
- NER default metrics
- NER analysis report
- APE default metrics
- APE analysis report