We discussed in the meeting today that the batch integration feature task looks strange as a ranking based on only 1 metric. While all gene outputs are evaluated by all the other metrics, those metrics are stored in different subtasks. To get a comprehensive picture, there should be one overall ranking of methods that combines information from all subtasks.
This ranking table should:
- Use scaled metric values per subtask
- Take the mean over all metrics computed for a method (even when some metrics are missing because the method is not applicable in every subtask)
- Make it visible in the final table how many metrics were averaged for each method (e.g., by colour coding)
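The aggregation above could be sketched roughly like this (a minimal pandas sketch; the column names and the long-format results table are assumptions for illustration, not the actual benchmark schema):

```python
import pandas as pd

# Hypothetical long-format results: one row per (method, subtask, metric).
results = pd.DataFrame({
    "method":  ["A", "A", "A", "B", "B"],
    "subtask": ["feature", "embedding", "graph", "feature", "embedding"],
    "metric":  ["hvg_overlap", "asw", "ari", "hvg_overlap", "asw"],
    "value":   [0.9, 0.6, 0.7, 0.5, 0.8],
})

# 1. Min-max scale each metric within its subtask, across methods.
def scale(group):
    lo, hi = group.min(), group.max()
    # If all methods score the same, fall back to a neutral 0.5.
    return (group - lo) / (hi - lo) if hi > lo else group * 0 + 0.5

results["scaled"] = results.groupby(["subtask", "metric"])["value"].transform(scale)

# 2. Mean over whatever metrics exist for each method, plus a count
#    of how many metrics contributed (for the colour coding).
ranking = (
    results.groupby("method")["scaled"]
    .agg(overall_score="mean", n_metrics="count")
    .sort_values("overall_score", ascending=False)
)
print(ranking)
```

Methods missing from a subtask simply contribute fewer rows, so the mean is taken only over the metrics that were actually computed, and `n_metrics` makes that visible.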
This is ideally solved by adding the hierarchical structure of the task-subtask folders to the website, and doing this aggregation there.
@scottgigante-immunai @rcannood @danielStrobl any thoughts?