prio-data / prediction_competition_2023

Code for generating benchmark models and evaluation scripts for the 2023 VIEWS prediction competition

Hierarchical reconciliation #20

Open kvelleby opened 1 year ago

kvelleby commented 1 year ago

We have a competition with independent models providing predictions at two different levels of aggregation. It would be interesting to see how well the (best) country-level predictions compare to the (best) grid-level predictions aggregated to the country level. Which models are best calibrated to the country-level aggregated outcomes (I would assume the country-level models), and what are the differences in calibration?

Adding methods to automatically cast grid-level predictions to the country level would make it very easy to compare models across aggregation levels.
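A minimal sketch of such a casting step, assuming predictions come as a long-format pandas DataFrame with hypothetical columns `month_id`, `country_id`, `draw` (sample index), and `outcome` (the grid-to-country mapping is assumed to already be joined in as `country_id`; column names are illustrative, not the repo's actual schema):

```python
import pandas as pd

def aggregate_to_country(grid_preds: pd.DataFrame) -> pd.DataFrame:
    """Cast grid-level predictions to the country level by summing
    grid-cell outcomes within each country-month, per sample draw."""
    return (
        grid_preds
        .groupby(["month_id", "country_id", "draw"], as_index=False)["outcome"]
        .sum()
    )
```

Summing per draw keeps the full predictive distribution at the country level, so the aggregated samples can be scored with the same evaluation scripts as native country-level submissions.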

This means, however, that we would also need to be able to evaluate country-level models using only the subset of countries covered by the grid-level analysis.
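The subsetting step could be as simple as the following sketch, again assuming a hypothetical `country_id` column in both frames:

```python
import pandas as pd

def restrict_to_grid_countries(country_preds: pd.DataFrame,
                               grid_preds: pd.DataFrame) -> pd.DataFrame:
    """Keep only country-level predictions for countries that appear
    in the grid-level analysis, so both are scored on the same support."""
    covered = grid_preds["country_id"].unique()
    return country_preds[country_preds["country_id"].isin(covered)]
```

Evaluating both model classes on this common set of countries keeps the calibration comparison fair, since countries outside the grid extent would otherwise inflate or deflate the country-level scores.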