prio-data / prediction_competition_2023

Code for generating benchmark models and evaluation scripts for the 2023 VIEWS prediction competition

Produce 'outcome' instead of 'ged_sb' in the output parquet files #15

Closed: noorains closed this issue 1 year ago

noorains commented 1 year ago

Change the save_actuals() function in BenchmarkModels.py to use actuals['outcome'] instead of actuals['ged_sb'].

[screenshot of the proposed change attached]
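A minimal sketch of what the proposed change might look like, assuming save_actuals() receives a pandas DataFrame and writes it to parquet; the function signature and variable names here are assumptions based on the issue text, not the repository's actual code:

```python
import pandas as pd

def save_actuals(actuals: pd.DataFrame, path: str) -> None:
    # Rename the dependent-variable column so the output parquet file
    # exposes 'outcome' rather than 'ged_sb' to downstream evaluation scripts.
    actuals = actuals.rename(columns={"ged_sb": "outcome"})
    actuals.to_parquet(path)
```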

kvelleby commented 1 year ago

The benchmark code has been moved out of this repository.