cognizant-ai-labs / covid-xprize

Open-source repository containing examples and documentation for the Cognizant XPRIZE Pandemic Response Challenge

Refine predictions validation #37

Closed ofrancon closed 4 years ago

ofrancon commented 4 years ago

Following a discussion with @EKMeyerson: we need to validate the Predictors' predictions in a few more ways.

Note: in order to work, the *ip.csv file should contain ALL NPIs since 2020-01-01, to guarantee NO GAP in NPIs. Call 2020-01-01 the inception date. Hopefully these validations make the "contract" clearer for participants. We could provide these validations as some kind of unit tests.

Note: cases are NOT provided by the API's params. Models have to store locally whatever data they can before the cut-off date, aka the submission date and their loss of internet connection.
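A minimal sketch of what such a unit-test-style gap check could look like, assuming the IP file uses the usual `CountryName` / `RegionName` / `Date` columns (the file path is just one of the examples below):

```python
import pandas as pd

INCEPTION_DATE = "2020-01-01"

def check_ip_has_no_gaps(ip_file, end_date):
    """Check that the IP file has one NPI row per region per day,
    from the inception date up to end_date, with no gaps."""
    df = pd.read_csv(ip_file, parse_dates=["Date"])
    # Country-level rows typically have an empty RegionName
    df["RegionName"] = df["RegionName"].fillna("")
    expected = pd.date_range(INCEPTION_DATE, end_date, freq="D")
    for (country, region), group in df.groupby(["CountryName", "RegionName"]):
        missing = expected.difference(group["Date"])
        assert missing.empty, (
            f"{country}/{region}: {len(missing)} day(s) of NPIs missing, "
            f"first missing date {missing[0].date()}")

check_ip_has_no_gaps("../../validation/data/2020-01-01_2020-08-04_ip.csv",
                     "2020-08-04")
```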

  1. 4 days right after the "submission date", which means the model has access to the data up to start_date - 1. We already have a similar check, except that the IP file should contain all NPIs since inception:

    !python predict.py -s 2020-08-01 -e 2020-08-04 -ip ../../validation/data/2020-01-01_2020-08-04_ip.csv
  2. 1 month in the future. For instance:

    !python predict.py -s 2021-01-01 -e 2021-01-31 -ip ../../validation/data/2020-01-01_2021-01-31_ip.csv
  3. 1 month in the past, but with different NPIs (counterfactuals)

    !python predict.py -s 2020-04-01 -e 2020-04-30 -ip ../../validation/data/2020-01-01_2020-04-30_ip.csv

    That can give us interesting counterfactuals. For instance, what would have happened if each NPI had been +1 stricter? What if -1 (less strict)? => Interesting for qualitative checks. Also to explain things like: 70% of the predictors say cases would have been 50% lower if NPIs had been +1 stricter, 20% say 75% lower, 10% say 25% higher (for instance). A sketch of how such counterfactual IP files could be generated is included after this list.

  4. 6 months in the future, assuming 6 months is our maximum prediction horizon. Maybe explicitly say 180 days max. Maybe "range" rather than "horizon".

    !python predict.py -s 2020-08-01 -e 2021-01-28 -ip ../../validation/data/2020-01-01_2021-01-28_ip.csv

    Rationale: the IP file contains the actual IPs as long as they are known, and after that some scenario such as 'frozen' NPIs. I'd like to use that. Note: we also need to validate that the prediction is done under a time limit of 1 hour. This one hour is totally arbitrary for the moment. We should discuss what it corresponds to (in terms of sandbox seconds per region per day of prediction, for instance). See the time-limit sketch after this list.

  5. Single region

    !python predict.py -s 2020-08-01 -e 2020-08-04 -ip ../../validation/data/2020-01-01_2020-08-04_Italy_ip.csv

    The Italy IP file would contain NPIs for Italy only, and we should validate that we get predictions for Italy only (see the coverage-check sketch after this list).

  6. Multiple regions

    !python predict.py -s 2020-08-01 -e 2020-08-04 -ip ../../validation/data/2020-01-01_2020-08-04_USA_ip.csv

    Here the USA IP file would contain the USA plus the 50 states.
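For item 3, a sketch of how a counterfactual IP file could be generated from the historical one by shifting every NPI one level stricter or less strict. The NPI column names and maximum values below follow the OxCGRT-style schema used in the challenge data; treat them as assumptions and adjust to the actual columns:

```python
import pandas as pd

# Assumed NPI columns and their maximum levels (OxCGRT-style); adjust to the
# actual schema used in the challenge data.
NPI_MAX_VALUES = {
    "C1_School closing": 3,
    "C2_Workplace closing": 3,
    "C3_Cancel public events": 2,
    "C4_Restrictions on gatherings": 4,
    "C5_Close public transport": 2,
    "C6_Stay at home requirements": 3,
    "C7_Restrictions on internal movement": 2,
    "C8_International travel controls": 4,
    "H1_Public information campaigns": 2,
    "H2_Testing policy": 3,
    "H3_Contact tracing": 2,
    "H6_Facial Coverings": 4,
}

def shift_npis(ip_file, out_file, delta):
    """Write a counterfactual IP file with every NPI shifted by `delta`
    levels, clipped to each NPI's valid [0, max] range."""
    df = pd.read_csv(ip_file)
    for col, max_val in NPI_MAX_VALUES.items():
        if col in df.columns:
            df[col] = (df[col] + delta).clip(lower=0, upper=max_val)
    df.to_csv(out_file, index=False)

# +1 stricter and -1 less strict scenarios for the April 2020 window
shift_npis("2020-01-01_2020-04-30_ip.csv", "2020-04_npis_plus1_ip.csv", +1)
shift_npis("2020-01-01_2020-04-30_ip.csv", "2020-04_npis_minus1_ip.csv", -1)
```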
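For item 4, the time-limit check could be as simple as running predict.py under a wall-clock budget. This is a sketch, not the official harness; the `-o` output flag is assumed here so the result can then be inspected:

```python
import subprocess
import time

TIME_LIMIT_SECONDS = 60 * 60  # 1 hour, arbitrary for now

def run_with_time_limit(start_date, end_date, ip_file, output_file):
    """Run predict.py and fail if it exceeds the time limit."""
    start = time.time()
    subprocess.run(
        ["python", "predict.py",
         "-s", start_date, "-e", end_date,
         "-ip", ip_file, "-o", output_file],
        check=True,
        timeout=TIME_LIMIT_SECONDS)
    elapsed = time.time() - start
    print(f"Prediction finished in {elapsed:.0f}s (limit {TIME_LIMIT_SECONDS}s)")

run_with_time_limit("2020-08-01", "2021-01-28",
                    "../../validation/data/2020-01-01_2021-01-28_ip.csv",
                    "predictions.csv")
```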
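For items 5 and 6 (and, more generally, for the date windows in items 1 and 2), a sketch of a coverage check: the predictions should cover exactly the regions present in the IP file and exactly the requested date range. Column names are assumed to follow the standard CountryName / RegionName / Date / PredictedDailyNewCases output format, and `predictions.csv` is a hypothetical output file:

```python
import pandas as pd

def check_predictions_cover_ip(pred_file, ip_file, start_date, end_date):
    """Check that the predictions cover exactly the regions present in the
    IP file, and exactly the requested date range -- no more, no less."""
    preds = pd.read_csv(pred_file, parse_dates=["Date"])
    ip = pd.read_csv(ip_file, parse_dates=["Date"])
    for df in (preds, ip):
        df["RegionName"] = df["RegionName"].fillna("")

    ip_regions = set(zip(ip["CountryName"], ip["RegionName"]))
    pred_regions = set(zip(preds["CountryName"], preds["RegionName"]))
    assert pred_regions == ip_regions, (
        f"Region mismatch: {pred_regions ^ ip_regions}")

    expected_dates = set(pd.date_range(start_date, end_date, freq="D"))
    for (country, region), group in preds.groupby(["CountryName", "RegionName"]):
        assert set(group["Date"]) == expected_dates, (
            f"{country}/{region}: predictions do not cover exactly "
            f"{start_date}..{end_date}")

# Item 5: the Italy-only IP file should yield predictions for Italy only
check_predictions_cover_ip("predictions.csv",
                           "../../validation/data/2020-01-01_2020-08-04_Italy_ip.csv",
                           "2020-08-01", "2020-08-04")
```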

ofrancon commented 4 years ago

@babakatwork @ristopm @dsargent @EKMeyerson let us know what you think about these validations. I think they should end up in the guidelines in one form or another. That should make expectations clearer for participants. Things like the maximum prediction horizon (6 months) and the time limit (1 hour) are arbitrary. We should discuss them, but we should provide such numbers in the guidelines. These requirements might be challenging, but... it's an XPRIZE after all.

ofrancon commented 4 years ago

One validation we dropped: checking that the predicted daily new cases are not larger than the region's population. Participants should realize that themselves. They'll obviously lose points (credibility, really) if their model 'explodes' in such a way.
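Even though it was dropped from the official checks, this sanity check is trivial for participants to run themselves. A sketch, assuming a hypothetical `populations.csv` with CountryName, RegionName and Population columns (not part of the challenge data):

```python
import pandas as pd

def check_cases_below_population(pred_file, population_file):
    """Flag any region where predicted daily new cases exceed the population."""
    preds = pd.read_csv(pred_file)
    pops = pd.read_csv(population_file)  # hypothetical: CountryName, RegionName, Population
    for df in (preds, pops):
        df["RegionName"] = df["RegionName"].fillna("")
    merged = preds.merge(pops, on=["CountryName", "RegionName"], how="left")
    exploded = merged[merged["PredictedDailyNewCases"] > merged["Population"]]
    if not exploded.empty:
        print("Model 'explodes' for:")
        print(exploded[["CountryName", "RegionName", "Date"]]
              .drop_duplicates()
              .to_string(index=False))
```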

ofrancon commented 4 years ago

Also note that none of our example models passes such validation checks at the moment.

ofrancon commented 4 years ago

Revisiting validation after a few changes were made. The most important scenario to test is:

Change to the validation checks: