nickreich opened this issue 7 years ago (status: Open)
From Craig's email about scoring: "Most of the code I use is part of an R package I developed with Jarad Niemi at Iowa State last fall. You can download the package from GitHub at https://github.com/jarad/FluSight.
If your entries are not already in R you can read in CSVs using the ‘read_entry’ function. The ‘verify_entry’ function can be used to make sure each entry is in the correct format to be scored. You can create a copy of the truth by feeding data from the attached week 28 CSVs into the ‘create_truth’ function, and then expand the values scored as correct using the ‘expand_truth’ function. The ‘score_entry’ function can then be used to score individual entry files."
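For reference, here is a minimal sketch of that workflow in R. The function names (`read_entry`, `verify_entry`, `create_truth`, `expand_truth`, `score_entry`) are the ones Craig mentions from the FluSight package; the submission file name and the arguments passed to `create_truth()` are assumptions and should be checked against the package documentation.

```r
# Install the package from GitHub if needed:
# devtools::install_github("jarad/FluSight")
library(FluSight)

# 1. Read a submission CSV and check that it is in the format the scoring
#    code expects.
entry <- read_entry("EW28-SomeTeam-submission.csv")   # hypothetical file name
verify_entry(entry)

# 2. Build the truth from the observed (week 28) data, then expand the set of
#    values that will be scored as correct (e.g. neighboring weeks/bins).
truth <- create_truth(fluview = TRUE, year = 2016)    # arguments are a guess
expanded_truth <- expand_truth(truth)

# 3. Score an individual entry file against the expanded truth.
scores <- score_entry(entry, expanded_truth)
head(scores)
```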
It seems like we should make the following changes in our function (named something like `get_submissions_via_sampled_trajectories`):
- Consider sending a suggestion to CDC for a revision to the contest guidelines about how rounding is done.
- Something went wrong in Region 6: our models said week 2 was the peak, but CDC said week 52 was the peak. The baseline was 4.1; the EW52 observation was 4.19, the EW01 observation was 4.07, and the others were clearly above 4.1. (A small sketch of how one-decimal rounding can muddy comparisons like these is below.)
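To make the rounding concern concrete, here is a small, hypothetical illustration using the Region 6 numbers quoted above. It only shows how rounding wILI to one decimal place can blur comparisons against the 4.1 baseline; it is not a claim about how CDC actually determined the peak week for this region.

```r
# Region 6 numbers quoted above (illustration only)
baseline <- 4.1
obs <- c(EW52 = 4.19, EW01 = 4.07)

# Unrounded, only EW52 clears the baseline.
obs > baseline

# Rounded to one decimal place (the precision wILI is reported at), EW01
# lands exactly on the baseline, so the answer depends on whether the
# comparison uses > or >= and on whether rounding happens before comparing.
round(obs, 1)
round(obs, 1) >= baseline
```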