I think both avenues need to be pursued. The first would address the (vaguely stated) hypothesis that choosing forecasts based on training-data allocation scores (rather than post hoc on realized scores) leads to higher expected utility of forecast-informed allocation decisions than choosing them based on, say, WIS. The second would address how any inferences about the first type of hypothesis depend on larger-scale, epidemic-phase-type characteristics. But both avenues would probably need the code to be sped up by recycling allocation levels as initial iterates for nearby constraint levels, as I mentioned a while back (roughly the idea sketched below)...
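To make the warm-starting idea concrete, here is a minimal sketch (not the package's actual interface): `solve_over_constraints()`, `expected_loss()`, and the soft-constraint penalty are all made up for illustration, and the real allocation routine would differ. The point is just that, for a sorted grid of constraint levels, the solution at one level is rescaled and reused as the initial iterate for the next, nearby level.

```r
# Hypothetical sketch of warm-starting an allocation optimization across a
# grid of constraint levels K (assumes positive K values).
solve_over_constraints <- function(K_grid, expected_loss, n_sites) {
  K_grid <- sort(K_grid)
  prev <- rep(K_grid[[1]] / n_sites, n_sites)   # even split at the smallest K as the first start
  results <- vector("list", length(K_grid))
  for (i in seq_along(K_grid)) {
    K <- K_grid[[i]]
    # rescale the previous solution to satisfy the new constraint and use it
    # as the initial iterate -- nearby K values should have nearby optima
    init <- prev * (K / sum(prev))
    fit <- optim(
      par    = init,
      # crude soft penalty standing in for the budget constraint sum(x) <= K
      fn     = function(x) expected_loss(x) + 1e6 * max(0, sum(x) - K)^2,
      method = "L-BFGS-B",
      lower  = rep(0, n_sites)
    )
    prev <- fit$par
    results[[i]] <- list(K = K, allocation = fit$par, value = fit$value)
  }
  results
}
```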
I don't have a clear vision of what the right way to display final results will be; that may need some exploration. Roughly, the question I have in mind is whether the kind of results we have seen in our single-date exploration hold over longer time spans.
Re: computation:
Might it make sense to split this up into two separate tasks? Maybe there is some code refactoring that should be tried first, and then the two new analyses could be handled separately, right?
Note that this is now addressed in part by work in #12
Currently just one date is used in the application. Is your suggestion to run the analysis that is currently done for a single date across multiple dates and then combine the results, by, say, summing or averaging the alloscores? Or would we keep the analyses separate by date?
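If we did combine across dates, the aggregation step could be as simple as the sketch below. This is only illustrative: the data frame `scores` and its columns `model`, `forecast_date`, and `alloscore` are hypothetical names, not the package's output format. Averaging (rather than summing) keeps models comparable if some dates end up missing for some models.

```r
library(dplyr)

# Hypothetical per-(model, forecast_date) scores combined into one summary per model.
combined <- scores |>
  group_by(model) |>
  summarize(
    mean_alloscore = mean(alloscore, na.rm = TRUE),
    n_dates = n_distinct(forecast_date)
  ) |>
  arrange(mean_alloscore)
```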