As discussed face-to-face, we will use the 365-day fits to these 5 locations to get the empirical mean and stdev (and logmean and logsd for infection feedback) for the R(t) RW step size and infection feedback. We will then use these as priors for the model runs fit to 90 days of our data (first for this same subset of 5 locations) across multiple (6) forecast dates. Compare performance to baseline v0.1.0, and if improved, run the full pipeline.
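A minimal sketch of the prior-extraction step, assuming the 365-day posterior draws are already collected in a long data frame. The object name `long_fit_draws` and the column names `rt_rw_sd` and `infection_feedback` are placeholders for illustration, not the actual wwinference output names:

```r
# Sketch only: summarise posterior draws from the 365-day fits into empirical
# prior hyperparameters for the 90-day runs. All object/column names below are
# assumptions, not the actual wwinference output.
library(dplyr)

# `long_fit_draws`: one row per posterior draw x location, with columns
# `rt_rw_sd` (R(t) random-walk step size) and `infection_feedback`.
prior_summary <- long_fit_draws |>
  summarise(
    # Normal prior hyperparameters for the R(t) random-walk step size
    rt_rw_sd_mean = mean(rt_rw_sd),
    rt_rw_sd_sd   = sd(rt_rw_sd),
    # Lognormal prior hyperparameters for infection feedback,
    # summarised on the log scale
    infection_feedback_logmean = mean(log(infection_feedback)),
    infection_feedback_logsd   = sd(log(infection_feedback))
  )
```

This pools draws across the 5 locations into a single set of hyperparameters; grouping by location first (e.g. with `group_by(location)`) is an alternative if we want location-specific priors.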
This "retrospective tuning" is going to require a bit of reframing of the analysis/manuscript. A few options:
- Hold out the locations we used for tuning from the analysis (treat these as our training data and the rest of the locations as our testing data). This has some cons for comparability to Hub models (though we could also exclude these states from the Hub comparison?), and the states chosen feel a bit arbitrary.
- Keep them in, and be very explicit about the tuning process we used. Reframe the analysis as an evaluation of the current version of the wwinference package model, and explain that the real-time cfa-renewalww submission differed in the following ways: different vintaged data, manual exclusion of hospital admissions data points, human review of the model before submission, and notably a different model structure and different priors (and put this model structure in the MS). Point to the git history to see how the model changed.
Results of fitting to a longer time series are here: https://github.com/cdcent/cfa-forecast-renewal-ww/issues/774