kaitejohnson opened 2 months ago
What would be most useful to start with is some higher-level conceptual guidance. I tried looking at the bayesplot tutorial, but I would need to dig into it, and into the Bayesian statistics behind it, to get much out of it.
Practically, I'm planning to run the model inside a loop over a large number of dates and regions. So I don't necessarily need the nuts and bolts of why an individual model had problems, so much as to know how much trust to place in that model: should I flag that run in my loop so it isn't used for evaluation? Or is it still useful for some purposes?
Right now, I'm thinking about something like:
```r
convergence_flag_df <- wwinference::get_model_diagnostic_flags(
  stan_fit_obj = fit$raw_fit_obj
)
if (max(convergence_flag_df) == 0) {
  # add model fit to a list of successfully run models
  # for CRPS scoring in a later step
}
```
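Fleshed out, the loop I have in mind looks roughly like this (a sketch only: `fit_one()` and `run_grid` are placeholders for however I end up wrapping a single date/region fit, not package functions):

```r
# Sketch of the planned loop; fit_one() and run_grid are hypothetical
# placeholders, not part of wwinference.
good_fits <- list()
for (i in seq_len(nrow(run_grid))) {  # one row per (forecast_date, region)
  fit <- fit_one(run_grid[i, ])
  flags <- wwinference::get_model_diagnostic_flags(
    stan_fit_obj = fit$raw_fit_obj
  )
  if (max(flags) == 0) {
    good_fits[[length(good_fits) + 1]] <- fit  # keep for CRPS scoring later
  }
}
```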
Is this too quick to dismiss model runs that might otherwise be suitable? Are all four of the flags equally problematic? Right now the model in question comes up `TRUE` for `flag_too_many_divergences` but `FALSE` for everything else.
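In case it matters for the answer, here is how I'd triage by individual flag rather than collapsing everything with `max()`. This assumes, as the `max()` check above implies, that the flags come back as a one-row data frame of logicals whose column names match the flag names:

```r
# Report which specific diagnostic flags tripped for a run. Assumes flags is
# a one-row data frame of logicals (e.g. a flag_too_many_divergences column).
flags <- wwinference::get_model_diagnostic_flags(
  stan_fit_obj = fit$raw_fit_obj
)
failed <- names(flags)[vapply(flags, isTRUE, logical(1))]
if (length(failed) > 0) {
  message("Diagnostics failed: ", paste(failed, collapse = ", "))
}
```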
## Goal
The current infrastructure produces flags for specific convergence criteria, with thresholds the user can adjust (defaults are provided). We should provide documentation on how to troubleshoot a model that fails to converge.
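One possible shape for that documentation: walk through the raw cmdstanr diagnostics underlying the flags. A minimal sketch, assuming `fit$raw_fit_obj` is the underlying cmdstanr `CmdStanMCMC` object:

```r
# Sampler-level diagnostics behind the flags (divergences, treedepth, E-BFMI)
diag <- fit$raw_fit_obj$diagnostic_summary()
diag$num_divergent  # divergent transitions per chain

# Per-parameter convergence: look for large rhat and small ESS
summ <- fit$raw_fit_obj$summary()
head(summ[order(-summ$rhat), c("variable", "rhat", "ess_bulk", "ess_tail")])
```

Flag-by-flag guidance could then hang off these outputs, e.g. divergences pointing to posterior geometry issues versus high rhat pointing to poor mixing.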
## Context