sbidari opened 3 days ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 100.00%. Comparing base (8291c02) to head (f13c04d).
:umbrella: View full report in Codecov by Sentry.
I couldn't get the 'timeseries' type plots (plot_lm and plot_ts) to work for the hospital admissions model. I suspect it has something to do with the padding, as those plots work for the basic model implementation. The output from plot_lm would be similar to the figure generated here.
I was not sure if we wanted multiple posterior predictive plots. I can include figures using plot_ppc if you'd like. @damonbayer
I couldn't get the 'timeseries' type plots (plot_lm and plot_ts) to work for the hospital admissions model. I suspect it has something to do with the padding as those plots work for the basic model implementation.
Can you demonstrate those plots working somewhere? In this tutorial or a different one.
I was not sure if we wanted multiple posterior predictive plots. I can include figures using plot_ppc if you'd like. @damonbayer
Yes, please.
I couldn't get the 'timeseries' type plots (plot_lm and plot_ts) to work for the hospital admissions model. I suspect it has something to do with the padding as those plots work for the basic model implementation.
Can you demonstrate those plots working somewhere? In this tutorial or a different one.
Ok! I will add a demonstration of these to the getting started tutorial.
I do not understand why the posterior predictive distribution for the hospital admissions model does not have the same dimension as the observation.
Am I missing something here? @damonbayer
Am I missing something here? @damonbayer
It is because of the padding and/or seeding. I would try padding the data in the InferenceData with missing values for now. If that doesn't work, we can change the model a bit or manually modify the samples.
All of this will be handled better once we have time incorporated throughout the project.
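A minimal sketch of that padding idea, assuming hypothetical lengths for the observed series and the model's time axis (the real lengths come from the tutorial data and model):

```python
import numpy as np

# Hypothetical sizes: the observed series is shorter than the
# posterior predictive time axis because of padding/seeding.
n_datapoints = 90   # length of observed admissions (assumption)
n_timepoints = 120  # length of predictive time axis (assumption)
pad = n_timepoints - n_datapoints

# Stand-in for dat["daily_hosp_admits"].to_numpy().astype(float)
observed = np.arange(n_datapoints, dtype=float)

# Prepend NaNs so the observed data matches the model's time dimension
# when building the InferenceData; arviz treats NaN as missing.
padded = np.concatenate([np.full(pad, np.nan), observed])
```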
That's what I thought too but this is for the model fit without padding shown below:
hosp_model.run(
    num_samples=1000,
    num_warmup=1000,
    data_observed_hosp_admissions=dat["daily_hosp_admits"].to_numpy(),
    rng_key=jax.random.PRNGKey(54),
    mcmc_args=dict(progress_bar=False, num_chains=1),
)
Since we are not padding the observation data here, padding shouldn't come into play, right?
Since we are not padding the observation data here, padding shouldn't come into play, right?
It's just because of the seeding, then.
I do not understand why the posterior predictive distribution for the hospital admissions model does not have the same dimension as the observation.
Am I missing something here? @damonbayer
Try supplying the argument data_observed_hosp_admissions instead of n_timepoints_to_simulate. I think it would be hosp_model.posterior_predictive(data_observed_hosp_admissions=dat["daily_hosp_admits"].to_numpy().astype(float)).
You could save that as its own variable somewhere, since we have now used it twice.
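For example (a sketch with a stand-in data frame; `observed_admissions` is a hypothetical variable name, and the toy values are not the tutorial's data):

```python
import pandas as pd

# Stand-in for the tutorial's dat data frame (hypothetical values).
dat = pd.DataFrame({"daily_hosp_admits": [5, 8, 13, 21, 34]})

# Convert once, then reuse in both run() and posterior_predictive().
observed_admissions = dat["daily_hosp_admits"].to_numpy().astype(float)
```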
Try supplying the argument data_observed_hosp_admissions instead of n_timepoints_to_simulate. I think it would be hosp_model.posterior_predictive(data_observed_hosp_admissions=dat["daily_hosp_admits"].to_numpy().astype(float)).
This solves the dimension issue but now the posterior predictive samples are just observed data.
This solves the dimension issue but now the posterior predictive samples are just observed data.
Oops. We will need to change the model code.
I noticed that for both prior_predictive and posterior_predictive, if called with the data_observed_hosp_admissions=dat["daily_hosp_admits"].to_numpy().astype(float) argument, the predictive distribution returned is simply the observed data.
Add an 'hdi_type' plot with equal-tailed credible intervals to demonstrate the posterior predictive distribution using arviz
(second try after closing previous one due to mangled commit history)
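For reference, the equal-tailed interval computation underlying such a plot is just per-timepoint quantiles of the posterior predictive draws. A sketch with toy draws (the actual plotting would use arviz on the real InferenceData):

```python
import numpy as np

# Toy posterior predictive draws (hypothetical), shape (n_draws, n_timepoints).
rng = np.random.default_rng(0)
draws = rng.poisson(lam=10.0, size=(1000, 30))

# Equal-tailed 95% credible interval: 2.5% and 97.5% quantiles per timepoint.
lower, upper = np.quantile(draws, [0.025, 0.975], axis=0)
median = np.median(draws, axis=0)
```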