As much as possible, we will separate interpretation (#35) from validation.
Ask yourself: how useful is my model at explaining patterns in the data? Is there variability/uncertainty in the data that my model does not capture well?
Trends vs. uncertainty. In regression, the uncertainty the model leaves unexplained shows up as the residuals.
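To make that concrete, here is a minimal sketch (simulated data and scikit-learn; all names and values are illustrative, not from the eventual section) showing that the residuals are what is left after the fitted trend is removed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated data: a linear trend plus noise (the "uncertainty").
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=100)

# Fit the trend.
model = LinearRegression().fit(x.reshape(-1, 1), y)
trend = model.predict(x.reshape(-1, 1))

# Residuals = data minus fitted trend: the variability the model
# does not explain.
residuals = y - trend
print(residuals.std())  # close to the simulated noise scale (1.5)
```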
Reporting/quantifying uncertainty.
Brief interlude on probability: Bayesian vs. frequentist.
Different sources of uncertainty: measurement uncertainty and fitting uncertainty. (In multilevel models you also have modelled uncertainty, e.g. random effects.) (We would have foreshadowed this in the intro.)
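A small sketch may help distinguish the two (simulated data again; the bootstrap here is just one illustrative way to expose fitting uncertainty): the noise scale we simulate plays the role of measurement uncertainty, while the spread of the estimated slope across resamples is the fitting uncertainty.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=50)  # measurement uncertainty

# Fitting uncertainty: how much the estimated slope moves when the
# data are resampled (a simple bootstrap).
slopes = []
for _ in range(1000):
    idx = rng.integers(0, len(x), size=len(x))  # resample with replacement
    fit = LinearRegression().fit(x[idx].reshape(-1, 1), y[idx])
    slopes.append(fit.coef_[0])

print(np.std(slopes))  # spread of the slope estimate = fitting uncertainty
```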
We want to learn something general. Fitting is easy, prediction is hard. This important notion underlies most model evaluation.
Overfitting: model complexity vs. out-of-sample prediction (variance; motivates regularization).
Underfitting: not enough useful information (bias). (See the sketch below.)
Carry over M3's visuals and underlying data.
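The sketch below (simulated data; the polynomial degrees are illustrative) shows the standard picture we would pair with those visuals: an overly simple model underfits (high train and test error), while an overly flexible one overfits (low train error, high test error).

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=60)
y = np.sin(x) + rng.normal(scale=0.3, size=60)
x_train, y_train = x[:40].reshape(-1, 1), y[:40]  # fit on these
x_test, y_test = x[40:].reshape(-1, 1), y[40:]    # held out

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    # degree 1 underfits (bias): both errors high.
    # degree 15 overfits (variance): train error low, test error high.
    print(degree, round(train_err, 3), round(test_err, 3))
```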
But how do we assess out-of-sample error concretely?
Cross validation (see the sketch after this list)
Simulations (do they qualitatively match your data?)
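For cross validation specifically, a minimal sketch (scikit-learn's `cross_val_score`; the data and degrees are illustrative) of how holding each fold out once yields an out-of-sample error estimate:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
x = rng.uniform(-3, 3, size=60).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=60)

# 5-fold cross validation: each fold is held out once, so every score
# estimates out-of-sample error.
for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, x, y, cv=5,
                             scoring="neg_mean_squared_error")
    print(degree, -scores.mean())  # mean out-of-sample MSE
```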
Estimation
We will need lots of figures, but I think the actual content of this section is straightforward. 16 hrs.