Modelling the unmeasured confounder leads to non-identifiable models, although this may not be a problem with longitudinal outcomes and a time-invariant unmeasured confounder.
Instead, the key idea is to "reparametrize the model in such a way that the distribution of the data is determined completely by a smaller collection of parameters for which $\sqrt{n}$-consistent estimation is expected".
Title of paper: A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding
Link: https://pubmed.ncbi.nlm.nih.gov/28386994/
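To make the sensitivity-analysis idea concrete, here is a minimal Monte Carlo sketch (not the paper's exact procedure) for a binary unmeasured confounder and a continuous outcome: the bias parameters are drawn from assumed distributions and a simple bias-adjusted effect estimate is computed for each draw. The simulated data, priors, and the simple difference-in-means adjustment are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate "observed" data with a hidden confounder U (illustrative) ---
n = 5_000
u = rng.binomial(1, 0.4, size=n)            # unmeasured confounder
a = rng.binomial(1, 0.2 + 0.5 * u)          # treatment depends on U
y = 1.0 * a + 2.0 * u + rng.normal(size=n)  # true treatment effect = 1.0

# naive (confounded) difference in means
naive = y[a == 1].mean() - y[a == 0].mean()

# --- Monte Carlo sensitivity analysis over assumed bias parameters -------
#   gamma  : effect of U on Y
#   p1, p0 : prevalence of U among treated / untreated
n_draws = 10_000
gamma = rng.normal(2.0, 0.5, size=n_draws)
p1 = rng.beta(7, 3, size=n_draws)
p0 = rng.beta(2, 8, size=n_draws)

# simple bias-adjusted estimate for each draw of the bias parameters
adjusted = naive - gamma * (p1 - p0)

print(f"naive estimate       : {naive:.2f}")
print(f"adjusted (median)    : {np.median(adjusted):.2f}")
print(f"adjusted 95% interval: ({np.percentile(adjusted, 2.5):.2f}, "
      f"{np.percentile(adjusted, 97.5):.2f})")
```

The adjustment `naive - gamma * (p1 - p0)` is the standard simple bias formula for an additive outcome model with a binary confounder; a Bayesian analysis along the lines of the paper would instead place these priors inside a full model for the data, which is where the identifiability and reparametrization concerns above come in.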