Closed by dscolby 7 months ago
This would not necessarily apply to interrupted time series analysis because we are trying to see how much an omitted predictor would change the counterfactual predictions, and therefore the estimated effect size. In this case, I think it is better to directly simulate an omitted predictor, or several of them.
Also, that way of calculating E-values is not correct.
ITS currently conducts sensitivity analysis by generating random variables and re-estimating the causal effect. Instead, we should implement E-values, as proposed in VanderWeele, Tyler J., and Peng Ding. "Sensitivity Analysis in Observational Research: Introducing the E-Value." Annals of Internal Medicine 167, no. 4 (2017): 268-274. Besides ITS, we should implement this as a test of confounding/exchangeability for all estimators.
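For reference, VanderWeele and Ding's E-value for a risk ratio RR >= 1 is RR + sqrt(RR * (RR - 1)). A minimal sketch (the function name `e_value` is my own, not part of any existing API):

```python
import math

def e_value(rr: float) -> float:
    """E-value on the risk ratio scale (VanderWeele & Ding, 2017).

    The E-value is the minimum strength of association, on the risk ratio
    scale, that an unmeasured confounder would need to have with both the
    treatment and the outcome to fully explain away the observed estimate.
    """
    if rr < 1:
        rr = 1 / rr  # the formula assumes RR >= 1; invert protective effects
    return rr + math.sqrt(rr * (rr - 1))
```

A null effect (RR = 1) gives an E-value of 1, and the E-value grows with the strength of the observed association.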
The steps to implement are:

1. Start with the observed effect estimate (e.g., mean difference or average treatment effect) from your study, denoted MEAN_OBS.
2. Calculate the lower and upper confidence limits (LL and UL) for MEAN_OBS from the confidence interval of your statistical analysis.
3. To calculate MD_U (minimum strength for the upper limit), start with UL and set up the equation MD_U * MEAN_OBS = UL. Solving for MD_U gives MD_U = UL / MEAN_OBS.
4. To calculate MD_L (minimum strength for the lower limit), start with LL and set up the equation MD_L * MEAN_OBS = LL. Solving for MD_L gives MD_L = LL / MEAN_OBS.
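The steps above can be sketched as follows. This follows the stated equations literally; the function name `confounding_ratios` is my own, chosen for illustration:

```python
def confounding_ratios(mean_obs: float, ll: float, ul: float) -> tuple[float, float]:
    """Ratios of the confidence limits to the observed effect estimate.

    MD_L and MD_U are the factors by which MEAN_OBS would have to be
    scaled to reach the lower and upper confidence limits, respectively.
    """
    if mean_obs == 0:
        raise ValueError("MEAN_OBS must be nonzero")
    md_l = ll / mean_obs  # solves MD_L * MEAN_OBS = LL
    md_u = ul / mean_obs  # solves MD_U * MEAN_OBS = UL
    return md_l, md_u
```

For example, an observed mean difference of 2.0 with a confidence interval of (1.0, 3.0) yields MD_L = 0.5 and MD_U = 1.5.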
It's common for researchers to consider an E-value greater than 2 as indicative of a relatively strong association, meaning that unmeasured confounding would need to be at least twice as strong as the observed effect to explain it away. However, this is a heuristic, and the specific threshold for what is considered "high" may vary based on the field and the judgment of the researchers involved in a given study.