dscolby / CausalELM.jl

Taking causal inference to the extreme!
https://dscolby.github.io/CausalELM.jl/
MIT License

Change how the perturbations are calculated to test the counterfactual consistency assumption #74

Closed dscolby closed 1 month ago

dscolby commented 1 month ago

When testing the sensitivity of the model to the counterfactual consistency assumption, we assume the potential outcomes differ from the outcomes observed in the dataset. We operationalize this by generating perturbed versions of the outcomes, which simulate potential outcomes that differ from what we observed, and then re-estimate the model on this alternative data to see how much the estimates change. Right now we simulate violations of the counterfactual consistency assumption by adding noise drawn from N(0, σ(y)). The issue is that when the outcome has high variance, the added noise can be larger than the outcome itself, which is probably unrealistic and would make any result appear extremely sensitive to a violation of the counterfactual consistency assumption. Instead, we could draw noise from a normal distribution with a given variance, multiply it by the actual outcome, and add the product to the outcome, so the perturbation scales with the magnitude of each observation.

dscolby commented 1 month ago

This is of course only for continuous outcomes.
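The proposal for continuous outcomes can be sketched as follows. This is a minimal illustration, not the code from the commit; the function name `perturb_continuous` and the noise-scale parameter `σ` are assumptions for the example.

```julia
using Random

# Sketch of the proposed multiplicative perturbation for continuous outcomes.
# ε ~ N(0, σ²) is drawn per observation and scaled by the observed outcome,
# so the perturbation stays proportional to each outcome's magnitude.
# `σ` is a hypothetical name for the chosen noise scale.
function perturb_continuous(y::Vector{Float64}, σ::Float64; rng=Random.default_rng())
    ε = σ .* randn(rng, length(y))  # one draw of ε ~ N(0, σ²) per observation
    return y .+ y .* ε              # y * (1 + ε): noise scales with the outcome
end

y = [1.0, 10.0, 100.0]
y_perturbed = perturb_continuous(y, 0.05)
```

With additive N(0, σ(y)) noise, the small outcome 1.0 could receive a perturbation on the order of the sample standard deviation; here its expected absolute perturbation is only about 5% of its own value.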

dscolby commented 1 month ago

For the discrete case we should also flip the inequality operator; otherwise we would essentially be generating random data. That is, we should use counterfactual_Y = ifelse.(rand() < dev, Float64(rand(min_y:max_y)), y) instead of counterfactual_Y = ifelse.(rand() > dev, Float64(rand(min_y:max_y)), y), so that each outcome is replaced with probability dev rather than 1 - dev.
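The corrected discrete-case logic can be sketched like this. Note this is an illustration rather than the library's implementation: the function name `perturb_discrete` is assumed, and the comparison and replacement are drawn per element (in the inline snippet above, the unbroadcast `rand()` would be evaluated only once for the whole vector).

```julia
using Random

# Sketch of the discrete-case perturbation with the corrected `<` operator.
# Each observed outcome is replaced, with probability `dev`, by a uniform
# draw from the observed support [min_y, max_y]; with `>` the replacement
# probability would be 1 - dev, turning most of y into random data.
function perturb_discrete(y::Vector{Float64}, dev::Float64; rng=Random.default_rng())
    min_y, max_y = extrema(y)
    # draw the comparison and the replacement per element, not once for all of y
    return map(yᵢ -> rand(rng) < dev ? Float64(rand(rng, min_y:max_y)) : yᵢ, y)
end
```

With a small `dev` (e.g. 0.05), most outcomes are kept and only a few are swapped, which is the intended mild violation of counterfactual consistency.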

dscolby commented 1 month ago

Changed in commit cb0845f.