Sheng154 opened this issue 1 year ago
When performing causal inference (e.g., estimating the ATE), we compute the expected difference between the target variable under the treatment intervention and under the control intervention. However, the difference between two discrete states may not be a meaningful quantity. For instance, you can take a continuous variable and bin it, but the bin IDs (discrete states) assigned to the bins are arbitrary labels: permuting them changes the numeric difference between states without changing the underlying data.
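To make the label-arbitrariness concrete, here is a small NumPy sketch (illustration only, not part of the causalai API). It bins a continuous outcome, computes a naive difference in mean bin IDs between treated and control groups, and then shows that simply relabeling the bins changes the result even though the data is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous outcome for control and treatment groups.
y_control = rng.normal(loc=0.0, scale=1.0, size=10_000)
y_treated = rng.normal(loc=0.5, scale=1.0, size=10_000)

# Discretize the outcome into 4 bins; bin IDs 0..3 happen to be ordinal here.
edges = np.quantile(np.concatenate([y_control, y_treated]), [0.25, 0.5, 0.75])
b_control = np.digitize(y_control, edges)
b_treated = np.digitize(y_treated, edges)

# Naive "ATE" computed on the bin IDs with the original labels.
ate_original = b_treated.mean() - b_control.mean()

# Permute the bin labels (0,1,2,3) -> (2,0,3,1): the data is unchanged,
# but the numeric difference between discrete states is completely different.
perm = np.array([2, 0, 3, 1])
ate_permuted = perm[b_treated].mean() - perm[b_control].mean()

print(f"ATE on bin IDs (original labels): {ate_original:.3f}")
print(f"ATE on bin IDs (permuted labels): {ate_permuted:.3f}")
```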
One thing that probably differentiates the causal inference module of causalAI from other libraries is our in-house method (the causal path method) for estimating causal effects, as opposed to methods like backdoor or frontdoor adjustment. In practice, we found that our estimator has much lower variance and was therefore able to work well with fewer samples (lower sample complexity).
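For context, here is a minimal sketch of the standard backdoor adjustment (stratifying on a confounder) that this comment contrasts against. It uses synthetic data and plain NumPy; it is a generic illustration of the baseline approach, not causalAI's causal path estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Synthetic data with one binary confounder Z affecting both T and Y.
z = rng.binomial(1, 0.4, size=n)            # confounder
t = rng.binomial(1, 0.2 + 0.6 * z)          # treatment assignment depends on Z
y = 2.0 * t + 1.5 * z + rng.normal(size=n)  # true ATE = 2.0

# Backdoor adjustment: average the per-stratum treated/control difference,
# weighted by P(Z = z).
ate = 0.0
for zval in (0, 1):
    stratum = z == zval
    effect = y[stratum & (t == 1)].mean() - y[stratum & (t == 0)].mean()
    ate += effect * stratum.mean()

naive = y[t == 1].mean() - y[t == 0].mean()
print(f"naive difference in means: {naive:.2f}")  # biased upward by Z
print(f"backdoor-adjusted ATE:     {ate:.2f}")    # close to 2.0
```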
Hi, the tutorial suggests that the target variable has to be continuous when doing causal inference. I wonder whether there are alternative ways to do inference on discrete data (where both the intervention and target variables are discrete)? Also, I've learned that most Bayesian network applications prefer discrete data, with continuous data discretised before learning. What makes this package different from similar ones such as causalnex or pgmpy, such that it is especially efficient at causal inference with continuous data? Thanks.