The tool does not use a fixed random seed, so re-running the analysis can produce different answers. In particular, the ML outcome models are nondeterministic across runs.
[ ] Add text alerting users that results may vary between runs
[ ] Possibly look into using bootstrapping (or another method) to get an interval on the causal estimate
Are there any cases where a result could flip between significant and not significant based solely on the outcome model?
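One way to address the interval question above is a percentile bootstrap around the point estimate. The sketch below is a minimal, hypothetical illustration: `estimator` stands in for whatever function the tool uses to compute the causal effect (not an actual API of the tool), and fixing the RNG seed also shows how the run-to-run variability could be pinned down.

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a scalar estimate.

    `estimator` maps a resampled dataset to a point estimate; here it is a
    placeholder for the tool's causal-effect estimation step (hypothetical).
    """
    rng = np.random.default_rng(seed)  # fixed seed => reproducible interval
    n = len(data)
    # Resample with replacement and re-estimate n_boot times.
    estimates = [estimator(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Toy usage: treat the sample mean as the "effect" on synthetic data.
rng = np.random.default_rng(42)
sample = rng.normal(loc=0.3, scale=1.0, size=500)
low, high = bootstrap_ci(sample, np.mean)
# If the interval excludes 0, the effect is significant at the 5% level,
# which gives a direct check on whether significance could flip.
```

Reporting the interval rather than only the point estimate would also make any significant/not-significant flip visible to the user.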