AlexiaJM opened 3 years ago
After re-running the example with multiple seeds, I see that BART consistently gives a CI that does not include 0, while TMLE sometimes gives a significantly negative estimate, sometimes a significantly positive one, and sometimes a non-significant result. So TMLE really seems to work poorly here, even when using dbarts. I am not sure why, but I guess this answers my question about which method to trust more. Maybe you should use this example as a strong argument for your method.
I'm not an expert in TMLE so I can't really speak to that, but I have noticed that the adjustment makes a pretty large difference in small sample sizes.
Additionally, after digging into the tmle package, I'm noticing a big difference in how the response is estimated even before the TMLE adjustment is done. Apparently the use of SuperLearner is the issue, as it is not used in the function `tmle:::tmle.SL.dbarts.k.5`. Not that training MSE is anything to go by, but the result from the SuperLearner call on the (scaled) data inside `tmle:::estimateQ` is 0.08, while for `tmle.SL.dbarts.k.5` it is 0.003. A simple `lm` gets 0.02, so something strange seems to be happening.
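For reference, the MSE comparison described above can be sketched roughly like this. The data-generating process here is a placeholder (not the actual example data), `tmle:::estimateQ` is internal so this calls the learners directly, and I'm assuming `tmle.SL.dbarts.k.5` follows the standard SuperLearner wrapper signature:

```r
library(SuperLearner)
library(tmle)

set.seed(1)
n <- 250
W <- data.frame(x1 = rnorm(n), x2 = rnorm(n))  # placeholder covariates
Y <- W$x1 + rnorm(n)                           # placeholder response

# dbarts wrapper routed through SuperLearner (as in tmle:::estimateQ)
sl <- SuperLearner(Y = Y, X = W, SL.library = "tmle.SL.dbarts.k.5")
mean((Y - sl$SL.predict)^2)

# the same dbarts wrapper called directly (assumed SL wrapper signature)
fit <- tmle.SL.dbarts.k.5(Y = Y, X = W, newX = W,
                          family = gaussian(), obsWeights = rep(1, n))
mean((Y - fit$pred)^2)

# simple linear-model baseline for comparison
mean(residuals(lm(Y ~ ., data = W))^2)
```

If the first two training MSEs differ by an order of magnitude on the same data, that would reproduce the discrepancy described above.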
Thanks for looking into the example!
I am almost always getting non-significant results with bartCause and almost always very significant results with the tmle package (when using only dbarts in tmle). I have no idea which package to trust more (although the literature says to trust BART more).
Let's look at a simple example; let's say we use the example in the documentation of bartCause.
bartCause output
TMLE output
As you can see, what should be roughly the same approach (since both use dbarts with the same settings) gives an effect of 1.7 with CI (-0.5, 4.0) for bartCause and an effect of 0.84 with CI (0.19, 1.49) for TMLE.
1) Which method is more likely to be right? It seems like a big difference in estimate and CI. Is the causal effect of z zero in this case (since z seems to be caused by x)?
2) Do you have any idea why the results can be so different even though both packages use dbarts?
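For concreteness, here is a minimal sketch of the comparison being described. The data-generating process (names x, z, y; effect size) is an assumption, not the exact bartCause documentation example, and the dbarts-only learner names for tmle are assumed from its defaults:

```r
library(bartCause)
library(tmle)

set.seed(1)
n <- 250
x <- matrix(rnorm(n * 3), n, 3)       # confounders
z <- rbinom(n, 1, pnorm(x[, 1]))      # treatment caused by x
y <- x[, 2] + z + rnorm(n)            # outcome, true effect = 1

# BART-based causal inference
fit_bart <- bartc(y, z, x)
summary(fit_bart)

# TMLE restricted to dbarts-based learners for both Q and g
fit_tmle <- tmle(Y = y, A = z, W = data.frame(x),
                 Q.SL.library = "tmle.SL.dbarts2",
                 g.SL.library = "tmle.SL.dbarts.k.5")
summary(fit_tmle)
```

Re-running this with different seeds is what produces the pattern noted above: stable BART intervals but TMLE point estimates that swing between significantly negative and significantly positive.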