Closed Blanch-Font closed 5 years ago
One reason I've refrained from doing that is that I don't think propensity scores are useful output. They shouldn't be used to assess covariate balance, because you can assess balance on the covariates directly, and because of the propensity score tautology, they aren't needed once the weights have been estimated. Often the propensity score is nonsensical after estimating the weights. With the GBM, CBPS, and SuperLearner methods, there is no guarantee that each individual's propensity scores will sum to 1. I think it's a good idea, but I believe in moving away from the propensity score, which is why I prefer entropy balancing and optimization-based weighting to other weighting methods; these methods don't estimate a propensity score and instead seek balance directly.
I think your idea is a good one, but because it goes against my beliefs about best practices, I'm not going to implement it. You can use `include.obj = TRUE` in the call to `weightit()` to have the original fit objects (e.g., the GBM objects from twang) returned in the `weightit` output.
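To illustrate the workaround, here is a minimal sketch of using `include.obj = TRUE` and then extracting propensity-type predictions from the stored fit object yourself. This assumes the `lalonde` example data from the cobalt package and the `"gbm"` method; the name of the stored component (`obj`) and the argument names follow the WeightIt documentation, but check `?weightit` for your installed version, since methods and slot names have changed across releases.

```r
# Sketch, not a definitive recipe: assumes WeightIt, cobalt, and gbm
# are installed, and that the stored fit object is in W$obj.
library(WeightIt)
library(cobalt)  # provides the lalonde example dataset

# Estimate GBM-based weights and ask WeightIt to keep the raw fit object
W <- weightit(treat ~ age + educ + race + married + nodegree + re74 + re75,
              data = lalonde, method = "gbm", estimand = "ATT",
              include.obj = TRUE)

# The original fit is returned alongside the weights; predictions
# (i.e., the propensity scores the OP asked for) can be recovered
# from it directly rather than from a 'ps' field.
fit <- W$obj
```

This keeps WeightIt's output focused on the weights while still letting users who want per-treatment probabilities compute them from the underlying model.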
I think a great improvement would be to include a data.frame in the `ps` field with the probability of each treatment, like what `glm` provides via `predict(model, type = 'response')`.