uber / causalml

Uplift modeling and causal inference with machine learning algorithms

Validating model via predictions #732

Open cheecharron opened 5 months ago

cheecharron commented 5 months ago

Hello. I am analyzing trial results in which there are four treatments. Although the modeled results return contrasts between three of the treatments and a designated control group, I would like to generate estimates of the outcome for each treatment so that I can validate the predictions against the observed values in the holdout sample. A similar issue (#290) was raised before for generating predictions with two treatment groups, and the following code was forwarded by @Thomas9292 for generating predictions:

# Imports (not shown in the original snippet)
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from causalml.inference.meta import XGBTRegressor

# Create holdout set
X_train, X_test, t_train, t_test, y_train, y_test_actual = train_test_split(
    df_confounder, df_treatment, target, test_size=0.2
)

# Fit learner on training set
learner = XGBTRegressor()
learner.fit(X=X_train, treatment=t_train, y=y_train)

# Predict the TE for test, and request the components (predictions for t=1 and t=0);
# with return_components=True, the components come back as dicts keyed by treatment group
te_test_preds, yhat_c, yhat_t = learner.predict(X_test, t_test, return_components=True)

# Mask the yhats to correspond with the observed treatment (we can only test accuracy for those)
yhat_c = yhat_c[1] * (1 - t_test)
yhat_t = yhat_t[1] * t_test
yhat_test = yhat_t + yhat_c

# Model prediction error
MSE = mean_squared_error(y_test_actual, yhat_test)
print(f"{'Model MSE:':25}{MSE}")

# Also plotted actuals vs. predictions in here, will spare you the code

# Also plotted actuals vs. predictions in here, will spare you the code

Apparently the above code generates predicted outcomes for the treatment group (yhat_t) and control group (yhat_c). When I apply this code to my data, yhat_t returns three vectors, which I assume correspond to predictions for each of the three non-control treatments. However, yhat_c also returns three vectors. Do the three yhat_c vectors represent three different predictions for the control groups? Whatever the case, how might I generate predictions for each treatment, including the control group?

t-tte commented 4 months ago

It's expected that the T-Learner returns a number of control vectors corresponding to the number of treatments. This is because the implementation simply loops over each treatment and estimates a separate model for it. So the yhat_cs are the predicted control outcomes from each of those separate models. To get the predicted control outcome for the units in each of the three treatment groups, you need to apply a similar masking as in the code snippet that you provided, for each of the three control vectors. So, for the units in the first treatment group, you select the control observations from the first control vector. And so on for the remaining pairs.
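For the multi-treatment case, the masking might look like the following minimal NumPy sketch. The synthetic `yhat_cs`/`yhat_ts` dicts stand in for the components returned by `predict(..., return_components=True)`; the integer labels and the choice to average the three control vectors for control units are illustrative assumptions, not library behavior:

```python
import numpy as np

# Synthetic setup: control label 0 and three treatment labels 1, 2, 3
rng = np.random.default_rng(0)
n = 12
t_test = rng.integers(0, 4, size=n)  # observed assignment per unit
treatment_groups = [1, 2, 3]

# Stand-ins for the component dicts: the T-Learner fits one pairwise model
# per treatment, so there is one control vector and one treatment vector each
yhat_cs = {g: rng.normal(size=n) for g in treatment_groups}
yhat_ts = {g: rng.normal(size=n) for g in treatment_groups}

# Build a single prediction vector aligned with the observed assignment:
# units observed in treatment g get that group's treatment prediction
yhat_test = np.zeros(n)
for g in treatment_groups:
    mask = t_test == g
    yhat_test[mask] = yhat_ts[g][mask]

# Control units need a control prediction; here we (arbitrarily) average
# the three pairwise models' control vectors -- picking one is also valid
yhat_c_avg = np.mean([yhat_cs[g] for g in treatment_groups], axis=0)
yhat_test[t_test == 0] = yhat_c_avg[t_test == 0]
```

The resulting `yhat_test` can then be compared against `y_test_actual` with `mean_squared_error` exactly as in the two-treatment snippet above.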

cheecharron commented 4 months ago

Thanks for your response, @t-tte. I see that I can generate predictions for each of the treatment groups and even for the control group for the T-learner, but it looks like I cannot generate predictions for the control group for other meta-learners (although I can generate predictions for the treatment groups for all but R-learners). Is it possible to generate predictions for outcomes for the treatment groups for an uplift tree? That is, I'd rather have the prediction for the raw outcome for each treatment than an uplift score for each treatment so that I can compare to observed values.

ras44 commented 2 weeks ago

Hi @cheecharron, as you mentioned, you can get conditional response surface predictions for T-learners via the return_components argument:

https://github.com/uber/causalml/blob/a0315660d9b14f5d943aa688d8242eb621d2ba76/causalml/inference/meta/tlearner.py#L143

For the S-Learner, it's similar: https://github.com/uber/causalml/blob/a0315660d9b14f5d943aa688d8242eb621d2ba76/causalml/inference/meta/slearner.py#L94

For the R meta-learner, it's not so much about predicting the treatment or control conditional response surfaces as about directly estimating the CATE through Robinson's transformation, which uses the conditional mean outcome and the propensity score (see https://arxiv.org/pdf/1712.04912). So in this case you could calculate the counterfactual for each of your treatment and control cases by adding/subtracting the predicted CATE, but the R-learner does not actually model the "factual" surfaces, and so it doesn't produce a prediction that you can compare to an observed outcome as described in https://github.com/uber/causalml/issues/290.
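The add/subtract step described above might look like this minimal sketch, where `cate_pred` stands in for per-unit CATE predictions from an R-learner (all arrays here are synthetic, binary-treatment placeholders):

```python
import numpy as np

y_obs = np.array([3.0, 5.0, 4.0, 6.0])      # observed outcomes
t_obs = np.array([0, 1, 0, 1])              # observed assignment (binary case)
cate_pred = np.array([1.0, 1.5, 0.5, 2.0])  # predicted CATE per unit

# Counterfactual outcome: add the CATE for control units (what if treated),
# subtract it for treated units (what if untreated)
y_cf = np.where(t_obs == 0, y_obs + cate_pred, y_obs - cate_pred)
# y_cf -> [4.0, 3.5, 4.5, 4.0]
```

Note that this only imputes counterfactuals from the estimated effect; as stated above, it does not give you a factual prediction to validate against the observed outcomes.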

For the DR meta learner, the situation is similar (ref equations 2 and 3 in https://arxiv.org/pdf/2004.14497).

Regarding outcome predictions for an uplift tree, see the full_output argument at: https://github.com/uber/causalml/blob/a0315660d9b14f5d943aa688d8242eb621d2ba76/causalml/inference/tree/uplift.pyx#L2519