We have predictors that reach a fairly high AUC under a "naive" cross-validation (over the merged dataset). The score is nearly the same with and without treatment features, which is discouraging in the context of precision medicine. However, we have not yet tried the following simple test: take the predictive model whose input includes treatments, and try to maximize the predicted positive outcomes while treating the treatment as a free variable. That is, we have real data in which patients received particular treatments and had particular outcomes, yielding some overall survival rate; we then try to improve that survival rate by virtually assigning each patient a different treatment. It may turn out that the predictor is effectively independent of the treatment, but who knows...
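The test above can be sketched roughly as follows. This is a minimal illustration on synthetic data, not our actual pipeline: the model, feature layout, and data generator are all placeholder assumptions. The idea is just to score every candidate treatment for every patient and compare the mean predicted survival under the recorded treatments against the mean under the per-patient best treatments.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: covariates, a categorical
# treatment, and a binary outcome (1 = survival).
n, d, n_treatments = 500, 5, 3
X = rng.normal(size=(n, d))
t = rng.integers(0, n_treatments, size=n)
# The outcome depends on an interaction between one covariate and the
# treatment, so the "best" treatment genuinely differs per patient.
logit = X[:, 0] * (t - 1) + 0.5 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def featurize(X, t):
    """Model input includes the treatment, one-hot encoded."""
    return np.hstack([X, np.eye(n_treatments)[t]])

model = GradientBoostingClassifier(random_state=0).fit(featurize(X, t), y)

# Mean predicted survival under the treatments patients actually received.
observed = model.predict_proba(featurize(X, t))[:, 1].mean()

# Counterfactual search: score every candidate treatment for every
# patient and keep the one with the highest predicted survival.
scores = np.column_stack([
    model.predict_proba(featurize(X, np.full(n, k)))[:, 1]
    for k in range(n_treatments)
])
best_t = scores.argmax(axis=1)
optimized = scores.max(axis=1).mean()

print(f"mean predicted survival, recorded treatments:  {observed:.3f}")
print(f"mean predicted survival, optimized treatments: {optimized:.3f}")
# If the two numbers are nearly equal, the model's output barely
# depends on the treatment input -- the discouraging scenario.
```

By construction `optimized >= observed`, since the per-patient maximum includes the recorded treatment; the interesting question is whether the gap is materially larger than zero.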