I have a question about what is likely not a bug but rather a technical feature of cross-validation in caret. In the minimal example below, I demonstrate (using mtcars) that identical data submitted under two different labels (A, B) in a single dataframe yields cross-validated model scores that are substantially higher for the control label (A) than for the case label (B). I had hoped that, for this negative-control experiment, the AUC would be approximately 0.5.
Is there a reason for this behavior?
This bias toward AUC < 0.5 is also observed with other approaches such as logistic regression. It persists when the input data are randomized before training, when substantially more identical data are added to the negative-control dataset, and when the LOOCV or boot resampling methods are used instead. Adding a small amount of noise to the data does not change the outcome much either.
However, adding a small signal that distinguishes B from A correctly generates AUCs above 0.5 and CV scores with the appropriate directionality.
The results make me think that some tie-breaking mechanism chooses the control label (A) in the absence of signal.
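If such a tie-breaking step exists, one plausible place for it (purely a guess, not a confirmed account of caret's internals) is the conversion of tied class probabilities into hard labels, since R's which.max() returns the first element among ties:

# Toy illustration of the hypothesized tie-break: with identical class
# probabilities, which.max() picks the first factor level, i.e. "A".
probs <- c(A = 0.5, B = 0.5)
names(probs)[which.max(probs)]
#> [1] "A"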
Thank you for your time.
Minimal, reproducible example:
Minimal dataset:
library(caret)
library(pROC)
library(randomForest)
set.seed(12345)
# ----
# input ml data is mtcars repeated exactly twice
# and given A and B labels for CV training
mlInputData <- rbind(mtcars, mtcars)
mlInputData$varOfInterest <- c(rep("A", nrow(mtcars)), rep("B", nrow(mtcars)))
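A quick sanity check confirms that the feature values for the two labels are element-wise identical (row names aside):

# The A-labelled and B-labelled rows carry identical feature values.
aRows <- mlInputData[mlInputData$varOfInterest == "A", names(mtcars)]
bRows <- mlInputData[mlInputData$varOfInterest == "B", names(mtcars)]
all(aRows == bRows)
#> [1] TRUE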
Minimal, runnable code:
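A minimal sketch of the training step, assuming 5-fold CV with twoClassSummary and caret's rf method (the resampling scheme, tuning, and ROC setup here are assumptions, not the only way to run it):

# Sketch only: resampling and model settings below are assumptions.
mlInputData$varOfInterest <- factor(mlInputData$varOfInterest, levels = c("A", "B"))

trCtrl <- trainControl(method = "cv", number = 5,   # swap in "LOOCV" or "boot"
                       classProbs = TRUE,           # to reproduce the variants above
                       summaryFunction = twoClassSummary,
                       savePredictions = "final")

rfFit <- train(varOfInterest ~ ., data = mlInputData,
               method = "rf", metric = "ROC", trControl = trCtrl)

# AUC from the held-out CV predictions; "A" = control, "B" = case.
# direction is fixed so a bias toward "A" can show up as AUC < 0.5
# (pROC's automatic direction selection would otherwise flip it).
rocObj <- roc(response = rfFit$pred$obs,
              predictor = rfFit$pred$B,
              levels = c("A", "B"), direction = "<")
auc(rocObj)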
Plot 1: [image]
Plot 2: [image]