zachmayer / caretEnsemble

caret models all the way down :turtle:
http://zachmayer.github.io/caretEnsemble/

Ensemble and stack wrong predictions #228

Closed iamdeit closed 1 month ago

iamdeit commented 7 years ago

I'm combining multiple models using caretEnsemble and caretStack, but when I predict on the training data (I know I shouldn't), it seems the model is doing the opposite. For example:

Training correct outcome:

  [1] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [19] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [37] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [55] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [73] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [91] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[109] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[127] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[145] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[163] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[181] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[199] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[217] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[235] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[253] yes yes yes yes yes yes yes yes yes yes no  no  no  no  no  no  no  no 
[271] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[289] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[307] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[325] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[343] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[361] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[379] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[397] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[415] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[433] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[451] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[469] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[487] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[505] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[523] no  no

Training predictions

  [1] no  no  no  no  no  no  no  no  no  no  no  yes yes yes no  no  yes no 
 [19] no  yes no  yes no  no  no  no  no  no  no  yes no  no  no  yes no  no 
 [37] yes no  no  no  no  no  no  no  no  no  no  no  no  no  yes no  yes no 
 [55] no  no  no  no  no  no  no  no  no  no  no  yes no  no  no  no  no  no 
 [73] yes no  no  no  no  no  yes no  no  no  no  no  yes yes no  no  no  no 
 [91] yes no  no  no  no  no  no  no  no  yes no  no  no  no  no  yes no  yes
[109] no  yes no  no  no  no  no  no  no  no  no  yes no  no  no  no  no  no 
[127] no  yes no  yes no  no  no  no  no  no  no  no  no  no  no  no  no  yes
[145] no  no  no  no  no  yes yes no  no  yes yes no  no  no  yes no  no  no 
[163] yes no  no  no  no  no  no  no  no  no  yes no  no  yes no  no  yes no 
[181] no  no  yes no  no  no  no  no  no  no  no  no  no  no  no  no  no  yes
[199] no  no  no  yes no  no  yes no  no  no  yes no  yes no  no  no  no  yes
[217] no  no  no  no  no  no  no  no  yes yes no  no  no  no  no  no  no  no 
[235] no  yes yes no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[253] no  no  yes yes no  no  no  yes yes no  no  yes yes no  yes yes no  no 
[271] yes yes yes no  yes yes yes yes yes yes yes no  yes no  yes yes yes yes
[289] yes no  yes no  yes yes yes yes yes yes yes yes yes yes no  no  yes yes
[307] yes yes yes no  yes yes yes yes no  yes yes yes yes no  yes yes yes yes
[325] yes yes no  no  yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[343] yes yes yes yes yes yes yes no  yes yes yes yes yes yes yes yes yes yes
[361] yes yes no  yes yes yes yes yes yes yes yes yes yes yes no  yes yes no 
[379] no  yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes no  yes
[397] yes yes no  yes yes yes yes yes no  no  yes yes yes yes no  no  yes yes
[415] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[433] yes yes yes yes yes yes yes no  yes no  yes yes yes yes yes yes yes yes
[451] yes no  yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[469] yes yes yes no  yes no  yes yes yes no  no  yes yes yes yes no  yes no 
[487] no  yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[505] yes yes yes yes yes yes yes yes yes yes yes yes yes yes no  yes yes yes
[523] yes yes

It happens with any model combination. I don't know if there is something wrong with the predict method for ensembles.

zachmayer commented 7 years ago

This is a general problem with factors, caret, and caret ensemble.

Essentially, we factor your response, and then treat the first class as positive. In this case, it means "no" gets treated as the positive class.

At some point I need to work on fixing this, but in the meantime just reverse the levels of the predictions.
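Reversing the levels amounts to remapping each predicted label to the opposite one while keeping the original level order; a minimal sketch (the `preds` vector here is hypothetical):

```r
# Hypothetical predictions whose labels came out inverted
preds <- factor(c("no", "yes", "no", "no"), levels = c("no", "yes"))

# Swap each label for the opposite one, keeping the original level order
flipped <- factor(ifelse(preds == "yes", "no", "yes"),
                  levels = levels(preds))
```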

iamdeit commented 7 years ago

I still don't understand the behavior. I just executed another experiment with a different dataset and now the results are:

Training correct outcome:

  [1] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [19] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [37] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [55] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [73] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
 [91] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes
[109] yes yes yes yes yes no  no  no  no  no  no  no  no  no  no  no  no  no 
[127] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[145] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[163] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[181] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[199] no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[217] no  no  no  no  no  no  no  no  no  no

Training predictions:

  [1] yes yes no  yes yes yes yes yes yes yes no  no  yes yes no  yes yes yes
 [19] yes yes yes yes yes no  yes yes yes yes yes yes yes yes yes yes yes yes
 [37] yes yes yes yes no  yes no  yes yes yes yes yes yes yes yes yes yes yes
 [55] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes no  yes yes
 [73] yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes yes no  yes
 [91] yes no  yes yes yes yes yes yes yes yes yes yes no  yes no  yes yes yes
[109] yes yes yes yes yes no  no  no  no  yes no  no  no  no  no  no  no  no 
[127] no  no  no  no  yes no  no  no  yes no  no  no  no  no  no  no  no  yes
[145] no  no  no  no  no  no  no  no  yes no  no  no  no  no  no  no  no  no 
[163] no  no  yes no  no  no  no  no  no  no  no  no  no  no  no  no  no  no 
[181] no  no  no  no  no  no  no  yes no  no  no  no  no  no  no  no  no  no 
[199] no  yes no  no  no  no  yes no  no  no  yes no  no  yes no  no  no  no 
[217] no  yes no  no  no  no  no  no  no  no

What does the result depend on?

zachmayer commented 7 years ago

It turns out I don't understand the question.

Do you expect the predictions from the model to be perfect?

iamdeit commented 7 years ago

No no, you understood my question well in your first comment. I'm just saying that sometimes the predictions classify the data into the opposite class and sometimes they don't (like in my second example). So I'm not sure when I should reverse the levels of the predictions.

zachmayer commented 7 years ago

Could be a good model vs. a bad model, I dunno!

DLeirer commented 7 years ago

I don't think it's an issue of good vs. bad models. I had the same problem consistently. What I realized is that depending on how my data is cut, caretEnsemble either gets the class or the probability prediction wrong. By wrong, I mean the exact opposite of the consensus prediction (either probability or class) you would expect from the models the ensemble is built from.

At least that was my experience in version 2.0.0. It seems to work fine in the dev version.

NBRAYKO commented 7 years ago

I'm experiencing the same problem: the class labels of the outputs are sometimes inverted. It was easy to spot since my classes are very imbalanced, but I could have easily overlooked it if they weren't.

Here's my predict output from a stacked model:


> # Predict from Stack model------
> df_pred_temp <- predict(modStack,
+                         df_pred_clean,
+                         type = "raw",
+                         na.action = na.pass) %>% 
+   as.character() %>% 
+   data.frame("PredictedClass" = .)

> # Tabulate predicted classes (majority should be Completed)------
> df_pred_temp$PredictedClass %>% table
.
Completed   No.Show 
        9       239 

Very different output from the submodel (modList$gbm):


> # Run prediction-------
> df_pred_temp2 <- predict(modList$gbm,
+                          df_pred_clean,
+                          type = "raw",
+                          na.action = na.pass) %>% 
+   as.character() %>% 
+   data.frame("PredictedClass" = .)

> # Tabulate predicted classes of submodel (majority should be Completed)------
> df_pred_temp2$PredictedClass %>% table
.
Completed   No.Show 
      205        43 

In my experience this was only a problem with the ensembles. @zachmayer, is it also possible with the predict function from basic caret? How can this be avoided or fixed? I guess I could code the outcome as an integer instead of a factor?
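Coding the outcome as an integer would make caret treat the problem as regression, so a safer route is to fix the factor level order explicitly before training: caret treats the first factor level as the positive class. A sketch, using the two class names from the example above:

```r
# Example outcome with the class of interest ("Completed") listed first;
# caret treats the first factor level as the positive class.
outcome <- c("No.Show", "Completed", "Completed")
outcome <- factor(outcome, levels = c("Completed", "No.Show"))
```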

zachmayer commented 7 years ago

Can you try the dev version on github and see if you have the same problem?

NBRAYKO commented 7 years ago

Weird... same data, and I get

> table(df_pred_temp$PredictedClass)

Completed   No.Show 
      129       119 

I would have expected to get the result from my previous post, just with inverted labels. Would be curious to see what's different in the dev version, if there's a past issue or merge request you can point to. Thanks!

weilu commented 6 years ago

Was banging my head against the predictions and googling like crazy, and finally I'm here. Installing the dev version from GitHub fixed it for me. @zachmayer, can you release the dev version to CRAN?

bigfoot31 commented 5 years ago

Yes, I am also facing the same issue.

lawine90 commented 5 years ago

The same problem still occurs. Does anyone have a solution?

lawine90 commented 5 years ago

As a temporary workaround, I use the code below.

    # hp2.ens.mdl is a "caretList" result and hp2.ens.stk is a "caretStack" result.
    svmR <- predict(hp2.ens.mdl$svmRadial, hp2.test) %>% as.character()
    ens  <- predict(hp2.ens.stk, hp2.test) %>% as.character()

    # Agreement counts on the diagonal, disagreement off the diagonal
    dig  <- table(svmR, ens) %>% diag() %>% sum()
    odig <- sum(table(svmR, ens)[2, 1], table(svmR, ens)[1, 2])

    # If the stack disagrees with the submodel more often than it agrees,
    # assume the labels were flipped and swap them back
    if (odig > dig) {
      old <- c("X0", "X1")
      new <- c("X1", "X0")

      ens <- new[match(ens, old)]
    }
zachmayer commented 1 month ago

I think this issue was solved in 4.0, as I now use the base factor levels. Please make sure your target is a factor.

If you still experience this issue, please use the iris or Sonar dataset to write a minimal test case for it. Create a pull request adding your test case and I will fix it.

Your favorite LLM can help write the test case.
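A minimal test case along those lines might look like the sketch below, assuming caretEnsemble 4.x and the Sonar data from mlbench. The return type of `predict()` on a stack has changed across versions, so treat this as a starting point rather than a definitive reproduction:

```r
library(caret)
library(caretEnsemble)
data(Sonar, package = "mlbench")

set.seed(42)

# Sonar$Class is already a factor with levels "M" and "R"
models <- caretList(
  Class ~ ., data = Sonar,
  methodList = c("glm", "rpart"),
  trControl = trainControl(method = "cv", number = 5,
                           savePredictions = "final",
                           classProbs = TRUE)
)
stack <- caretStack(models, method = "glm")

# The stacked predictions should broadly agree with the observed classes,
# not be their mirror image. (Some versions return class probabilities
# from predict(); convert to classes as appropriate for your version.)
table(observed = Sonar$Class, predicted = predict(stack, newdata = Sonar))
```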