ModelOriented / modelStudio

📍 Interactive Studio for Explanatory Model Analysis
https://doi.org/10.1007/s10618-023-00924-w
GNU General Public License v3.0

Why are Break Down plot intercepts the same for different regression models? #114

Closed · bgu1997 closed this issue 1 year ago

bgu1997 commented 1 year ago

I'm using glmnet to create an elastic net regression and a LASSO regression. I take each resulting model, create an explainer object, and then use the explainer object in modelStudio. My question is: why are the intercepts of the Break Down plots for the two models the same (20.091)? I have manually checked the intercepts of the two models using coef(model$finalModel, model$bestTune$lambda), and they are different, so I was not expecting both intercepts to be 20.091.

```r
# modelStudio with Elastic Net on mtcars
library(DALEX)
library(modelStudio)
library(caret)
library(glmnet)
library(doParallel)  # provides makePSOCKcluster() and registerDoParallel()

data(mtcars)

set.seed(42)

cluster_num <- 10
cl <- makePSOCKcluster(cluster_num)
registerDoParallel(cl)

model <- train(
  form = mpg ~ .,
  data = mtcars,
  method = 'glmnet',
  trControl = trainControl(method = 'LOOCV'),
  tuneLength = 100
)

stopCluster(cl)

explainer <- explain(model,
                     data = mtcars,
                     y = mtcars$mpg,
                     label = 'Elastic Net')

modelStudio(explainer, mtcars,
            facet_dim = c(1, 1),
            options = ms_options(ms_title = ''))
```

```r
# modelStudio with LASSO on mtcars
data(mtcars)

set.seed(42)

cluster_num <- 10
cl <- makePSOCKcluster(cluster_num)
registerDoParallel(cl)

model <- train(
  form = mpg ~ .,
  data = mtcars,
  method = 'glmnet',
  trControl = trainControl(method = 'LOOCV'),
  tuneGrid = expand.grid(alpha = 1, lambda = seq(0, 100, by = 1))
)

stopCluster(cl)

explainer <- explain(model,
                     data = mtcars,
                     y = mtcars$mpg,
                     label = 'LASSO')

modelStudio(explainer, mtcars,
            facet_dim = c(1, 1),
            options = ms_options(ms_title = ''))
```

hbaniecki commented 1 year ago

Hi @bgu1997, the intercept is the mean prediction on the input dataset (or a subset of it). Running your code indeed gives (almost) the same intercepts, see:

```r
library(DALEX)
library(modelStudio)
library(caret)
library(glmnet)

data(mtcars)

set.seed(42)

model <- train(
  form = mpg ~ .,
  data = mtcars,
  method = 'glmnet',
  trControl = trainControl(method = 'LOOCV'),
  tuneLength = 100
)

explainer <- explain(model,
                     data = mtcars,
                     y = mtcars$mpg,
                     label = 'Elastic Net')

mean(predict(explainer$model, explainer$data)) # 20.09063

set.seed(42)

model2 <- train(
  form = mpg ~ .,
  data = mtcars,
  method = 'glmnet',
  trControl = trainControl(method = 'LOOCV'),
  tuneGrid = expand.grid(alpha = 1,
                         lambda = seq(0, 100, by = 1))
)

explainer2 <- explain(model2,
                      data = mtcars,
                      y = mtcars$mpg,
                      label = 'LASSO')

mean(predict(explainer2$model, explainer2$data)) # 20.09062
```
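
Note that this Break Down intercept is not the regression intercept returned by coef(), which does differ between the two models, as you observed. A quick sketch of that contrast, reusing the objects above:

```r
# the fitted models' own regression intercepts differ...
coef(model$finalModel, s = model$bestTune$lambda)["(Intercept)", ]
coef(model2$finalModel, s = model2$bestTune$lambda)["(Intercept)", ]

# ...while the break down plots of both models start from the mean
# prediction on the data in the explainer, computed above
```
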
bgu1997 commented 1 year ago

I apologize that I can't include my own data because it contains patient information. I looked at the output of mean(predict(explainer$model, explainer$data)) for my two models. The intercept of the LASSO modelStudio plot matches the mean prediction for the LASSO model. However, the intercept of the Elastic Net modelStudio plot is the same as the LASSO one, yet the mean prediction returned by mean(predict(explainer$model, explainer$data)) differs from that of the LASSO.

hbaniecki commented 1 year ago

As per the documentation (https://modelstudio.drwhy.ai/articles/ms-perks-features.html#more-calculations-means-more-time), modelStudio has a default parameter N = 300: 300 observations are sampled from explainer$data to speed up the estimation of explanations.

Perhaps pass a subset of the data, e.g. 500 rows, to explain() and then use modelStudio(..., N = NULL) so that all of the data passed to the explainer is used. Check whether mean(predict(explainer$model, explainer$data)) now equals the intercept. A minimal sketch of that check, reusing the mtcars explainer from above (with your data, a subset would replace mtcars):
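
```r
# pass the (subset of) data to the explainer as before
explainer <- explain(model,
                     data = mtcars,
                     y = mtcars$mpg,
                     label = 'Elastic Net')

# N = NULL turns off the default sampling of 300 observations,
# so all rows in explainer$data are used for the explanations
modelStudio(explainer, mtcars, N = NULL)

# this value should now equal the break down intercept
mean(predict(explainer$model, explainer$data))
```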

hbaniecki commented 1 year ago

If you can find a reproducible example on one observation of interest, it would be easier to debug the iBreakDown::break_down() function (https://modeloriented.github.io/iBreakDown/reference/break_down.html) rather than the whole modelStudio(). A minimal sketch on a single observation, reusing the mtcars explainer from above:
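
```r
library(iBreakDown)

# explain one observation of interest; the first row of the result is
# the intercept, i.e. the mean prediction on explainer$data
bd <- break_down(explainer, mtcars[1, ])
bd
plot(bd)
```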

bgu1997 commented 1 year ago

By 300 observations, does that mean the sampling is agnostic to the dimensions of the data? My data has only 50 rows but more than 300 columns. How should I perform the test with explain() and modelStudio(..., N = NULL) in my scenario?

Sorry, I'm an amateur data scientist, and I very much appreciate your help debugging.

hbaniecki commented 1 year ago

@bgu1997, unfortunately, I don't quite understand your goal.

Can you reproduce the potential bug on one of the openly-available datasets?

bgu1997 commented 1 year ago

Sorry for the delay; I wasn't able to recreate the bug on openly available datasets. I will reach out again if further assistance is needed. Thank you very much for answering my questions up to this point!

hbaniecki commented 1 year ago

No problem. If it comes up again, best to tag me (@hbaniecki) here or submit a new issue.