mlr-org / mlr3pipelines

Dataflow Programming for Machine Learning in R
https://mlr3pipelines.mlr-org.com/
GNU Lesser General Public License v3.0

Classbalancing pipe operator when used with resample #729

Closed jpconnel closed 1 year ago

jpconnel commented 1 year ago

Hello! I was hoping for some clarification on the intended order of operations when using the 'classbalancing' pipe operator and the resample() function through mlr3.

I am performing a binary logistic regression on a large dataset, and would like to downsample my majority class during training (and ideally during testing as well).

When I perform downsampling outside of the 'resample' function, I get the expected result (majority class frequency equal to 5 times the minority class).

The following yields results as desired with the class balancing pipe operator:

opb <- po("classbalancing", adjust = "downsample", reference = "minor", ratio = 5) %>>%
  po("encode", method = "treatment") %>>%
  po("scale") %>>%
  lrn("classif.cv_glmnet", predict_sets = c("test", "train"))
opb$keep_results <- TRUE
opbLearner <- as_learner(opb)
opbResult <- opbLearner$train(task_lr)
opbResult$graph$pipeops$classbalancing$.result$output

Graph data from classbalancing has dimensions of 702 x 16 as desired

However, when I resample the same graph learner, the output of the class balancing pipe operator appears to be resampled first (before the entire dataset is downsampled):

resampleTest <- mlr3::resample(task = task_lr, opbLearner, resampling = rsmp("subsampling", ratio = 0.7, repeats = 1))
resampleTest$learners[[1]]$graph$pipeops$classbalancing$.result$output

Graph data from classbalancing has dimensions of 2541608 x 16

Is there a way to ensure the downsampling is done before resampling? One of the purposes of also doing the resampling is to consider the effect of the downsampling.

mb706 commented 1 year ago

I think I misunderstood your question at first and typed out a response that does not answer it. However, I am going to post it as a reference for people who may come across this and wonder why the dimension of the $.result is so much larger in the second code snippet.


What is happening here is that resample() runs the Graph twice, once for training and once for predicting / inference on the "test" set. Since .result always contains the result from the last invocation of the Graph, you are seeing the classbalancing output of the $predict() call. Note that, during prediction, PipeOpClassBalancing does not modify the data at all, since the whole pipeline is required to make one prediction for each input sample. Removing samples during prediction would break this.
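A quick way to see this pass-through behavior is to call the PipeOp directly, outside of resample(). A minimal sketch, assuming your task_lr from above:

pop <- po("classbalancing", adjust = "downsample", reference = "minor", ratio = 5)
pop$train(list(task_lr))[[1]]$nrow    # downsampled: 702 rows in your example
pop$predict(list(task_lr))[[1]]$nrow  # unchanged: one row per input sample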

If you want to see the state of the Graph after training but before prediction in a resampling iteration, you can currently

debugonce(mlr3:::workhorse)

and then run your resample() call and step until after the line learner = learner_train(learner$clone(), ..... Here you will notice that learner$graph$pipeops$classbalancing$.result$output has around 491 rows (0.7 * 702), but with some random variation, since this depends on the number of minor class samples that make it into the training set.
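Put together, the whole workaround is (a sketch; the browser session itself is interactive):

debugonce(mlr3:::workhorse)
mlr3::resample(task = task_lr, opbLearner,
  resampling = rsmp("subsampling", ratio = 0.7, repeats = 1))
# in the browser, step with `n` until just past
#   learner = learner_train(learner$clone(), ...
# then inspect the training-time classbalancing output:
learner$graph$pipeops$classbalancing$.result$output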

I see that this workaround is a bit tedious, and I will think about making this more convenient in https://github.com/mlr-org/mlr3pipelines/issues/730.

mb706 commented 1 year ago

To get at your question,

Is there a way to ensure the downsampling is done before resampling? One of the purposes of also doing the resampling is to consider the effect of the downsampling.

You could call the downsampling PipeOp manually, e.g. using

task_lr_down <- po("classbalancing", ...)$train(list(task_lr))[[1]]

and then call resample() with that. We are thinking about making this invocation more convenient in the future.
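Filled in with the hyperparameters from your first snippet, a minimal sketch:

task_lr_down <- po("classbalancing", adjust = "downsample",
  reference = "minor", ratio = 5)$train(list(task_lr))[[1]]
resampleDown <- mlr3::resample(task = task_lr_down, opbLearner,
  resampling = rsmp("subsampling", ratio = 0.7, repeats = 1))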

However, make sure you know what you are doing here, methodology-wise! (I am not sure how aware you are of the following, so if you already know this, apologies for the repetition.)

You use resampling to estimate the performance of a machine learning training-inference workflow (consisting of preprocessing like subsampling, feature encoding etc., and finally fitting and inference of the "Learner" model) on a given dataset. Subsampling the data and then running resample() would therefore answer the question "what if I ran my method in a world where the data was more evenly balanced", not "how does subsampling influence the performance of my method".

E.g. if you measure accuracy and use lrn("classif.featureless") (majority prediction), you will get an accuracy of (I think) around 0.999954 using your resample() call above with the extremely imbalanced dataset. If you create task_lr_down and resample on that ("downsampling before resampling"), your resulting accuracy will be around 0.833. I would call the difference between these numbers not the "effect of downsampling", but the "effect of having more or less imbalanced data" (simulated through downsampling).

Methodologically, things can get much worse when you do other preprocessing operations "before" resampling (e.g. imputation), since these risk leaking information about your test set into the training set and can give you a severely optimistic bias.
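A minimal sketch of that featureless baseline comparison (assuming task_lr and task_lr_down exist as above; the exact numbers depend on your data):

fl <- lrn("classif.featureless")  # always predicts the majority class
rr_full <- mlr3::resample(task_lr, fl, rsmp("subsampling", ratio = 0.7, repeats = 1))
rr_full$aggregate(msr("classif.acc"))  # ~0.9999 on the raw, heavily imbalanced task
rr_down <- mlr3::resample(task_lr_down, fl, rsmp("subsampling", ratio = 0.7, repeats = 1))
rr_down$aggregate(msr("classif.acc"))  # ~0.833, i.e. about 5/6, with ratio = 5 downsampling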

jpconnel commented 1 year ago

Thank you for the detailed description of what is going on and the meaning of .result - that makes a lot more sense now!

I hadn't thought about performing the pipe operation beforehand on the task - that is a great solution. In the meantime I had found a workaround by performing downsampling outside mlr3 and generating a ResampleResult using as_result_data() - but the solution described above is much cleaner.

And thank you for the comments and suggestions regarding methodology - as you showed, this is especially important to keep in mind with imbalanced datasets.