mlr-org / mlr3proba

Probabilistic Learning for mlr3
https://mlr3proba.mlr-org.com/
GNU Lesser General Public License v3.0

Add Survival -> Classification Pipeline #11

Closed: RaphaelS1 closed this 10 months ago

RaphaelS1 commented 5 years ago

Survival -> Classification, with arguments:

fkiraly commented 5 years ago

One thing to think about: the transformer makes sense in isolation, but when applied to the target of a survival model, it's a reduction wrapper.

The question here is: should the transformer and the reduction wrapper (application to target) be separate?

The case for "yes" would be that different transformers can be appended to the target to make a classifier out of a survival model.

fkiraly commented 5 years ago

Short question, since I'm not fully understanding what this design is doing: how would this be applied as the "adaptor" piece? I.e., what are arguments of constructor, what is the scitype of the result?

RaphaelS1 commented 5 years ago

> Short question, since I'm not fully understanding what this design is doing: how would this be applied as the "adaptor" piece? I.e., what are arguments of constructor, what is the scitype of the result?

I can't answer this yet, as I'm not familiar with mlr3pipelines. I think we should come back to this once I've had more time to familiarise myself with the current design.

fkiraly commented 5 years ago

what I mean: just in theory, leaving mlr3pipelines aside.

What is input, what is output, to the wrapper/compositor?

From your first post, it looks like input = classifier, output = survival model.

Do you not mean: reduction from classification to survival (and not from survival to classification), in which case you would get a classifier from time thresholding the target?
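To make the "time thresholding the target" direction concrete, here is a minimal sketch of turning a right-censored survival target `(time, status)` into a binary classification target at a horizon `tau`. This is in Python purely for illustration (mlr3proba is an R package, and none of these names exist in it). Note that observations censored before the horizon have no well-defined label and are simply dropped here; handling them in a principled way (e.g. via censoring weights) is exactly where the choice of transformer matters.

```python
# Illustrative sketch only, NOT mlr3proba code: threshold a survival
# target at horizon tau to produce a binary classification target.
def threshold_survival(times, status, tau):
    """Return (kept_indices, labels), where label 1 = event observed by tau."""
    kept, labels = [], []
    for i, (t, d) in enumerate(zip(times, status)):
        if t <= tau and d == 1:      # event observed before the horizon
            kept.append(i)
            labels.append(1)
        elif t > tau:                # known to be event-free at tau
            kept.append(i)
            labels.append(0)
        # else: censored before tau -> label unknown, observation dropped
    return kept, labels

times = [2, 5, 8, 3, 10]
status = [1, 0, 1, 0, 0]             # 1 = event, 0 = censored
kept, labels = threshold_survival(times, status, tau=6)
# subjects censored before tau=6 (indices 1 and 3) carry no label
```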

RaphaelS1 commented 5 years ago

This is a semantic issue that we often disagree over. But yes, I mean the reduction that solves the survival task (hence "from") with classification (hence "to").

fkiraly commented 5 years ago

> This is a semantic issue that we often disagree over.

Which way does reduction go? "From" the problem you solve, "to" the problem you solve it by. I still remember my CS prof criticizing me repeatedly for using it the wrong way round...

But if that's the way you mean, the direction seems incompatible with your class/API design, or at least it's incomplete.

My original question being, how precisely are you trying to reduce to classification? There doesn't seem to be just one way to translate predicted classification probabilities to a survival function.

RaphaelS1 commented 5 years ago

> My original question being, how precisely are you trying to reduce to classification? There doesn't seem to be just one way to translate predicted classification probabilities to a survival function.

And I think this returns us to the discussion about how mlr3pipelines works. In my mind this would look roughly like

Survival Task -> Surv2ClassTransformer -> Classification Task -> Classification Learner -> Classification Measure

Or depending on the reduction

Survival Task -> Surv2ClassTransformer -> Classification Task -> Classification Learner -> Composition to Survival -> Survival Measure

Where Surv2ClassTransformer is a generic parent transformer to:

  1. Classifier chain: Predict censoring then dead at X days/weeks/months/etc.
  2. Classifier chain: Predict dead then censoring at X days/weeks/months/etc.
  3. Rolling windows for every possible day/week/month/etc.

Of course there are many more, but the point is that the user selects which transformer, and thus which reduction, they want to use. The key is then to have a parallel step in the pipeline that maps the predictions back to the survival setting in a meaningful sense (this wouldn't work in 1. and 2. above, but it does in 3.).
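For illustration, transformer number 3 ("rolling windows") can be sketched as a person-period expansion: each subject contributes one row per interval they are at risk in, with a binary label "event in this interval". This is a hypothetical sketch in Python (mlr3proba is R, and the function below is invented for this discussion), not the actual pipeline code:

```python
# Hypothetical sketch of the "rolling windows" transformer: expand each
# subject into one classification row per interval at risk.
def expand_person_period(times, status, cuts):
    """cuts: right endpoints of the intervals.
    Returns (subject, interval, label) rows, label 1 = event in interval."""
    rows = []
    for i, (t, d) in enumerate(zip(times, status)):
        for j, c in enumerate(cuts):
            lo = cuts[j - 1] if j > 0 else 0
            if t <= lo:
                break                          # no longer at risk here
            event_here = (d == 1) and (lo < t <= c)
            rows.append((i, j, int(event_here)))
            if t <= c:
                break                          # left the risk set in this interval
    return rows

rows = expand_person_period([2, 5], [1, 0], cuts=[3, 6])
# subject 0 has an event in interval 0; subject 1 is at risk in both
# intervals and censored without an event
```

A probabilistic classifier fit on these rows predicts interval-wise conditional event probabilities, which is what makes the mapping back to a survival prediction possible.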

fkiraly commented 5 years ago

I think we need to be very careful about the types, and whether we are operating on tasks or learners.

Numbers 1 and 2 seem to change the task - you start with a survival task, then you end up with a (hierarchical or multi-target) classification task.

Number 3, as far as I understand, doesn't change the survival task, but leverages a probabilistic classification learner to solve it by bin-wise prediction.

Formally, only number 3 is "reduction", in the usual definition of solving task A by a simple (low-complexity) modification of a solution to task B, where task A = survival and task B = classification.
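As an illustration of why number 3 fits this definition: the "low-complexity modification" is just composing the classifier's bin-wise conditional event probabilities (discrete hazards h_j) back into a survival function via S(t_k) = prod over j <= k of (1 - h_j). A minimal sketch, in Python for illustration only (names invented, not mlr3proba API):

```python
# Illustrative compositional step for reduction number 3: discrete
# hazards from a probabilistic classifier -> survival function values.
def hazards_to_survival(hazards):
    """hazards: per-bin conditional event probabilities h_j.
    Returns S(t_k) = prod_{j<=k} (1 - h_j) for each bin k."""
    surv, s = [], 1.0
    for h in hazards:
        s *= 1.0 - h
        surv.append(s)
    return surv

# e.g. hazards 0.1, 0.2, 0.5 give survival 0.9, then 0.9*0.8, then 0.9*0.8*0.5
surv = hazards_to_survival([0.1, 0.2, 0.5])
```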

In terms of API, reduction naturally maps onto a wrapper or compositor of learners, i.e., a first-order model building operation.

Number 1 and number 2 are less clear, and I'm not sure whether the mlr3 API knows this as a generic concept, or whether this has been fixed in design documents.

There are two "natural" ways to turn this into API design that I can see:

(i) Operating on the level of task/data: a survival task is turned into a classification task.

(ii) Operating on the level of the method: the method operates on the survival task, but predicts "incomplete information" about the outcome. This would be a predict type corresponding to predicting the binned probability etc., with task parameters for where the bin is. There would have to be special losses for this special predict type, which in this case are just the classification losses, but coded to work in the context of the "reduced task".

I think number (i) is simpler in terms of API, since it requires only an operation on the existing task model. Number (ii) requires changes and extension in terms of task interface and metric interface.

fkiraly commented 5 years ago

Addendum: I think only number 3 is compatible with your "reduction" style interface suggestion. I don't see how numbers 1 and 2 would map onto that, so maybe you need to further explain or we need to further discuss.

RaphaelS1 commented 5 years ago

I think we should return to this once we start looking at pipelines in more detail. It may make sense to start with simpler workflows (e.g. boosting) before implementing reductions.

RaphaelS1 commented 4 years ago

Update: On hold until multi-label classif implemented