scikit-learn / enhancement_proposals

Enhancement proposals for scikit-learn: structured discussions and rationale for large additions and modifications
https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest
BSD 3-Clause "New" or "Revised" License

SLEP needed: fit_transform does something other than fit(X).transform(X) #12

Open amueller opened 5 years ago

amueller commented 5 years ago

This is required for stacking and leave-one-out target encoders if we want a nice design. We already kind of do this in some places but don't have a coherent contract. I think a SLEP would be nice.
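To make that concrete, here is a minimal sketch of a stacking-style transformer where `fit_transform(X, y)` cannot just be `fit(X, y).transform(X)`: the training-time output has to come from out-of-fold predictions to avoid leaking `y`. The class name and structure are only illustrative, not a proposed API.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin, clone
from sklearn.model_selection import cross_val_predict


class CVStackingTransformer(BaseEstimator, TransformerMixin):
    """Illustrative only: stacked features via cross-validated predictions."""

    def __init__(self, estimator, cv=5):
        self.estimator = estimator
        self.cv = cv

    def fit(self, X, y):
        # Fit the base estimator on all training data; used by transform().
        self.estimator_ = clone(self.estimator).fit(X, y)
        return self

    def transform(self, X):
        # Test time: plain predictions from the fully fitted base estimator.
        return np.asarray(self.estimator_.predict(X)).reshape(-1, 1)

    def fit_transform(self, X, y):
        # Training time: out-of-fold predictions, which differ from
        # fit(X, y).transform(X) -- the latter would leak y into the features.
        self.fit(X, y)
        oof = cross_val_predict(clone(self.estimator), X, y, cv=self.cv)
        return np.asarray(oof).reshape(-1, 1)
```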

raamana commented 5 years ago

A FitTransformMixin common to all estimators would solve it perhaps?

amueller commented 5 years ago

How so? The point is whether we can adjust the contract / user expectations. Either we need to redefine the contract or we need to provide an interface. The implementation is the easy part (that's probably true for all the SLEPs).
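For reference, the `fit_transform` that `TransformerMixin` already provides is (slightly simplified) nothing more than `fit` followed by `transform`, which is why the question is really the contract rather than the code:

```python
class TransformerMixin:
    # Simplified version of scikit-learn's TransformerMixin.fit_transform:
    # by default it is literally fit followed by transform, i.e. exactly the
    # equivalence this issue proposes to relax.
    def fit_transform(self, X, y=None, **fit_params):
        if y is None:
            return self.fit(X, **fit_params).transform(X)
        return self.fit(X, y, **fit_params).transform(X)
```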

jnothman commented 5 years ago

In terms of "we kind of do this in some places", the handling of training points in kneighbors/radius_neighbors is a fair example of something similar.
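For instance, `NearestNeighbors.kneighbors` already behaves differently depending on whether the training set is queried implicitly or explicitly:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [2.0], [10.0]])
nn = NearestNeighbors(n_neighbors=1).fit(X)

# Implicit query (X=None): each training point's own entry is excluded,
# so its nearest neighbor is some *other* training point.
print(nn.kneighbors(return_distance=False))

# Explicit query with the same data: each point is its own nearest neighbor.
print(nn.kneighbors(X, return_distance=False))
```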

The places we have encountered such transforms include StackingTransformer (https://github.com/scikit-learn/scikit-learn/pull/8960) and NearestNeighborsTransformer (https://github.com/scikit-learn/scikit-learn/pull/10482). Both could be redesigned as meta-estimators, but would lose some conceptual simplicity.

We could also handle this case more explicitly by implementing fit_resample where the estimator can use transform at test time.
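One way to read that (just a sketch, none of it an existing API): a resampler-style estimator uses `fit_resample` on the training pass, where it may change the number of samples and return `y`, while held-out data still goes through a plain sample-aligned `transform`:

```python
import numpy as np
from sklearn.base import BaseEstimator


class EveryOtherSampleResampler(BaseEstimator):
    """Toy example: keep every other row at training time, identity at test time."""

    def fit(self, X, y=None):
        return self

    def fit_resample(self, X, y):
        # Training time: free to change the number of samples and to return y.
        X, y = np.asarray(X), np.asarray(y)
        return X[::2], y[::2]

    def transform(self, X):
        # Test time: sample-aligned, one output row per input row.
        return np.asarray(X)
```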

amueller commented 4 years ago

I just realized that this is the place where I should have put https://github.com/scikit-learn/scikit-learn/issues/15553

Basically, what MLR does is have fit return the training-time version of the transform output, which solves the problem without introducing a new verb/method.

However, method chaining (fit returning self) is so idiomatic in sklearn that I don't really see how to get around this. We could also "just" introduce a different verb/method, say forward, that does the training-time transformation and returns X and y, and is used in meta-estimators (mostly pipeline).

That would also solve #15.
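To sketch what that could look like (`forward` is purely hypothetical here, and so is the pipeline pseudo-code): a pipeline-like meta-estimator would call `forward(X, y)` while fitting and keep calling `transform(X)` at prediction time.

```python
def fit_pipeline(steps, X, y):
    # steps: list of (name, estimator) pairs, last one being the final estimator.
    *transformers, (_final_name, final_est) = steps
    for _name, step in transformers:
        step.fit(X, y)
        if hasattr(step, "forward"):
            X, y = step.forward(X, y)   # hypothetical training-time verb, may alter X and y
        else:
            X = step.transform(X)       # ordinary sample-aligned transform
    final_est.fit(X, y)
    return steps


def predict_pipeline(steps, X):
    *transformers, (_final_name, final_est) = steps
    for _name, step in transformers:
        X = step.transform(X)           # test time always uses transform
    return final_est.predict(X)
```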

amueller commented 4 years ago

Actually, SLEP 001 (#2) basically implements something like this, except that SLEP 001 also has a test-time version of it, which I think is a bit weird; or rather, it conflates two issues: distinguishing training vs. test time, and returning a complex object / tuple.

amueller commented 3 years ago

There's a relevant discussion happening here: https://github.com/scikit-learn/enhancement_proposals/pull/15#issuecomment-623008833

Maybe it's fair to say that there are three extensions to transform we're considering:

1. doing something that destroys the sample-wise alignment of input and output
2. doing something different on training and test data
3. returning something other than the transformed data (e.g. targets)

We're doing all three in SLEP 5, but with a focus on doing non-sample-aligned transformations, and only allowing somewhat trivial versions of 2 and 3.

We need a more elaborate version of 2) for the target encoder (which I've brought up many times now, lol): https://github.com/scikit-learn/scikit-learn/pull/17323
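The target encoder is the clearest case of 2): at training time the encoding for row i should not use y[i] (leave-one-out or cross-fitted), while at test time the full training statistics are used. A rough sketch, not the API in that PR:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class LOOTargetEncoder(BaseEstimator, TransformerMixin):
    """Illustrative leave-one-out target encoder for one categorical column."""

    def fit(self, X, y):
        X, y = np.asarray(X).ravel(), np.asarray(y, dtype=float)
        # Mean target per category, estimated from all training rows.
        self.encoding_ = {c: y[X == c].mean() for c in np.unique(X)}
        return self

    def transform(self, X):
        # Test time: look up the full-data means (unseen categories not handled here).
        X = np.asarray(X).ravel()
        return np.array([self.encoding_[c] for c in X]).reshape(-1, 1)

    def fit_transform(self, X, y):
        # Training time: leave-one-out means, so row i never sees its own y[i];
        # this is *not* the same as fit(X, y).transform(X).
        self.fit(X, y)
        X, y = np.asarray(X).ravel(), np.asarray(y, dtype=float)
        out = np.empty(len(X))
        for c in np.unique(X):
            mask = X == c
            total, count = y[mask].sum(), mask.sum()
            # Singleton categories fall back to the global mean.
            out[mask] = (total - y[mask]) / (count - 1) if count > 1 else y.mean()
        return out.reshape(-1, 1)
```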

SLEP 1 tries to do all of these together, which I think might not be necessary (we couldn't come up with good examples).

We could achieve stacking, target transformers, and more straightforward handling of the neighbor graphs by:

cc @GaelVaroquaux who would like to move this forward :)

amueller commented 3 years ago

A good point that @GaelVaroquaux brought up, which was probably also made in the other thread: we don't want to allow fit_resample in a ColumnTransformer, but we do want to allow the target encoder.

jnothman commented 3 years ago

Yes, I now think sample-aligned transformation is the key difference from resampling.
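Stated as contracts (informally; these are not actual estimator checks): a sample-aligned transform must return one output row per input row, which is what lets it live inside a ColumnTransformer, while fit_resample is explicitly allowed to break that invariant and only has to keep X and y aligned with each other.

```python
import numpy as np


def check_sample_aligned(transformer, X, y):
    # Sample-aligned contract: row i of the output still describes sample i of
    # the input, so the number of rows is preserved (columns may change freely).
    Xt = transformer.fit_transform(X, y)
    assert np.asarray(Xt).shape[0] == np.asarray(X).shape[0]


def check_resampler(resampler, X, y):
    # Resampling contract: the number of rows may change, but the returned X
    # and y must stay aligned with each other.
    Xt, yt = resampler.fit_resample(X, y)
    assert np.asarray(Xt).shape[0] == np.asarray(yt).shape[0]
```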