Open amueller opened 5 years ago
A FitTransformMixin common to all estimators would solve it, perhaps?
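A minimal sketch of what such a mixin might look like (the names `FitTransformMixin` and `Center` are hypothetical, not existing sklearn API): the mixin provides the default `fit(X).transform(X)` behavior, and estimators whose training-time output must differ would override `fit_transform` itself.

```python
class FitTransformMixin:
    """Hypothetical mixin giving every estimator a fit_transform.

    Estimators whose training-time output differs from
    fit(X).transform(X) (stacking, leave-one-out target encoding)
    would override fit_transform directly.
    """

    def fit_transform(self, X, y=None):
        # Default contract: fit, then transform the same data.
        return self.fit(X, y).transform(X)


class Center(FitTransformMixin):
    """Toy transformer: subtracts the mean learned in fit."""

    def fit(self, X, y=None):
        self.mean_ = sum(X) / len(X)
        return self

    def transform(self, X):
        return [x - self.mean_ for x in X]
```

For `Center` the default is fine; the interesting question in this thread is what happens when it is not.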
How so? The point is whether we can adjust the contract / user expectations: either we need to redefine the contract or we need to provide an interface. The implementation is the easy part (that's probably true for all the SLEPs).
In terms of "we kind of do this in some places", the handling of training points in kneighbors/radius_neighbors is a fair example of something similar.
The places we have encountered such transforms include StackingTransformer (https://github.com/scikit-learn/scikit-learn/pull/8960) and NearestNeighborsTransformer (https://github.com/scikit-learn/scikit-learn/pull/10482). Both could be redesigned as meta-estimators, but would lose some conceptual simplicity.
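To illustrate why stacking is one of these transforms, here is a hedged toy sketch (not the PR's API): at training time the output must be out-of-fold predictions, otherwise the next layer sees predictions from a model that was fit on those very samples; at test time the model fit on all the data predicts directly. `MeanModel` and `StackingSketch` are invented names for illustration.

```python
class MeanModel:
    """Toy base model: predicts the training-set mean of y."""

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_] * len(X)


class StackingSketch:
    """Training-time output = out-of-fold predictions (no leakage);
    test-time output = predictions of a model fit on all data."""

    def __init__(self, make_model, n_folds=2):
        self.make_model = make_model
        self.n_folds = n_folds

    def fit_transform(self, X, y):
        n = len(X)
        out = [None] * n
        for k in range(self.n_folds):
            test_idx = list(range(k, n, self.n_folds))
            train_idx = [i for i in range(n) if i not in test_idx]
            model = self.make_model().fit(
                [X[i] for i in train_idx], [y[i] for i in train_idx]
            )
            for i, p in zip(test_idx, model.predict([X[i] for i in test_idx])):
                out[i] = p
        # Model used at test time is fit on the full training set.
        self.model_ = self.make_model().fit(X, y)
        return out

    def transform(self, X):
        return self.model_.predict(X)
```

Note that `fit(X, y).transform(X)` would give a different (leaky) result than `fit_transform(X, y)` here, which is exactly the contract problem under discussion.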
We could also handle this case more explicitly by implementing fit_resample where the estimator can use transform at test time.
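Under that proposal the contract could look roughly like this (a sketch under assumptions, not a settled API): `fit_resample` handles the training data and may change the number of samples or return a modified `y`, while plain `transform` stays sample-aligned for test time.

```python
class DeduplicateAndShift:
    """Hypothetical estimator following the proposed contract:
    fit_resample for training data (may change n_samples and y),
    transform for test data (sample-aligned)."""

    def fit_resample(self, X, y):
        # Training time: drop duplicate samples, which destroys the
        # input/output sample alignment -- hence a separate verb.
        seen, Xt, yt = set(), [], []
        for xi, yi in zip(X, y):
            if xi not in seen:
                seen.add(xi)
                Xt.append(xi)
                yt.append(yi)
        self.offset_ = min(Xt)
        return [x - self.offset_ for x in Xt], yt

    def transform(self, X):
        # Test time: one output row per input row, no resampling.
        return [x - self.offset_ for x in X]
```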
I just realized that this is the place where I should have put https://github.com/scikit-learn/scikit-learn/issues/15553
Basically what MLR does is that fit returns the training-time version of transform, which solves the problem without introducing a new verb/method.
However, method chaining is so idiomatic in sklearn that I don't really see how to get around this. We could also "just" introduce a different verb/method, say forward, that does the training-time transformation and returns X and y, to be used in meta-estimators (mostly Pipeline).
That would also solve #15.
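A minimal sketch of how a hypothetical `forward` verb could be consumed by a pipeline (all names here are invented for illustration): meta-estimators call `forward` on the way down during fit, getting back both `X` and a possibly modified `y`, and call plain `transform` at test time.

```python
class Shift:
    """Toy step: forward is the training-time transformation and may
    also return a modified y; transform is the test-time version."""

    def forward(self, X, y):
        self.offset_ = min(X)
        return [x - self.offset_ for x in X], y

    def transform(self, X):
        return [x - self.offset_ for x in X]


def pipeline_fit_transform(steps, X, y):
    # During fit, the meta-estimator chains forward calls, threading
    # y through so a step could resample or re-encode it.
    for step in steps:
        X, y = step.forward(X, y)
    return X, y


def pipeline_transform(steps, X):
    # At test time, only sample-aligned transform is used.
    for step in steps:
        X = step.transform(X)
    return X
```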
Actually, SLEP 001 (#2) basically implements something like this, only SLEP 001 also has a test-time version of it, which I think is a bit weird; or rather, it conflates two issues: distinguishing training vs. test time, and returning a complex object / tuple.
There's a relevant discussion happening here: https://github.com/scikit-learn/enhancement_proposals/pull/15#issuecomment-623008833
Maybe it's fair to say that there are three extensions to transform we're considering: 1) doing something that destroys the sample-wise alignment of input and output, 2) doing something different on training and test data, 3) returning something other than the transformed data (i.e. targets etc.).
We're doing all three in SLEP 5, but with a focus on doing non-sample-aligned transformations, and only allowing somewhat trivial versions of 2 and 3.
We need some more elaborate version of 2) for the target encoder (that I've brought up many times now lol): https://github.com/scikit-learn/scikit-learn/pull/17323
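To make concrete why the target encoder needs a more elaborate version of 2): at training time each sample's encoding has to exclude that sample's own target (leave-one-out), otherwise `y_i` leaks into `x_i`; at test time the plain per-category target mean is used. A hedged sketch under those assumptions (not the implementation in the linked PR):

```python
from collections import defaultdict


class LeaveOneOutTargetEncoder:
    """Sketch of a leave-one-out target encoder: the training-time
    output differs per-sample from the test-time transform."""

    def fit(self, X, y):
        self.sum_ = defaultdict(float)
        self.count_ = defaultdict(int)
        for cat, target in zip(X, y):
            self.sum_[cat] += target
            self.count_[cat] += 1
        return self

    def fit_transform(self, X, y):
        # Training time: leave each sample's own target out of its
        # category mean so the encoding does not leak y_i into x_i.
        self.fit(X, y)
        return [
            (self.sum_[cat] - target) / (self.count_[cat] - 1)
            if self.count_[cat] > 1 else 0.0
            for cat, target in zip(X, y)
        ]

    def transform(self, X):
        # Test time: plain per-category target mean.
        return [self.sum_[cat] / self.count_[cat] for cat in X]
```

The output stays sample-aligned (extension 1 is not needed), but `fit_transform` and `fit().transform()` genuinely disagree, which is extension 2.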
SLEP 1 tries to do all of these together, which I think might not be necessary (we couldn't come up with good examples).
We could achieve stacking and target transformers and more straightforward handling of the neighbor graphs by using fit_resample to do the training-set transformation. In that case I still think it's weird to call the method resample. We'd also have the potential to modify y during training. cc @GaelVaroquaux, who would like to move this forward :)
Good point that @GaelVaroquaux brought up, which was probably also in the other thread: we don't want to allow fit_resample in a ColumnTransformer, but we do want to allow the target encoder.
Yes, I now think sample-aligned transformation is the key difference from resampling.
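The ColumnTransformer case shows why sample alignment is the dividing line: per-column outputs are stacked side by side, so every block must return exactly one row per input sample. A toy sketch of that constraint (a hypothetical helper, not the sklearn implementation):

```python
def column_transform_fit(transformers, columns, X, y):
    """Fit each transformer on its column subset and hstack the
    outputs; a resampler that changes n_samples cannot fit here."""
    blocks = []
    for trans, cols in zip(transformers, columns):
        Xc = [[row[c] for c in cols] for row in X]
        out = trans.fit_transform(Xc, y)
        if len(out) != len(X):
            raise ValueError(
                "transform changed n_samples; cannot stack its "
                "output next to the other column blocks"
            )
        blocks.append(out)
    # Horizontally concatenate the per-block outputs row by row.
    return [sum((list(b[i]) for b in blocks), []) for i in range(len(X))]
```

A sample-aligned training-time transform (like the target encoder) satisfies the length check even though its training output differs from `fit().transform()`; a resampler does not.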
This is required for stacking and leave-one-out target encoders if we want a nice design. We already kind of do this in some places but don't have a coherent contract. I think a SLEP would be nice.