scikit-learn-contrib / imbalanced-learn

A Python Package to Tackle the Curse of Imbalanced Datasets in Machine Learning
https://imbalanced-learn.org
MIT License

n_neighbors inconsistency #601

Open MattEding opened 5 years ago

MattEding commented 5 years ago

Description

All the following classes use n_neighbors:

Whereas k_neighbors is used with SMOTE and all its variants.

This poses a problem with duck-typing and pipelines.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

from imblearn.pipeline import Pipeline
from imblearn.over_sampling import ADASYN
from imblearn.over_sampling import SMOTE

X, y = ...

smote = SMOTE()
adasyn = ADASYN()
logreg = LogisticRegression()

smote_pipe = Pipeline([('sampler', smote), ('classifier', logreg)])
adasyn_pipe = Pipeline([('sampler', adasyn), ('classifier', logreg)])

params = dict(sampler__n_neighbors=range(3, 6))
smote_grid = GridSearchCV(smote_pipe, params)
adasyn_grid = GridSearchCV(adasyn_pipe, params)

# fails due to k_neighbors instead of n_neighbors
# I am forced to make a new params dict
smote_grid.fit(X, y)

# succeeds
adasyn_grid.fit(X, y)

Expected Results

SMOTE would benefit from using n_neighbors, giving a consistent API.
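One way the rename could be staged is the usual accept-both-and-warn pattern. The sketch below is hypothetical (`ToySampler` is not imblearn code, and a real scikit-learn-style estimator would resolve the deprecated value in `fit` rather than `__init__` to keep `get_params`/`set_params` intact); it only illustrates the deprecation mechanics mentioned above.

```python
import warnings


class ToySampler:
    """Toy sketch of renaming k_neighbors to n_neighbors over a
    deprecation cycle: accept both names, warn on the old one, and
    resolve to the new one. Not imblearn's actual implementation."""

    def __init__(self, n_neighbors=5, k_neighbors="deprecated"):
        if k_neighbors != "deprecated":
            warnings.warn(
                "'k_neighbors' was renamed to 'n_neighbors' and will be "
                "removed after two releases; use 'n_neighbors' instead.",
                FutureWarning,
            )
            # old name wins so existing code keeps working during the cycle
            n_neighbors = k_neighbors
        self.n_neighbors = n_neighbors
```

With this in place, `ToySampler(k_neighbors=3)` still works but emits a `FutureWarning`, while `ToySampler(n_neighbors=3)` is the forward-compatible spelling.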

Versions

Darwin-18.7.0-x86_64-i386-64bit Python 3.7.3 | packaged by conda-forge | (default, Jul 1 2019, 14:38:56) [Clang 4.0.1 (tags/RELEASE_401/final)] NumPy 1.17.1 SciPy 1.3.1 Scikit-Learn 0.21.3 Imbalanced-Learn 0.5.0

glemaitre commented 5 years ago

I see, that could make sense. It would take 2 versions for the deprecation cycle. However, you still have other neighbors parameters in the SMOTE variants as well, which could also be an issue.

You could always create your grid on the fly:

for pipe in [smote_pipe, adasyn_pipe]:
    # collect every parameter of this pipeline whose name mentions 'neighbors'
    neighbors_param_names = [p for p in pipe.get_params() if 'neighbors' in p]
    params = {p: range(3, 6) for p in neighbors_param_names}
    gs_pipe = GridSearchCV(pipe, params)
    gs_pipe.fit(X, y)

MattEding commented 5 years ago

I would argue that the extra m_neighbors parameter in SVMSMOTE and BorderlineSMOTE has a different meaning than the n_neighbors/k_neighbors found in the other algorithms (and in those two themselves). n_neighbors/k_neighbors are used only for finding neighbors, whereas m_neighbors looks to me like it is used for flagging samples as 'danger' or 'noise'.

I know this is a minor issue that has simple workarounds, but I felt that it was worth marking as an issue nonetheless.
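Given that distinction, a workaround grid can target only the interpolation parameter and leave m_neighbors alone. The helper below is hypothetical (`neighbors_param_grid` is not part of imblearn, and `FakePipeline` merely stands in for a real `Pipeline.get_params()`); it sketches building a param dict keyed to whichever name the sampling step exposes, n_neighbors or k_neighbors, while skipping m_neighbors.

```python
def neighbors_param_grid(pipeline, values, step="sampler"):
    """Build a grid-search param dict for the sampling step's
    neighbor-count parameter, whatever it is named (n_neighbors or
    k_neighbors), deliberately excluding m_neighbors."""
    prefix = step + "__"
    wanted = {"n_neighbors", "k_neighbors"}
    return {
        name: values
        for name in pipeline.get_params()
        if name.startswith(prefix) and name[len(prefix):] in wanted
    }


# Stand-in for a real Pipeline's get_params(), for demonstration only.
class FakePipeline:
    def get_params(self):
        return {
            "sampler__k_neighbors": 5,
            "sampler__m_neighbors": 10,
            "classifier__C": 1.0,
        }


grid = neighbors_param_grid(FakePipeline(), range(3, 6))
# grid now maps only 'sampler__k_neighbors' to the candidate values
```

The same call works unchanged for an ADASYN pipeline, where the matching key would be `sampler__n_neighbors`.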

glemaitre commented 4 years ago

We could think about modifying this in 1.X, since we will then have more freedom to break the API.

MattEding commented 4 years ago

Additionally, I recently noticed the inconsistency also occurs in the fitted attributes: self.nn_ vs. self.nn_k_ for non-SMOTE and SMOTE samplers, respectively.

rola93 commented 4 years ago

Hey! I came here from #680.

Thanks for your answer.

I know it's somewhat complex and needs a full deprecation cycle (waiting for two releases), but is it going to get started?

Thanks