EpistasisLab / tpot

A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
http://epistasislab.github.io/tpot/
GNU Lesser General Public License v3.0

Question: How is the data split using cross validation #1326

Closed sch401 closed 1 year ago

sch401 commented 1 year ago

Hello. Recently, I have been using TPOT to optimize machine learning models. I am confused about how the data is split when cross-validation is introduced. The official example is:

from tpot import TPOTRegressor
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split

housing = load_boston()
X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target,
                                                    train_size=0.75, test_size=0.25, random_state=42)

tpot = TPOTRegressor(generations=5, population_size=50, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_boston_pipeline.py')

It can be seen that the original dataset is first split using train_test_split. If I introduce cross-validation in TPOTRegressor, the data used for cross-validation is only the training set, X_train and y_train.

However, the official scikit-learn user guide shows the whole dataset being split into training and testing sets when applying cross-validation: https://scikit-learn.org/stable/modules/cross_validation.html

Should I remove the code X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target, train_size=0.75, test_size=0.25, random_state=42) and pass the full dataset directly into TPOTRegressor when using cross-validation?

perib commented 1 year ago

TPOT uses sklearn.model_selection.KFold for regression and sklearn.model_selection.StratifiedKFold for classification. This is returned by check_cv in this line of the estimator, which returns an unshuffled splitter with the number of splits set by the cv parameter. Optionally, you can input your own splitter object, for example: cv = sklearn.model_selection.StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
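As a concrete illustration of this resolution step (using plain scikit-learn, since check_cv is an sklearn utility), the sketch below shows how an integer cv resolves to an unshuffled KFold or StratifiedKFold, while a splitter object you build yourself passes through unchanged:

```python
from sklearn.model_selection import check_cv, KFold, StratifiedKFold

# An integer cv resolves to an unshuffled splitter:
# KFold for regression, StratifiedKFold for classification.
cv_reg = check_cv(cv=5, y=None, classifier=False)
cv_clf = check_cv(cv=5, y=[0, 1, 0, 1, 0, 1], classifier=True)
print(type(cv_reg).__name__)   # KFold
print(type(cv_clf).__name__)   # StratifiedKFold

# A splitter object is returned as-is, so this is how you add
# shuffling or a fixed random_state to TPOT's cross-validation:
custom = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
assert check_cv(cv=custom) is custom
```

The same custom object can be passed as the cv parameter of TPOTRegressor or TPOTClassifier.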

The data that gets passed into the fit command is then split according to the cv parameter, and the cross-validation score (computed by cross_val_score) is computed for each pipeline.

The example you posted uses the same strategy described in your link. As shown in the figures there, the full data is first split into Train and Test sets. Then, the Train set is split into n folds for computing the cross-validation score.
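A minimal sketch of that two-step strategy with plain scikit-learn (toy arrays rather than the Boston data):

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold

X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# Step 1: hold out a test set from the full data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.75, test_size=0.25, random_state=42)

# Step 2: cross-validation folds are drawn from the training set only.
kf = KFold(n_splits=5)
for fold_train_idx, fold_val_idx in kf.split(X_train):
    # Each fold partitions just the training rows; the held-out
    # test rows are never seen during the pipeline search.
    assert len(fold_train_idx) + len(fold_val_idx) == len(X_train)

print(len(X_train), len(X_test))  # 37 13
```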

Whether you pass your entire dataset to TPOT or hold out a separate test set is up to you and depends on what you are trying to do. A held-out test set is useful for evaluating the final pipeline and comparing the performance of different pipelines.

It is important to note that TPOT can become very good at overfitting the CV score itself (particularly on small datasets): the final pipeline can have a high CV score yet generalize poorly to held-out data (or even to different CV shuffles of the same data). The held-out data is one way of estimating out-of-sample performance.
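A rough sketch of that comparison, using a plain Ridge regressor on synthetic data as a stand-in for an exported TPOT pipeline (the mechanics are the same for any fitted estimator):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_regression(n_samples=200, n_features=10, noise=5.0,
                       random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = Ridge()
# CV score is computed from the training set only, as TPOT does
# internally during the search.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)

# The held-out test set gives an independent out-of-sample estimate.
model.fit(X_train, y_train)
test_score = model.score(X_test, y_test)

# A large gap between the mean CV score and the test score is a sign
# the CV score is an optimistic estimate for this data.
print(round(cv_scores.mean(), 3), round(test_score, 3))
```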

sch401 commented 1 year ago

Thank you so much.