Closed sch401 closed 1 year ago
TPOT uses sklearn.model_selection.KFold for regression and sklearn.model_selection.StratifiedKFold for classification. The splitter is returned by check_cv inside the estimator, which yields an unshuffled splitter with the number of splits set by the cv parameter. Optionally, you can pass your own splitter object instead, for example: cv = sklearn.model_selection.StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
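To illustrate the behavior described above, here is a small sketch using sklearn's check_cv directly (the same helper TPOT relies on); the example data here is made up for demonstration:

```python
import numpy as np
from sklearn.model_selection import check_cv, KFold, StratifiedKFold

# Binary labels, as a classifier would see them (toy data)
y_class = np.array([0, 1] * 10)

# With an integer cv and a classification target, check_cv
# returns an unshuffled StratifiedKFold
cv_clf = check_cv(cv=5, y=y_class, classifier=True)
print(type(cv_clf).__name__)  # StratifiedKFold

# With a continuous target (regression), it returns a plain KFold
y_reg = np.linspace(0.0, 1.0, 20)
cv_reg = check_cv(cv=5, y=y_reg, classifier=False)
print(type(cv_reg).__name__)  # KFold

# A custom shuffled splitter can be passed as TPOT's cv argument instead
custom_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
print(custom_cv.get_n_splits())  # 10
```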
The data that gets passed into the fit command is then split according to the cv parameter, and the cross-validation score for each candidate pipeline is computed by cross_val_score.
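That scoring step looks roughly like the following sketch, with a plain sklearn Ridge pipeline standing in for a TPOT-generated candidate pipeline (the data here is synthetic):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression data standing in for whatever was passed to fit()
X, y = make_regression(n_samples=100, n_features=5, random_state=0)

# An unshuffled 5-fold splitter, matching TPOT's default for regression
cv = KFold(n_splits=5)

# cross_val_score splits X/y per the cv parameter and scores each fold
scores = cross_val_score(Ridge(), X, y, cv=cv)
print(len(scores))  # 5 — one score per fold
```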
The example you posted uses the same strategy described in your link. As shown in the figures, the full data is split into Train and Test sets. Then, the Train set is split into n folds for computing the cross-validation score.
Whether or not you pass in your entire dataset to TPOT or create a separate test set is up to you and depends on what you are trying to do. The held-out test set may be useful for evaluating the final pipeline and comparing performance with different pipelines.
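Putting the two pieces together, the overall workflow can be sketched as below. A Ridge regressor stands in for the pipeline search TPOT would perform; the dataset and split sizes are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset
X, y = make_regression(n_samples=200, n_features=5, random_state=42)

# Step 1: hold out a test set before any optimization
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.75, test_size=0.25, random_state=42
)

# Step 2: fit on the training set only. With TPOT, this is where the
# pipeline search happens, and each candidate is scored by CV on
# X_train/y_train alone. Ridge is a stand-in for the final pipeline.
model = Ridge().fit(X_train, y_train)

# Step 3: evaluate the final pipeline on the untouched test set
holdout_score = model.score(X_test, y_test)
print(X_train.shape[0], X_test.shape[0])  # 150 50
```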
It is important to note that TPOT can become very good at overfitting the CV score itself (particularly for small datasets), meaning that the final pipeline could have a high CV score yet generalize poorly to held-out data (or even to different CV shuffles of the same data). The held-out data is one way of estimating out-of-sample performance.
Thank you so much.
Hello. Recently, I have been using TPOT to optimize machine learning models. I am confused by the data splitting when cross-validation is introduced. The official example is:
It can be seen that the original dataset is first split using train_test_split. If I want to introduce cross-validation in TPOTRegressor, the data used for cross-validation is only the training set X_train and y_train. However, the official user guide of scikit-learn shows that the whole dataset is split into training and testing sets when applying cross-validation. https://scikit-learn.org/stable/modules/cross_validation.html
Should I remove the code
X_train, X_test, y_train, y_test = train_test_split(housing.data, housing.target, train_size=0.75, test_size=0.25, random_state=42)
and pass the entire dataset directly into TPOTRegressor when using cross-validation?