izxi / Learning


Overfitting problem? #14

Closed izxi closed 6 years ago

izxi commented 6 years ago

If you encounter an overfitting problem in your model, which of the following methods can help you improve the model?

Select one or more:

izxi commented 6 years ago

overfit

izxi commented 6 years ago

overfitting-in-machine-learning

**Cross-validation.** Cross-validation is a powerful preventative measure against overfitting. In standard k-fold cross-validation, we partition the data into k subsets, called folds. Then, we iteratively train the algorithm on k-1 folds while using the remaining fold as the test set (called the "holdout fold").
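The fold partitioning described above can be sketched in plain Python (a minimal illustration, not from the thread; real projects would use a library routine such as scikit-learn's `KFold`):

```python
def k_fold_splits(n_samples, k):
    """Partition indices 0..n_samples-1 into k folds and yield
    (train_indices, holdout_indices) pairs, one per fold."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        # the last fold absorbs any remainder samples
        end = start + fold_size if i < k - 1 else n_samples
        holdout = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, holdout

# Example: 10 samples, 5 folds -> 5 (train, holdout) pairs,
# each holdout fold containing 2 samples
splits = list(k_fold_splits(10, 5))
```

Each sample lands in the holdout fold exactly once, so every data point contributes to both training and evaluation across the k runs.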

**Train with more data.** It won't work every time, but training with more data can help algorithms detect the signal better.

**Remove features.** Some algorithms have built-in feature selection. For those that don't, you can manually improve their generalizability by removing irrelevant input features.
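One simple manual filter in the spirit of the advice above is to flag near-constant features, which carry almost no signal (a hypothetical pure-Python sketch; the threshold value is an assumption):

```python
def low_variance_features(columns, threshold=1e-3):
    """Return indices of features whose variance falls below `threshold`.
    Near-constant features carry little signal and are candidates
    for removal before training."""
    drop = []
    for j, col in enumerate(columns):
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        if var < threshold:
            drop.append(j)
    return drop

# Feature 1 is constant across samples, so it is flagged for removal
cols = [[1.0, 2.0, 3.0], [5.0, 5.0, 5.0]]
drop = low_variance_features(cols)
```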

**Early stopping.** When you're training a learning algorithm iteratively, you can measure how well each iteration of the model performs. Up until a certain number of iterations, new iterations improve the model; after that point, the model begins to overfit the training data, so you stop training when performance on a holdout set stops improving.
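That stopping rule can be sketched as a patience counter over per-iteration validation losses (a minimal illustration, not from the thread; the `patience` parameter is an assumption):

```python
def early_stopping_point(val_losses, patience=2):
    """Given per-iteration validation losses, return the iteration index
    at which training would stop: when the loss has not improved for
    `patience` consecutive iterations."""
    best = float("inf")
    waited = 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best = loss      # new best -> reset the patience counter
            waited = 0
        else:
            waited += 1
            if waited >= patience:
                return i     # stop: no improvement for `patience` steps
    return len(val_losses) - 1

# Validation loss falls, then rises: training stops shortly after the minimum
stop = early_stopping_point([0.9, 0.7, 0.6, 0.65, 0.7, 0.8])
```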

**Regularization.** Regularization refers to a broad range of techniques for artificially forcing your model to be simpler.
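One common concrete form is an L2 (ridge) penalty added to the training loss, which punishes large weights and so nudges the model toward simpler solutions (a minimal sketch, not from the thread; the loss shape and `lam` value are assumptions):

```python
def l2_penalized_loss(errors, weights, lam):
    """Mean squared error plus an L2 (ridge) penalty: the penalty grows
    with the squared magnitude of the weights, so minimizing this loss
    trades a little fit for smaller, simpler weights."""
    mse = sum(e * e for e in errors) / len(errors)
    penalty = lam * sum(w * w for w in weights)
    return mse + penalty

# Identical fit errors, but the larger weights incur a bigger total loss
small = l2_penalized_loss([0.1, -0.2], [0.5, 0.5], lam=0.1)
large = l2_penalized_loss([0.1, -0.2], [3.0, 3.0], lam=0.1)
```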

**Ensembling.** Ensembles are machine learning methods for combining predictions from multiple separate models. There are a few different methods for ensembling, but the two most common are bagging and boosting. Bagging attempts to reduce the chance of overfitting complex models.
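The bagging idea described above can be sketched as bootstrap sampling plus a majority vote (a toy pure-Python illustration, not from the thread; the threshold "models" are stand-ins for learners a real ensemble would fit on different bootstrap samples):

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) points with replacement: one bootstrap sample,
    on which one member of the ensemble would be trained."""
    return [rng.choice(data) for _ in data]

def bagging_predict(models, x):
    """Majority vote over the predictions of the separate models."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

# Three toy classifiers; for x = 0.8 two of the three vote 1,
# so the ensemble predicts 1
models = [lambda x: int(x > 0.5),
          lambda x: int(x > 0.7),
          lambda x: int(x > 0.9)]
pred = bagging_predict(models, 0.8)
```

Because each model sees a different resampled view of the data, their individual overfitting errors tend to average out in the vote.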

izxi commented 6 years ago

DSL

johnnieng commented 5 years ago

Not A, B, C, D