We will need to do some hyperparameter tuning to get the best results. Useful references:
https://www.kaggle.com/prashant111/a-guide-on-xgboost-hyperparameters-tuning
https://blog.cambridgespark.com/hyperparameter-tuning-in-xgboost-4ff9100a3b2f
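A minimal sketch of what that tuning could look like, assuming we use scikit-learn's `GridSearchCV` over `XGBClassifier` (the parameter grid and the random data are placeholders, not our actual search space or dataset):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Placeholder data standing in for our real features/labels.
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, 200)

# Illustrative grid; the real search space would come from the guides above.
param_grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.1, 0.3],
    "n_estimators": [100, 300],
}

search = GridSearchCV(XGBClassifier(), param_grid, scoring="accuracy", cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```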
With the refactoring required to accommodate gradient boosted trees, we should rerun all the classification notebooks and experiment definitions to check that they still work. The refactoring also allows us to use the scikit-learn gradient boosted trees implementation.
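For illustration, here is a sketch of how scikit-learn's own `GradientBoostingClassifier` would slot into the same fit/predict workflow (the synthetic dataset is a stand-in for our classification data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic placeholder data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same estimator interface as XGBClassifier, so it should be a drop-in swap.
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```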
XGBoost represents the current state of the art in decision tree ensembles. It has a scikit-learn-like interface, so it should be easy to integrate into the current workflow. Tutorial: https://machinelearningmastery.com/develop-first-xgboost-model-python-scikit-learn/
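To show what that integration might look like, here is a minimal sketch in the style of the linked tutorial, assuming the `xgboost` package is installed (the dataset is synthetic, not ours):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic placeholder data for a binary classification task.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

# The familiar scikit-learn fit/predict interface.
model = XGBClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
```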