I'm building an XGBRegressor() model to do time series forecasting with 96 rows of data, but after tuning the model with grid search, the testing error rarely exceeds the training error. The evaluation metric I use is RMSE. Can anyone tell me what I did wrong with my model and what I should do?
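A minimal sketch of the setup being described (an XGBRegressor tuned with GridSearchCV on a short series and scored by RMSE) might look like the following, assuming scikit-learn's TimeSeriesSplit for the validation folds; the synthetic data, lag features, and parameter grid are placeholders for illustration, not details from the original post:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 96                                    # same row count as in the question
t = np.arange(n)
series = np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(n)

# Simple lag features for a univariate series; drop the first rows whose
# lags wrap around after np.roll.
X = pd.DataFrame({"lag1": np.roll(series, 1),
                  "lag2": np.roll(series, 2)}).iloc[2:].reset_index(drop=True)
y = pd.Series(series[2:])

param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}

# TimeSeriesSplit keeps each validation fold after its training fold, which
# matters for forecasting; shuffled CV folds would leak future observations.
search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_grid,
    cv=TimeSeriesSplit(n_splits=4),
    scoring="neg_root_mean_squared_error",  # RMSE, negated so larger is better
)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV RMSE:", -search.best_score_)
```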
Closing, feel free to re-open if there's something more concrete we can work with. If you are suspicious about the model, you can dump it out with Booster.get_dump.
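For reference, a minimal sketch of what that inspection can look like through the scikit-learn wrapper; the tiny synthetic dataset and hyperparameters are placeholders, not taken from the thread:

```python
import numpy as np
from xgboost import XGBRegressor

X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.sin(X).ravel()

model = XGBRegressor(n_estimators=3, max_depth=2)
model.fit(X, y)

# get_dump returns one text representation per tree; with_stats adds
# per-split gain and cover, which helps when inspecting a suspicious model.
for i, tree in enumerate(model.get_booster().get_dump(with_stats=True)):
    print(f"--- tree {i} ---")
    print(tree)
```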