stanfordmlgroup / ngboost

Natural Gradient Boosting for Probabilistic Prediction
Apache License 2.0

Return train and val loss #42

Closed TSFelg closed 4 years ago

TSFelg commented 4 years ago

Thank you for the excellent work on NGBoost, I'm really excited to be testing it out!

In commit c4b46b9 the fit method was altered to return self instead of the train and val losses. Is there any way to access the losses with the current behavior?

I believe the losses should be accessible: we may not be interested in early stopping, but rather in training for more iterations and simply choosing the iteration with the best val loss.

Also, access to the losses is essential for comparing different models.
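For illustration, a minimal sketch of the workflow being described, assuming the pre-c4b46b9 behavior where fit returned the per-iteration losses (the signature and variable names here are assumptions, not the actual old API):

import numpy as np

# Hypothetical pre-c4b46b9 behavior: fit returns per-iteration losses.
train_losses, val_losses = ngb.fit(X_train, Y_train, X_val, Y_val)

# Train for many iterations, then pick the one with the best val loss.
best_iter = int(np.argmin(val_losses))
print(f"best iteration: {best_iter}, val loss: {val_losses[best_iter]:.4f}")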

avati commented 4 years ago

Hi @TSFelg , we are trying to keep the API as similar to sklearn as possible. sklearn's behavior is to return self at the end of .fit(), and we are doing the same. I'm trying to understand how sklearn exposes lists of train and val losses so that ngboost can work similarly. Thoughts?
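For concreteness, sklearn's convention is that .fit() returns the estimator itself, and any per-iteration training losses are stored on the fitted object rather than returned. GradientBoostingRegressor, for instance, exposes them as the train_score_ attribute:

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, random_state=0)

# fit() returns self, so it can be chained.
est = GradientBoostingRegressor(n_estimators=100).fit(X, y)

# Per-iteration training loss is a fitted attribute, not a return value.
print(est.train_score_[:5])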

paantya commented 4 years ago

Maybe expose the train and val losses the way base XGBoost does?

TSFelg commented 4 years ago

Good point @avati.

The sklearn wrapper of xgboost has a method called evals_result which returns the train and validation losses. This can be seen here: https://github.com/dmlc/xgboost/blob/a4f5c862760029c24a5ba29b2a2ef4787058856c/python-package/xgboost/sklearn.py
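For example (assuming X_train, y_train, X_val, y_val are already defined; keyword details can vary across xgboost versions):

from xgboost import XGBRegressor

model = XGBRegressor(n_estimators=100)
model.fit(
    X_train, y_train,
    eval_set=[(X_train, y_train), (X_val, y_val)],  # sets evaluated each round
    verbose=False,
)

# Per-round metrics keyed by eval set, e.g.
# {'validation_0': {'rmse': [...]}, 'validation_1': {'rmse': [...]}}
results = model.evals_result()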

alejandroschuler commented 4 years ago

I think this commit should address the issue. Please have a look and let me know. It should work like:

...
ngb.fit(X_train, Y_train)
ngb.evals_result  # per-iteration losses recorded during fit
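A usage note: judging from the thread, evals_result is populated during fit, so to record validation losses as well one would presumably also pass a held-out set to fit (the X_val/Y_val keyword names here are an assumption):

ngb.fit(X_train, Y_train, X_val=X_val, Y_val=Y_val)
ngb.evals_result  # should then include val losses alongside the train losses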