fabsig / KTBoost

A Python package which implements several boosting algorithms with different combinations of base learners, optimization algorithms, and loss functions.

Adding tweedie loss #10

Closed · davidlkl closed this 3 years ago

davidlkl commented 3 years ago
  1. Added a Tweedie loss, analogous to the existing Poisson loss (a hedged sketch appears after this list).
  2. Fixed some bugs that occur when early stopping is enabled.
  3. Changed the early-stopping rule (see the second sketch below).
     From: store the scores of the last n_iter_no_change iterations; if the current iteration does not produce a better score than any of them, stop and return the current iteration.
     To: store the best score; if n_iter_no_change consecutive iterations fail to produce a better score than the best one, stop and return the best iteration.
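
For illustration, here is a minimal sketch of a Tweedie deviance loss with a log link and a power parameter assumed to lie strictly between 1 and 2, written against a scikit-learn style loss interface of the kind KTBoost builds on. The class and method names are assumptions made for this sketch and are not taken from the actual pull request.

```python
import numpy as np

class TweedieLossSketch:
    """Hypothetical Tweedie deviance loss with log link; power p assumed in (1, 2)."""

    def __init__(self, p=1.5):
        if not 1 < p < 2:
            raise ValueError("This sketch assumes a Tweedie power strictly between 1 and 2.")
        self.p = p

    def __call__(self, y, pred):
        # Mean Tweedie deviance (up to a term that depends only on y);
        # pred is the raw prediction on the log scale.
        p = self.p
        return np.mean(2 * (-y * np.exp((1 - p) * pred) / (1 - p)
                            + np.exp((2 - p) * pred) / (2 - p)))

    def negative_gradient(self, y, pred):
        # Proportional to the negative gradient of the deviance w.r.t. the raw
        # prediction; reduces to the Poisson case (y - exp(pred)) as p -> 1.
        p = self.p
        return y * np.exp((1 - p) * pred) - np.exp((2 - p) * pred)
```

The revised early-stopping rule described in item 3 can also be summarized with a small sketch. The function below is illustrative only and does not reproduce the pull request's code; it simply tracks the best validation score and a patience counter, and returns the best iteration.

```python
def early_stopping_best_iter(val_scores, n_iter_no_change):
    """Return the index of the best (lowest) validation score, stopping once
    n_iter_no_change consecutive iterations fail to improve on the best score."""
    best_val_score = float("inf")
    best_iter = 0
    n_iter_no_improve = 0
    for i, score in enumerate(val_scores):
        if score < best_val_score:
            best_val_score, best_iter = score, i
            n_iter_no_improve = 0
        else:
            n_iter_no_improve += 1
            if n_iter_no_improve >= n_iter_no_change:
                break
    return best_iter

# Example: with scores [0.9, 0.7, 0.8, 0.75, 0.74] and n_iter_no_change=3,
# the search stops after the fifth score and the best iteration (index 1) is returned.
```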

The file diff does not display correctly because of whitespace changes, so you will need to check the "Hide whitespace changes" option in the comparison view before reviewing.

fabsig commented 3 years ago

Thank you, davidlkl.

It is hard to see what is new in your commits, as the diffs (e.g. this) show all code as new (highlighted green). Could you make regular commits so that only your newly added and modified code shows up as new? Otherwise it is very difficult for me to review the code, particularly since we don't have unit tests for this library.

davidlkl commented 3 years ago

Hi fabsig,

[screenshot: the "Hide whitespace changes" option in the GitHub diff view]

Could you try enabling this option?

fabsig commented 3 years ago

Now it works! Thanks.

davidlkl commented 3 years ago

Thanks!

davidlkl commented 3 years ago

Sorry, I forgot to remove line 1955 before opening the pull request: print(i, n_iter_no_improve, best_iter, self.val_score_[i], best_val_score)

It was there for my debugging only.

fabsig commented 3 years ago

No problem. I will remove it.

fabsig commented 3 years ago

I have now removed it in the newly released version. I also made a few other minor changes (checks that y >= 0, added comments).
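
For reference, a check for non-negative targets of the kind mentioned above might look roughly like the following; the function name and error message are illustrative and not the library's actual code.

```python
import numpy as np

def check_non_negative(y, loss_name):
    # Illustrative input check for losses such as Poisson or Tweedie,
    # which are only defined for non-negative targets.
    y = np.asarray(y)
    if np.any(y < 0):
        raise ValueError("Loss '%s' requires non-negative values of y." % loss_name)
```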