asmahani / ordinal-boost

Gradient Boosting Ordinal Regression

Potential new features #6

Open asmahani opened 1 month ago

asmahani commented 1 month ago

Re line search: we implemented it for theta, but decided to skip it for gamma for now, which means we always use gamma = 1.0. However, we will keep the learning rate / shrinkage parameter for the regression function, since otherwise the model would rapidly overfit (i.e., when the learning rate is fixed at 1.0, meaning no shrinkage).
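
For illustration, here is a minimal sketch (plain squared-error gradient boosting, not this package's actual API; `boost`, `n_stages`, and the toy data are all made up for the example) showing where the two knobs sit in a boosting update: `gamma` as the step multiplier a line search would tune, fixed here at 1.0, and `learning_rate` as the shrinkage factor applied to every stage.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_stages=50, learning_rate=0.1, gamma=1.0):
    """Illustrative squared-error gradient boosting.

    gamma: the step multiplier a line search would tune; fixed at 1.0
        here, mirroring the decision to skip the gamma line search.
    learning_rate: the shrinkage factor that is kept, since setting it
        to 1.0 (no shrinkage) makes the ensemble overfit rapidly.
    """
    F = np.zeros(len(y))      # current ensemble prediction
    for _ in range(n_stages):
        residual = y - F      # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        F = F + learning_rate * gamma * tree.predict(X)
    return F

# Toy usage: shrinkage slows how quickly the training data is fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)
print(np.mean((y - boost(X, y)) ** 2))
```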