massquantity / LibRecommender

Versatile End-to-End Recommender System
https://librecommender.readthedocs.io/
MIT License

Are there any plans to make a training history and Early Stopping possible? #109

Open lukas-wegmeth opened 2 years ago

lukas-wegmeth commented 2 years ago

I have read about these topics in older issues but I wanted to revive them here.

I am currently trying to (automatically) tune some of the TensorFlow networks, and it is not easy to do so without a training history or early stopping.

I realize that TensorFlow 1.x makes this harder for the developer, but there should be a way. These techniques are standard in deep learning, and they would be extremely useful for this library to have.

massquantity commented 2 years ago

Sorry, no plan for this. For now, the top priority is adding new algorithms.

lukas-wegmeth commented 2 years ago

I understand. I hope it will be possible one day. Thanks!

adebiasio-estilos commented 1 year ago

I'm interested in early stopping and automatically saving the best model for hyperparameter tuning too!

Have you found a way (or at least a starting point) to implement these functionalities in these months?

I will try the solution posted in https://github.com/massquantity/LibRecommender/issues/69. I should implement early stopping for BPR, FM, DeepFM and NGCF.

massquantity commented 1 year ago

@adebiasio-estilos Sorry, still no plan... If you can implement these features, PRs are welcome.

lukas-wegmeth commented 1 year ago

> I'm interested in early stopping and automatically saving the best model for hyperparameter tuning too!
>
> Have you found a way (or at least a starting point) to implement these functionalities in these months?
>
> I will try the solution posted here #69. I should implement early stopping for BPR, FM, DeepFM and NGCF.

I eventually figured that adding proper early-stopping functionality would be too much work. Instead, I trained for more than enough epochs and then performed retrospective "early stopping" by saving the model state and loss after each epoch. It's not the best approach, since it wastes computation time, but it saved development time: I only needed it for one experiment, and it was good enough for me.
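The retrospective approach above could be sketched roughly like this. This is a minimal standalone illustration, not LibRecommender code: `train_one_epoch` and `save_checkpoint` are hypothetical stand-ins for whatever fit and save calls your setup actually uses.

```python
def retrospective_early_stopping(train_one_epoch, save_checkpoint, n_epochs):
    """Train for all epochs, checkpointing each one, then pick the best.

    `train_one_epoch(epoch)` should run one epoch and return a validation
    loss; `save_checkpoint(epoch)` should persist the model state to disk.
    Returns the index of the epoch with the lowest loss plus all losses.
    """
    losses = []
    for epoch in range(n_epochs):
        loss = train_one_epoch(epoch)  # fit one epoch, get validation loss
        save_checkpoint(epoch)         # persist model state for this epoch
        losses.append(loss)
    # retrospective "early stopping": select the best checkpoint afterwards
    best_epoch = min(range(n_epochs), key=losses.__getitem__)
    return best_epoch, losses


# toy usage with fake losses that bottom out at epoch 2
fake_losses = [0.9, 0.5, 0.3, 0.4, 0.6]
best, losses = retrospective_early_stopping(
    train_one_epoch=lambda e: fake_losses[e],
    save_checkpoint=lambda e: None,
    n_epochs=5,
)
```

After training, you would restore the checkpoint for `best` and discard the rest; the waste is the epochs trained past that point.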

And I also started from the issue you mentioned.

anto-d commented 1 year ago

+1 for the training history :) Maybe @lukas-wegmeth could you share how you saved the model states and losses for each epoch (as you mention in your comment; for TensorFlow models, I guess)? Thanks in advance!

lukas-wegmeth commented 1 year ago

> +1 for the training history :) Maybe @lukas-wegmeth could you share how you saved the model states and losses for each epoch (as you mention in your comment; for TensorFlow models, I guess)? Thanks in advance!

It has been a while, but IIRC I used the `save` functions from the base algorithm class wherever I needed them (e.g. inside `fit`, after each epoch). You have to edit the source code to make this work. However, I don't know whether it still works like this, since the library has changed since then.
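If someone wants real early stopping instead of the retrospective variant, the check that would go after each epoch in the edited fit loop can be written as a small standalone helper. This is a generic patience-based sketch, not part of LibRecommender's API; the class and method names here are made up for illustration.

```python
class EarlyStopper:
    """Stop training when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def should_stop(self, val_loss):
        """Call once per epoch with the validation loss; True means stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss      # new best: reset the patience counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1      # no improvement this epoch
        return self.bad_epochs >= self.patience


# usage: losses improve for two epochs, then stagnate for three
stopper = EarlyStopper(patience=3)
results = [stopper.should_stop(l) for l in [0.5, 0.4, 0.45, 0.46, 0.47]]
```

Inside an edited fit loop you would break out of the epoch loop (and keep the last saved checkpoint) as soon as the helper returns `True`.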