PreferredAI / cornac

A Comparative Framework for Multimodal Recommender Systems
https://cornac.preferred.ai
Apache License 2.0

[FEATURE] Allow running evaluation on validation set during training or right after training is done #613

Open lthoang opened 3 months ago

lthoang commented 3 months ago

Description

This is a breaking change. In many training/validation/test pipelines, evaluation on the validation data happens during training (for monitoring/model selection). The current version of cornac evaluates the validation set the same way as the test data.
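For context, this is roughly how a validation split flows through the current pipeline (a minimal sketch; the dataset, model, metrics, and hyper-parameters are just for illustration):

```python
import cornac
from cornac.eval_methods import RatioSplit
from cornac.models import MF
from cornac.metrics import RMSE, Recall

# Split the data into train/validation/test.
data = cornac.datasets.movielens.load_feedback(variant="100K")
ratio_split = RatioSplit(data=data, test_size=0.2, val_size=0.1, seed=123)

# val_set is passed to fit() for monitoring, but it is also evaluated
# afterwards (the VALIDATION table) through the same code path as the test set.
cornac.Experiment(
    eval_method=ratio_split,
    models=[MF(k=10, max_iter=25, seed=123)],
    metrics=[RMSE(), Recall(k=10)],
).run()
```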

Expected behavior with the suggested feature

Other Comments

tqtg commented 3 months ago

Couple of things:

lthoang commented 3 months ago

> For the MMNR model, what does it mean to use different external data for validation and test? Can we have a specific example?

@tqtg, if we look closely at this function, MMNR uses different history matrices for the train, validation, and test data.

My idea is to break down the evaluation pipeline into train, validation, and test stages to reduce the redundancy of the current implementation (currently, cornac allows manipulating val_set along with train_set inside the fit function, which may cause val_set to be re-evaluated once or twice inside the score or rank functions).

tqtg commented 3 months ago

We provide models with both val_set and train_set so that they can perform early stopping, hyper-parameter optimization, or any kind of trade-off for model selection inside the training loop. I don't get your very last sentence about the score and rank functions.
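For reference, the pattern being described looks roughly like this: the model receives both sets in fit() and monitors the validation data every epoch. This is only a sketch; `_train_one_epoch`, `_val_loss`, and the early-stopping logic are hypothetical placeholders, not part of cornac's API.

```python
from cornac.models import Recommender

class MyModel(Recommender):
    """Sketch of a model that monitors val_set inside its training loop."""

    def __init__(self, max_iter=25, patience=5, name="MyModel"):
        super().__init__(name=name)
        self.max_iter = max_iter
        self.patience = patience

    def _train_one_epoch(self, train_set):
        pass  # hypothetical: one pass of parameter updates

    def _val_loss(self, val_set):
        return 0.0  # hypothetical: compute a validation loss/metric

    def fit(self, train_set, val_set=None):
        super().fit(train_set, val_set)
        best, bad_epochs = float("inf"), 0
        for _ in range(self.max_iter):
            self._train_one_epoch(train_set)
            if val_set is None:
                continue
            loss = self._val_loss(val_set)  # first pass over val_set, for model selection
            if loss < best:
                best, bad_epochs = loss, 0
            else:
                bad_epochs += 1
                if bad_epochs >= self.patience:
                    break  # early stopping on the validation signal
        return self
```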

lthoang commented 3 months ago

@tqtg, let's say we evaluate val_set inside fit for early stopping/monitoring. After training is done and we perform the evaluation on VALIDATION, the model has to run inference on val_set again.
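One way the suggested feature could avoid that second pass is to let the model cache the validation results it already computed during (or right after) training, and let the evaluation pipeline reuse them. A hypothetical sketch, not existing cornac API; `val_results` and the placeholder metric are assumptions:

```python
from cornac.models import Recommender

class CachingModel(Recommender):
    """Hypothetical sketch: cache validation results computed during fit()."""

    def __init__(self, name="CachingModel"):
        super().__init__(name=name)
        self.val_results = None  # filled in during training, reused afterwards

    def fit(self, train_set, val_set=None):
        super().fit(train_set, val_set)
        if val_set is not None:
            # Metrics computed once here; a pipeline aware of this cache could
            # skip re-running score()/rank() over val_set after training.
            self.val_results = {"val_loss": 0.0}  # placeholder value
        return self
```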