Open lthoang opened 3 months ago
Couple of things:
For the MMNR model, what does it mean to use different external data for validation and test? Can we have a specific example?
@tqtg, if we look closely at this function, MMNR uses different history matrices for train data, validation data, and test data.
My idea is to break down the evaluation pipeline into train, validation, and test to reduce the redundancy of the current implementation (currently, Cornac allows manipulating val_set along with train_set inside the fit function, which may cause val_set to be re-evaluated once or twice inside the score or rank functions).
We provide models with both val_set and train_set so that they can perform early stopping, hyper-parameter optimization, or any kind of trade-off for model selection inside the training loop. I don't get your very last sentence about the score and rank functions.
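For concreteness, here is a minimal sketch of what early stopping driven by val_set inside fit might look like. All class and helper names here are hypothetical, not taken from the Cornac codebase; the validation loss is a dummy function that improves and then plateaus.

```python
# Hypothetical recommender whose fit() uses val_set for early stopping,
# in the spirit of a fit(train_set, val_set) interface.
class ToyRecommender:
    def __init__(self, max_epochs=100, patience=3):
        self.max_epochs = max_epochs
        self.patience = patience     # epochs to wait without improvement
        self.best_epoch = None

    def _train_one_epoch(self, train_set):
        pass  # placeholder for one pass over the training data

    def _val_loss(self, epoch, val_set):
        # Dummy validation loss: decreases, then flattens at 0.0.
        return max(0.0, 1.0 - 0.1 * epoch)

    def fit(self, train_set, val_set=None):
        best, waited = float("inf"), 0
        for epoch in range(self.max_epochs):
            self._train_one_epoch(train_set)
            if val_set is None:
                continue
            loss = self._val_loss(epoch, val_set)
            if loss < best:
                best, waited, self.best_epoch = loss, 0, epoch
            else:
                waited += 1
                if waited >= self.patience:  # stop: no recent improvement
                    break
        return self
```

With the dummy loss above, training stops a few epochs after the loss flattens, and `best_epoch` records the last improving epoch.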
@tqtg let's say we evaluate val_set inside fit for early stopping/monitoring. After the training is done and we perform evaluation on VALIDATION, the model has to run inference on the val_set again.
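To make the redundancy concrete, here is a small sketch (hypothetical names, not Cornac's actual API) where val_set is scored inside fit and the scores are cached, so a later score call on the same validation set does not re-run inference:

```python
# Hypothetical model illustrating the double-inference problem:
# val_set is scored inside fit() for monitoring, and the evaluation
# pipeline would normally call score() on the same val_set again.
class CachingRecommender:
    def __init__(self):
        self.inference_calls = 0
        self._val_cache = {}

    def _infer(self, items):
        self.inference_calls += 1
        return [0.5 for _ in items]  # dummy scores

    def fit(self, train_set, val_set=None):
        if val_set is not None:
            # monitoring during training; cache the scores by object id
            self._val_cache[id(val_set)] = self._infer(val_set)
        return self

    def score(self, data_set):
        cached = self._val_cache.get(id(data_set))
        if cached is not None:  # reuse scores computed during fit
            return cached
        return self._infer(data_set)

val = [1, 2, 3]
model = CachingRecommender().fit(train_set=[], val_set=val)
model.score(val)                   # served from the cache
assert model.inference_calls == 1  # without caching this would be 2
```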
Description
This is a breaking change. In many train/validation/test pipelines, the evaluation on validation data happens during training (for monitoring/model selection). The current version of Cornac evaluates the validation set in the same way as the test data.
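A minimal sketch of the suggested split, with hypothetical names: validation metrics are produced inside fit, while the test set is scored once after training, so the validation set is never re-scored by the evaluation step.

```python
# Hypothetical model that records per-epoch validation metrics inside fit().
class MonitoredRecommender:
    def __init__(self, n_epochs=3):
        self.n_epochs = n_epochs
        self.val_results = []  # monitoring log filled during training

    def fit(self, train_set, val_set=None):
        for epoch in range(self.n_epochs):
            # ... one training pass over train_set would go here ...
            if val_set is not None:
                # dummy per-epoch validation metric
                self.val_results.append((epoch, 1.0 / (epoch + 1)))
        return self

    def score(self, data_set):
        return [0.5 for _ in data_set]  # dummy test-time scores

def run_pipeline(model, train_set, val_set, test_set):
    model.fit(train_set, val_set)       # validation evaluated during training
    return model.val_results, model.score(test_set)  # test scored once
```

Usage: `run_pipeline(MonitoredRecommender(), train, val, test)` returns the monitoring log collected during training alongside a single pass of test scores.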
Expected behavior with the suggested feature
Evaluate the validation set every n epochs during training and report the validation results.
Other Comments