Bug description
Metrics that require predicted item ids (e.g. ItemCoverageAt, PopularityBiasAt, NoveltyAt) are not supported in model fit() and evaluate(). Currently, they can only be used after model.fit() is called, as in the example below.
These metrics cannot currently be included in the PredictionTask because they need to receive the predicted item ids in update_state(), instead of the usual prediction scores (y_pred) and labels (y_true).
I have tested the metrics locally following the example from the unit test, and they work properly when called after model.fit() and after exporting to a top-k recommender. I am doing something like this:
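The snippet below is a minimal sketch of that pattern rather than the exact code: model.fit() runs first, the trained model is exported to a top-k recommender, and the metric is then updated directly with the predicted item ids. The import path, the export call (to_top_k_recommender()), the candidate_features / eval_batch inputs, and the metric constructor arguments are assumed placeholder names, not confirmed library API.

```python
# Minimal sketch only: names marked as placeholders are assumptions, not confirmed API.
from merlin.models.tf.metrics.topk import ItemCoverageAt  # placeholder import path

# `model`, `train_ds`, `candidate_features` and `eval_batch` come from the
# usual model/dataset setup, omitted here for brevity.
model.fit(train_ds, epochs=3)

# Placeholder export step: wrap the trained model as a top-k recommender
# that returns the ids of the top-k recommended items for each query.
topk_model = model.to_top_k_recommender(candidate_features, k=10)
_, top_item_ids = topk_model(eval_batch)

# The metric consumes the predicted item ids directly, not (y_true, y_pred) scores.
item_coverage = ItemCoverageAt(k=10)  # constructor arguments are placeholders
item_coverage.update_state(top_item_ids)
print("item_coverage@10:", item_coverage.result().numpy())
```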
Steps/Code to reproduce bug
Try to pass one of the following metrics, ItemCoverageAt(), PopularityBiasAt(), NoveltyAt(), as metrics for any PredictionTask. This raises an exception, because typical metrics receive prediction scores (y_pred) and labels (y_true) in update_state(), whereas these metrics require the ids of the top recommended items, as sketched below.
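A sketch of the failing configuration, under the same placeholder assumptions as above (task, model and dataset construction are simplified stand-ins); the exception surfaces once Keras calls update_state() with scores and labels during fit() or evaluate():

```python
# Sketch of the failing configuration: constructors and arguments are placeholders.
task = PredictionTask(
    metrics=[
        ItemCoverageAt(k=10),    # expects predicted item ids in update_state()
        PopularityBiasAt(k=10),  # expects predicted item ids in update_state()
        NoveltyAt(k=10),         # expects predicted item ids in update_state()
    ],
)
model = Model(inputs, task)
model.compile(optimizer="adam")

# Raises: Keras feeds prediction scores (y_pred) and labels (y_true) into
# update_state(), while these metrics require the top recommended item ids.
model.fit(train_ds, epochs=1)
```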
Expected behavior
We should be able to set those metrics together with the other ranking metrics for the PredictionTask.
In the future, when #368 is merged, we should also be able to provide different sets of metrics for model.fit() and model.evaluate(), including these ones.