Closed doctorpangloss closed 5 years ago
Hello, BPRMF does perform a brief update; however, I would perform additional measurements to check whether everything is tuned correctly before deploying it for anything practical ...
Hint: Feel free to discuss further questions here: mymedialite@googlegroups.com
Thanks. Maybe consider throwing `NotImplementedException` for the ones that definitely do not do incremental updates, or documenting this otherwise?
Your docs mention incremental rating prediction as a more efficient way to add data to an existing model:
The question is: do any models actually do a brief update procedure, or do they all do a complete re-training in practice?
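To make the cost difference concrete, here is a minimal sketch (hypothetical, not MyMediaLite's actual API) where counters stand in for per-user/per-item optimization passes. A full retrain revisits every user and item row for a single new rating; a brief update only touches the affected row and column:

```python
class Model:
    """Toy model that counts optimization passes instead of doing math."""

    def __init__(self, n_users, n_items):
        self.n_users, self.n_items = n_users, n_items
        self.updates = 0  # number of per-row optimization passes performed

    def retrain_user(self, u):
        self.updates += 1

    def retrain_item(self, i):
        self.updates += 1

    def add_rating_full(self, u, i):
        # Full retrain: every user and every item row is revisited.
        for u2 in range(self.n_users):
            self.retrain_user(u2)
        for i2 in range(self.n_items):
            self.retrain_item(i2)

    def add_rating_incremental(self, u, i):
        # Brief update: only the affected user row and item row.
        self.retrain_user(u)
        self.retrain_item(i)

full = Model(1000, 500)
full.add_rating_full(3, 7)
print(full.updates)  # 1500 passes for one new rating

inc = Model(1000, 500)
inc.add_rating_incremental(3, 7)
print(inc.updates)  # 2 passes
```

The question below is essentially which recommenders behave like `add_rating_incremental` and which silently do the `add_rating_full` amount of work.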
Here's my take, looking at implementations only and assuming one additional rating is added to the given model:
- `WRMF`: Yes. `Optimize` is only called twice, once on the user and once on the item, and only a single row from the data matrix is fetched in the `Optimize` call. Also, `RetrainItem` and `RetrainUser` are definitely called by `AddFeedback`.
- `MostPopular`: Yes, but that's not saying much.
- `BPRMF`: Hard to say. There's something very fishy going on with the `num_item_iterations` calculation in `RetrainItem` that obscures, to me, whether it's actually doing an update, but it appears an effort was made to make it more efficient.
- `KNN` and its inheritors: Hard to say. `FoldIn` appears to be more efficient, but the `RetrainUser`/`RetrainItem` implementations seem to do a full retraining, because every item/user is revisited and the same calls are made as in regular training.
- `SLIM` and its inheritors: No, because `AddFeedback` isn't overridden and `RetrainItem` is never called.
- `MatrixFactorization` and its inheritors: No, because the `AddRatings` base implementation retrains every user and item, and no class overrides this method.
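For reference, the single-row update that makes the `WRMF` case cheap looks roughly like this. This is my reconstruction of the standard weighted-ALS normal equations (as in Hu/Koren/Volinsky's implicit-feedback ALS, which WRMF is based on), not MyMediaLite's code; the function name and parameters are illustrative:

```python
import numpy as np

def retrain_user(Y, user_items, alpha=40.0, reg=0.1):
    """Re-solve one user's factor vector with item factors Y held fixed.

    Y:          (n_items, k) item-factor matrix, treated as constant.
    user_items: indices of items with observed feedback for this user.

    Solving one row is a small k x k linear system, so adding a single
    rating only requires re-optimizing that user's row (and, symmetrically,
    the rated item's row) rather than retraining the whole model.
    """
    n_items, k = Y.shape
    A = Y.T @ Y + reg * np.eye(k)   # base normal-equation matrix
    b = np.zeros(k)
    for i in user_items:
        c = 1.0 + alpha             # confidence weight for observed items
        A += (c - 1.0) * np.outer(Y[i], Y[i])
        b += c * Y[i]
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
Y = rng.normal(size=(50, 8))              # 50 items, 8 latent factors
x_before = retrain_user(Y, {1, 4})
x_after = retrain_user(Y, {1, 4, 9})      # one new feedback entry added
print(x_after.shape)  # (8,)
```

The cost is O(|user_items| * k^2 + k^3) per affected row, independent of the total number of users, which is why a genuine `RetrainUser`/`RetrainItem` pair is so much cheaper than calling the full training loop.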