lyst / lightfm

A Python implementation of LightFM, a hybrid recommendation algorithm.
Apache License 2.0

Can I employ item "masking" in model evaluation to eliminate cold start scenarios? #605

Open RationallyPrime opened 3 years ago

RationallyPrime commented 3 years ago

What I want to do is generate a train/test split that moves random user/item interactions into the test set, with one constraint: if the split would leave a given user without any interactions in the training data, that user is either dropped from the model entirely or ignored when computing the evaluation metrics. Is this supported somehow?

If not, how hard would that be to do?
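For reference, here is a minimal sketch of what I have in mind: post-filter the output of LightFM's `random_train_test_split` so that test interactions touching cold-start users or items are dropped before evaluation. The `mask_cold_start` helper is hypothetical, not part of LightFM:

```python
import numpy as np
import scipy.sparse as sp

from lightfm.cross_validation import random_train_test_split


def mask_cold_start(train, test):
    """Hypothetical helper: drop test interactions whose user or item
    has no interactions left in the train matrix, so the evaluation
    metrics never see cold-start rows or columns."""
    train = train.tocsr()
    test = test.tocoo()

    # Users/items with at least one training interaction.
    warm_users = np.asarray(train.sum(axis=1)).ravel() > 0
    warm_items = np.asarray(train.sum(axis=0)).ravel() > 0

    # Keep only test entries where both the user and the item are warm.
    keep = warm_users[test.row] & warm_items[test.col]
    return sp.coo_matrix(
        (test.data[keep], (test.row[keep], test.col[keep])),
        shape=test.shape,
    )


train, test = random_train_test_split(
    interactions,  # the full interactions matrix, built elsewhere
    test_percentage=0.2,
    random_state=np.random.RandomState(42),
)
test = mask_cold_start(train, test)
```

As far as I can tell, LightFM's evaluation functions already skip users whose test row is empty (with the default `preserve_rows=False`), so zeroing out their test interactions should be equivalent to disregarding them in the metrics.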