Open jeremybastin1207 opened 2 years ago
The folding effect appears when only positive examples are used to calculate the loss, for example with WALS in matrix factorisation. The article suggests strategies like negative sampling to prevent this from occurring.
In the case of a retrieval model fit with TFRS, in-batch negative sampling is used, which should avoid folding. It is, in fact, not possible to include explicit negative feedback in the retrieval model, only in the ranking model.
If the offending item/category is fairly rare, then it's unlikely to be sampled as a negative very often, so some folding may occur. If that is the case, you could try implementing mixed negative sampling, which blends in-batch negatives with negatives sampled uniformly at random from your candidate corpus.
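To make that concrete, here is a rough NumPy sketch of a softmax retrieval loss under mixed negative sampling. All names here are my own for illustration, not the TFRS API: each query's positive is its paired in-batch candidate, and the negative pool is the other in-batch candidates plus extra candidates sampled uniformly from the corpus.

```python
import numpy as np

def mixed_sampling_loss(query_emb, cand_emb, extra_neg_emb):
    """Softmax loss where the positive for query i is in-batch candidate i,
    and the negatives are the other in-batch candidates plus extra
    uniformly sampled corpus candidates (mixed negative sampling sketch)."""
    b = len(query_emb)
    # (B, B + K) similarity logits: in-batch candidates, then uniform negatives.
    logits = query_emb @ np.concatenate([cand_emb, extra_neg_emb]).T
    # Numerically stable log-softmax over each row.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for query i sits on the diagonal of the in-batch block.
    return -log_probs[np.arange(b), np.arange(b)].mean()

rng = np.random.default_rng(0)
B, K, D = 4, 8, 16  # batch size, extra uniform negatives, embedding dim
loss = mixed_sampling_loss(rng.normal(size=(B, D)),
                           rng.normal(size=(B, D)),
                           rng.normal(size=(K, D)))
```

The uniform negatives give rare items (which seldom appear as in-batch negatives) a chance to be pushed away from unrelated queries.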
Thanks for the comprehensive answer, Patrick!
Hi,
Thanks for your great library and tutorials!
In a previous blog post from Google, a phenomenon known as folding was mentioned for recommender systems that only use positive feedback. Reference: https://developers.google.com/machine-learning/recommendation/dnn/training?hl=en.
After training a retrieval model with only positive feedback, I suspect some item-to-item recommendations suffer from folding. The first results look great at first glance, except for one item whose recommendations mix in a completely different category of items, which doesn't make sense.
My first question is: how can we really know we are facing this issue? Is there a technique or a metric?
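For what it's worth, the kind of rough check I had in mind looks like this (just a heuristic of my own, not an established metric, and it assumes each item carries a category label): measure, for each item, how many of its nearest neighbours in embedding space come from a different category.

```python
import numpy as np

def cross_category_rate(item_emb, categories, k=5):
    """For each item, take its k nearest neighbours by dot-product
    similarity and return the fraction that belong to a different
    category. A rate close to 1 for one item may hint at folding."""
    sims = np.asarray(item_emb, dtype=float) @ np.asarray(item_emb, dtype=float).T
    np.fill_diagonal(sims, -np.inf)  # exclude each item itself
    topk = np.argsort(-sims, axis=1)[:, :k]
    cats = np.asarray(categories)
    return (cats[topk] != cats[:, None]).mean(axis=1)

# Toy example: two clean clusters plus one 'A' item embedded among the 'B's.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1],
                [0.0, 1.0], [0.1, 0.9], [-0.1, 1.1],
                [0.0, 1.05]])
cats = ["A", "A", "A", "B", "B", "B", "A"]
rates = cross_category_rate(emb, cats, k=2)
```

The misplaced last item gets a rate of 1.0 while the well-clustered items stay near 0, which is the kind of signal I would expect folding to produce.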
In the blog post, they recommend using negative feedback to prevent this phenomenon. Since I don't have negative feedback in my dataset, is there a way to generate it? Is it good practice to include negative feedback in a retrieval model, or should I use a ranking model instead?
Thanks, Jérémy