maciejkula / spotlight

Deep recommender models using PyTorch.
MIT License
2.97k stars 421 forks

Precision recall score calculation wrong #147

Open CooperBond opened 5 years ago

CooperBond commented 5 years ago

Hello! I'm using the spotlight explicit factorization model for my task, and while evaluating the results I found that the precision and recall scores were calculated incorrectly. I propose reworking the method to take a rating threshold, since I didn't find an option to set one. The old method is fine for implicit feedback, where every rated item counts as relevant, but for explicit feedback only items rated at or above a threshold should be treated as targets. New method:

```python
def precision_recall_score(model, interactions, threshold, k=10):

    interactions = interactions.tocsr()

    if np.isscalar(k):
        k = np.array([k])

    precision = []
    recall = []

    for user_id, row in enumerate(interactions):

        # Skip users with no rated items.
        if not len(row.indices):
            continue

        predictions = -model.predict(user_id)
        predictions = np.argsort(predictions)

        # Only items rated at or above the threshold count as relevant.
        targets = np.argwhere(row.toarray() >= threshold)[:, 1]

        user_precision, user_recall = zip(*[
            _get_precision_recall(predictions, targets, x)
            for x in k
        ])

        precision.append(user_precision)
        recall.append(user_recall)

    precision = np.array(precision)
    recall = np.array(recall)

    return precision, recall
```

If I'm wrong, sorry. Hope for a reply, thank you!
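To illustrate the difference between the two target definitions, here is a minimal standalone sketch (not part of spotlight; the toy ratings are invented for illustration). The implicit-style version treats every rated item as relevant, while the thresholded version keeps only highly rated items:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy explicit-feedback row for one user: item ids 0..4 with ratings.
# A stored value of 0 means "not rated".
row = csr_matrix(np.array([[5, 2, 0, 4, 1]]))

# Implicit-style targets: every rated item counts as relevant.
implicit_targets = row.indices
print(implicit_targets)          # [0 1 3 4]

# Thresholded targets: only items rated >= threshold count as relevant.
threshold = 4
explicit_targets = np.argwhere(row.toarray() >= threshold)[:, 1]
print(explicit_targets)          # [0 3]
```

With a threshold of 4, items 1 and 4 (rated 2 and 1) drop out of the target set, which changes both precision and recall for this user.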

lsatterfield commented 5 years ago

@CooperBond Could you please describe in a bit more detail why you think the precision/recall calculation was incorrect in the original version?

lsatterfield commented 5 years ago

@CooperBond I have a suspicion that you may be correct: with the package method, my mean average precision and mean average recall are always equal, which doesn't seem right.
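For what it's worth, equal precision@k and recall@k can happen for arithmetic reasons. With the usual definitions (precision@k = hits / k, recall@k = hits / |targets|), the two coincide exactly when the user has k relevant items or zero hits. A hypothetical stand-in for the helper (spotlight's internal `_get_precision_recall` is not shown in this thread) makes that visible:

```python
import numpy as np

def precision_recall_at_k(predictions, targets, k):
    # predictions: item ids ranked best-first; targets: relevant item ids.
    top_k = predictions[:k]
    hits = len(set(top_k) & set(targets))
    return hits / k, hits / len(targets)

preds = np.array([3, 1, 4, 0, 2])

# Three relevant items, k = 3: precision and recall share a denominator.
p, r = precision_recall_at_k(preds, targets=[1, 4, 0], k=3)
print(p == r)   # True  (2/3 each)

# Two relevant items, k = 3: the two metrics now differ.
p, r = precision_recall_at_k(preds, targets=[1, 2], k=3)
print(p, r)     # 0.333... 0.5
```

So if every user in the evaluation set ends up with exactly k target items, the averages will always match; that could be one thing worth checking before concluding the calculation itself is wrong.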