fani-lab / Adila

Fairness-Aware Team Formation

2020, RecSys, The Connection Between Popularity Bias, Calibration, and Fairness in Recommendation #3

Open Rounique opened 2 years ago


Title: The Connection Between Popularity Bias, Calibration, and Fairness in Recommendation Year: 2020 Venue: RecSys

Fairness Definition: In this work, a recommender system is considered unfair if its recommendations do not represent the interests or tastes of one group of users as faithfully as those of other groups. (Equivalently, a recommender system is unfair if it has different levels of miscalibration for different user groups.)

Popularity Bias: In general, rating data is skewed toward more popular products: a few popular items receive the majority of the ratings, while the remaining items receive far fewer. Although popular items are popular for a reason, not every user is equally interested in them. Some users prefer less popular, niche items, and the recommender system should be able to serve those users as well.
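This long-tail skew is easy to see from raw rating data. A minimal sketch with a toy user-item rating matrix (any table like MovieLens would work the same way) measures how much of the total rating volume the most popular items capture:

```python
import numpy as np

# Toy rating matrix: rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 0, 0],
    [4, 5, 3, 0, 0],
    [5, 0, 0, 0, 0],
    [3, 4, 5, 1, 0],
    [5, 5, 0, 0, 2],
])

# Popularity of an item = number of users who rated it.
popularity = (ratings > 0).sum(axis=0)

# Share of all ratings captured by the top 20% most popular items
# (the "short head" of the long-tail distribution).
n_head = max(1, int(0.2 * ratings.shape[1]))
head_items = np.argsort(popularity)[::-1][:n_head]
head_share = popularity[head_items].sum() / popularity.sum()
print(popularity, head_share)
```

Here a single item (20% of the catalog) captures well over a third of all ratings, which is the skew the paper's popularity-bias analysis starts from.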

Calibration: This work shows that popularity bias, which is common in recommendation data, is one important factor leading to miscalibration in recommendations. Experiments on two real-world datasets show a connection between how strongly different user groups are affected by algorithmic popularity bias and their level of interest in popular items.

Metric used: Miscalibration, which measures how well a recommendation list reflects a user's true preferences; different algorithms can produce different degrees of miscalibration for different users.
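A common way to score miscalibration (following Steck's calibrated-recommendations formulation, which this line of work builds on) is the KL divergence between the genre distribution of a user's profile and that of their recommendation list. A minimal sketch, with the smoothing constant `alpha` chosen here for illustration:

```python
import numpy as np

def miscalibration(p, q, alpha=0.01):
    """KL divergence KL(p || q~) between the genre distribution of a
    user's profile (p) and of their recommendation list (q).
    q is smoothed toward p so the divergence stays finite."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    q_tilde = (1 - alpha) * q + alpha * p
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q_tilde[mask])))

# A user who watched 70% drama / 30% comedy but is recommended
# 90% drama / 10% comedy is miscalibrated; a matching list is not.
print(miscalibration([0.7, 0.3], [0.9, 0.1]))   # positive
print(miscalibration([0.7, 0.3], [0.7, 0.3]))   # zero
```

A score of zero means the recommendation list mirrors the user's profile exactly; the larger the score, the more the list drifts from the user's tastes.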

What is a good recommender system: One whose recommendations are relevant to the user, diverse, and that helps the user discover products they would not have found without the recommender system (novelty).
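One standard way to quantify the novelty aspect (not specific to this paper) is the average self-information of the recommended items, `-log2(p(i))`, where `p(i)` is the fraction of users who have rated item `i`; long-tail items score higher. A minimal sketch with illustrative counts:

```python
import numpy as np

def novelty(rec_items, popularity, n_users):
    """Average self-information -log2(p(i)) of a recommendation list,
    where p(i) is the fraction of users who rated item i."""
    p = popularity[rec_items] / n_users
    return float(np.mean(-np.log2(p)))

popularity = np.array([90, 60, 10, 2])              # users who rated each item
print(novelty(np.array([0, 1]), popularity, 100))   # popular list: low novelty
print(novelty(np.array([2, 3]), popularity, 100))   # long-tail list: high novelty
```

A recommender that only serves the short head scores poorly on this metric even when its recommendations are relevant.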

Datasets: Two datasets have been used: the MovieLens 1M dataset, which contains 1,000,209 anonymous ratings of approximately 3,900 movies by 6,040 users, and a 10-core Yahoo Movies dataset, which contains 173,676 ratings on 2,131 movies provided by 7,012 users.

Methods: Several recommendation algorithms are used, including user-based collaborative filtering (UserKNN), item-based collaborative filtering (ItemKNN), singular value decomposition (SVD++), and biased matrix factorization (BMF), covering both neighborhood-based and latent-factor models.
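For a sense of what the latent-factor models look like, here is a minimal biased matrix factorization (BMF) trained with SGD: `r_hat[u, i] = mu + b_u[u] + b_i[i] + P[u] @ Q[i]`. This is a generic sketch of the model family, not the paper's implementation, and all hyperparameters are illustrative:

```python
import numpy as np

def train_bmf(ratings, k=2, lr=0.01, reg=0.05, epochs=500, seed=0):
    """Biased matrix factorization via SGD over observed entries only:
    prediction = global mean + user bias + item bias + latent dot product."""
    rng = np.random.default_rng(seed)
    users, items = ratings.nonzero()            # observed (user, item) pairs
    mu = ratings[users, items].mean()           # global mean rating
    n_u, n_i = ratings.shape
    P = rng.normal(scale=0.1, size=(n_u, k))    # user latent factors
    Q = rng.normal(scale=0.1, size=(n_i, k))    # item latent factors
    b_u, b_i = np.zeros(n_u), np.zeros(n_i)
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = ratings[u, i] - (mu + b_u[u] + b_i[i] + P[u] @ Q[i])
            b_u[u] += lr * (err - reg * b_u[u])
            b_i[i] += lr * (err - reg * b_i[i])
            # update both factor vectors from their pre-update values
            P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                          Q[i] + lr * (err * P[u] - reg * Q[i]))
    return lambda u, i: mu + b_u[u] + b_i[i] + P[u] @ Q[i]

ratings = np.array([[5., 4., 0.], [4., 0., 2.], [0., 4., 1.]])
predict = train_bmf(ratings)
print(predict(0, 0))  # prediction for an observed entry (true rating 5)
```

The neighborhood-based baselines (UserKNN/ItemKNN) instead predict from rating similarities between users or items rather than learned factors.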

Results: The groups with the lowest average popularity (niche tastes) are affected the most by algorithmic popularity bias, and the higher a group's average popularity, the less it is affected. This shows how unevenly, and unfairly, popularity bias impacts different user groups.
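The grouping behind this result can be sketched as follows: rank users by the average popularity of the items in their profile, then split them into equal-sized groups (niche vs. mainstream tastes here; the labels and toy data are illustrative):

```python
import numpy as np

# Toy rating matrix: rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 0, 0],
    [4, 5, 3, 0, 0],
    [5, 4, 0, 0, 0],
    [0, 0, 4, 5, 3],
])
popularity = (ratings > 0).sum(axis=0)     # users who rated each item

# Average popularity of each user's profile.
avg_pop = np.array([popularity[row > 0].mean() for row in ratings])

# Split users into two equal-sized groups by profile popularity.
order = np.argsort(avg_pop)
niche, mainstream = order[:2], order[2:]
print(avg_pop, niche, mainstream)
```

Per-group miscalibration can then be averaged within each group, which is how the paper compares how hard popularity bias hits niche-taste users versus mainstream ones.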

Future work: Investigating how mitigating algorithmic popularity bias can help reduce miscalibration and unfairness.