Mentions multiple trust-based recommendation systems, including Personalized PageRank.
These are the relevant axioms: Symmetry, Positive Response, Transitivity, Independence of Irrelevant Stuff (IIS), and Neighborhood Consensus.
PPR satisfies Symmetry, Positive Response, and Transitivity, but not IIS or Neighborhood Consensus.
PPR essentially allows edges to carry weights, so some trust links are stronger than others and the random walk follows the more heavily weighted paths more frequently.
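A minimal sketch of what that weighted random walk could look like (my own illustration, not the paper's exact construction): power iteration for personalized PageRank over a weighted trust matrix, where teleportation always returns to the querying user. The function name, the `alpha` value, and the toy trust matrix are assumptions made for the example.

```python
import numpy as np

def personalized_pagerank(weights, source, alpha=0.85, tol=1e-10, max_iter=1000):
    """Illustrative personalized PageRank on a weighted trust graph.

    weights : (n, n) array, weights[i, j] = trust weight of edge i -> j
    source  : user for whom the scores are personalized
    alpha   : probability of following a trust edge; (1 - alpha) teleports
              back to the source user
    Returns a length-n score vector summing to 1.
    """
    n = weights.shape[0]
    # Row-normalize so each user's outgoing trust weights form a distribution.
    row_sums = weights.sum(axis=1, keepdims=True)
    transition = np.divide(weights, row_sums,
                           out=np.zeros_like(weights, dtype=float),
                           where=row_sums > 0)

    restart = np.zeros(n)
    restart[source] = 1.0

    scores = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Users with no outgoing trust restart the walk at the source.
        dangling_mass = scores[row_sums.ravel() == 0].sum()
        new_scores = alpha * (scores @ transition + dangling_mass * restart) \
                     + (1 - alpha) * restart
        if np.abs(new_scores - scores).sum() < tol:
            return new_scores
        scores = new_scores
    return scores

# Hypothetical 3-user example: user 0 trusts user 1 strongly and user 2 weakly.
W = np.array([[0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
print(personalized_pagerank(W, source=0))
```

Because user 0 puts three times more weight on the link to user 1 than to user 2, the walk visits user 1 more often, which is exactly the "stronger links are followed more frequently" behavior described above.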
https://www.scopus.com/record/display.uri?eid=2-s2.0-49249126404&origin=resultslist&sort=plf-f&src=s&nlo=&nlr=&nls=&sid=e475b6f38553cce759403177cd1e0967&sot=a&sdt=a&cluster=scosubjabbr%2c%22COMP%22%2ct&sl=53&s=TITLE-ABS-KEY%28user+reputation+system+search+ranking+%29&relpos=24&citeCnt=153&searchTerm=
High-quality, personalized recommendations are a key feature in many online systems. Since these systems often have explicit knowledge of social network structures, the recommendations may incorporate this information. This paper focuses on networks that represent trust and recommendation systems that incorporate these trust relationships. The goal of a trust-based recommendation system is to generate personalized recommendations by aggregating the opinions of other users in the trust network. In analogy to prior work on voting and ranking systems, we use the axiomatic approach from the theory of social choice. We develop a set of five natural axioms that a trust-based recommendation system might be expected to satisfy. Then, we show that no system can simultaneously satisfy all the axioms. However, for any subset of four of the five axioms we exhibit a recommendation system that satisfies those axioms. Next we consider various ways of weakening the axioms, one of which leads to a unique recommendation system based on random walks. We consider other recommendation systems, including systems based on personalized PageRank, majority of majorities, and minimum cuts, and search for alternative axiomatizations that uniquely characterize these systems. Finally, we determine which of these systems are incentive compatible, meaning that groups of agents interested in manipulating recommendations cannot induce others to share their opinion by lying about their votes or modifying their trust links. This is an important property for systems deployed in a monetized environment.
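For intuition about the random-walk style aggregation the abstract mentions, here is a rough Monte Carlo sketch of one way a walk over the trust graph could aggregate ±1 votes. The dict-based graph, the "stop at the first voter reached" rule, and all parameter values are assumptions for illustration only, not the specific system the paper characterizes.

```python
import random

def random_walk_recommendation(trust, votes, source, n_walks=2000, max_steps=100, seed=0):
    """Monte Carlo sketch of a random-walk vote aggregator (illustrative only).

    trust  : dict mapping each user to the list of users they trust
    votes  : dict mapping voters to +1 / -1 opinions; non-voters are absent
    source : user asking for a recommendation
    Returns the average vote reached by walks started at `source`,
    a score in [-1, 1] whose sign can be read as the recommendation.
    """
    rng = random.Random(seed)
    total, reached = 0, 0
    for _ in range(n_walks):
        node = source
        for _ in range(max_steps):
            if node in votes:            # stop as soon as a voter is reached
                total += votes[node]
                reached += 1
                break
            successors = trust.get(node, [])
            if not successors:           # dead end: discard this walk
                break
            node = rng.choice(successors)
    return total / reached if reached else 0.0

# Hypothetical example: "a" trusts "b" and "c", both of whom trust voter "d".
trust = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
votes = {"d": +1}
print(random_walk_recommendation(trust, votes, "a"))  # close to 1.0
```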