Tribler / tribler

Privacy enhanced BitTorrent client with P2P content discovery
https://www.tribler.org
GNU General Public License v3.0

Towards global Consensus on Trust #3357

Open synctext opened 6 years ago

synctext commented 6 years ago

Placeholder issue for master thesis work. End date firm: 16:00, Friday 31 August 2018, Cum Laude potential. Concrete idea:

jghms commented 6 years ago

Reading List (done)

Reading List (todo)

xoriole commented 6 years ago

https://people.csail.mit.edu/nickolai/papers/gilad-algorand-eprint.pdf

devos50 commented 6 years ago

Consensus ranking: http://www.research.ibm.com/cr_aerobatics/cr.html
Gossiping trust: https://www.skycoin.net/whitepapers/

jghms commented 6 years ago

https://www.aaai.org/Papers/AAAI/2004/AAAI04-110.pdf Consensus ranking (vote aggregation) like the IBM page (this is the paper the example references)

Studies the ranking of m alternatives by n voters, for example m athletes in a sports event who are ranked by n judges. The paper discusses problems like cycles in the majority graph. As far as I understand the problem I will be trying to solve, there are some major differences between that setting and mine. First of all, in the global trust problem not all nodes in the network are ranked by all nodes, so each node keeps a ranking of only a subset of the alternatives. Also, the judges in our problem are not external but are themselves part of the alternatives, so their own trust values should be accounted for.

jghms commented 6 years ago

I have created a Trello board to organize my work, track my progress and build a knowledge base. I keep track of all learnings, open questions and todos there. In the first sprint (until our next meeting on February 1) I will try to understand the problem and come up with a proper problem description.

jghms commented 6 years ago

A Computational Model of Trust and Reputation, a pioneering paper from 2002 (HICSS)

What does strategy-proof mean?

jghms commented 6 years ago

I created a GitHub repository for my thesis which I will use instead of the Trello board. I have created some projects (sprints) for the next few weeks; you can follow my progress on the problem description there.

Repo

synctext commented 6 years ago

Please read the pioneering work by Lik Mui at MIT, from 2001 onwards:

L. Mui, M. Mohtashemi, S. Verma (2004) "A Group and Reputation Model for the Emergence of Voluntarism in Open Source Development," Working Paper. (For citation, please contact lmui.) [pdf]
M. Mohtashemi & L. Mui (2003) "Evolution of Indirect Reciprocity by Social Information: The Role of Trust and Reputation in Evolution of Altruism," Journal of Theoretical Biology, Vol. 223/4, pp. 523-531. [pdf] [word]
L. Mui (2003) Computational Models of Trust and Reputation: Agents, Evolutionary Games, and Social Networks, Ph.D. Dissertation, Massachusetts Institute of Technology. [pdf]
L. Mui, A. Halberstadt & M. Mohtashemi (2003) "Evaluating Reputation in Multi-agent Systems," in Trust, Reputation, and Security: Theories and Practices, R. Falcone, S. Barber, L. Korba, M. Singh (Eds.), Springer-Verlag, Berlin, 2003, pp. 123-137. [pdf] [word]
L. Mui (2002) "Toward Realistic Models for Evolution of Cooperation," MIT LCS Memorandum. [PowerPoint] [postscript] [word]
L. Mui & M. Mohtashemi (2002) "Notions of Reputation in Multi-agent Systems: A Review," AAMAS 2002 and MIT LCS Memorandum.
L. Mui, M. Mohtashemi, A. Halberstadt (2002) "A Computational Model of Trust and Reputation," 35th Hawaii International Conference on System Science (HICSS). (Best Paper Nominee) [postscript] [word]
L. Mui, M. Mohtashemi, C. Ang, P. Szolovits, A. Halberstadt (2001) "Ratings in Distributed Systems: A Bayesian Approach," Workshop on Information Technologies and Systems (WITS 2001). (Best Paper Nominee) [postscript] [word]
L. Mui, M. Mohtashemi, C. Ang (2001) "A Probabilistic Rating Framework for Pervasive Computing," MIT Student Oxygen Workshop (SOW 2001). [postscript] [word]
L. Mui, C. Ang (2001) "Collaborative Sanctioning: Applications in Restaurant Recommendation based on Reputation," [poster] Autonomous Agents Conference (Agents 2001).
A. Halberstadt, L. Mui (2001) "Group and Reputation Modeling in Multi-Agent Systems," [poster] NASA Goddard/JPL Workshop on Radical Agent Concepts. [postscript] [word]

jghms commented 6 years ago

Found an interesting TED talk by Rachel Botsman about the sharing economy.

jghms commented 6 years ago

Progress meeting 1: Towards a problem definition

GOAL: Develop a direction for an exact problem definition for my Master thesis. Subgoal: Get feedback on my knowledge of the field, the goals and the current status of research.

Agenda:

Presentation

jghms commented 6 years ago

Reputation Systems: Facilitating Trust in Internet Interactions

A fun read on reputation systems; not much hard science in here, just an introduction, by the same authors as the article "Manipulation-resistant reputation systems" which I am currently studying. They define the minimum features of a reputation system as:

I think what we are trying to achieve is to build a reputation system that is somehow application agnostic. With Temporal PageRank we can get a personal view of the reputation of the peers we have interacted with and their peers, but what we are missing is a way to acquire a trustworthy reputation value for unrelated and unknown nodes. Also, we haven't found a way to distribute the reputation yet, which is one of the basic requirements of a reputation system according to the above definition and similar to what is discussed by Mui. Reputation is quite worthless if it is not spread, because the chance that one node interacts with the exact same node again in the future is very small in a network with many nodes.

synctext commented 6 years ago

REGRET: Reputation in gregarious societies (2001 paper). The Multi-Agent Systems community has worked on trust for 20+ years. However, these models are purely mathematical; they rarely get confronted with real-world matters that have existed for decades, like eBay feedback extortion fraud. Commercial systems are centralized and do not share any of their trust rankings. The open challenge is to connect the commercial and the academic world. In the Tribler team we consistently aim to bridge the gap between theory and reality; for trust models this gap has been wide and deep for 20 years.

Prior work by TUDelft faculty, ETAF: An Extended Trust Antecedents Framework for Trust Prediction. This work nicely decomposes the abstract trust construct into components which can be engineered. [figure: ETAF, extended trust antecedents framework]

synctext commented 6 years ago

Single Internet-of-Trust: a single long-lived identity with a certain reputation in one domain means you exist, transacted honestly in the past, and invested effort. We now envision a mechanism which re-uses this prior effort in another context. You start from zero, yet grow trust faster because you have something to lose: if you behave dishonestly in this second domain, it will have consequences. Transparency and consequences breed honesty, and honesty breeds trust. This can be expressed with game theory.

jghms commented 6 years ago

Commercial reputation systems: [image]

Research reputation systems: [image]

Reputation systems: A survey and taxonomy

synctext commented 6 years ago

"Consensus ranking" is a solid step forward for global trust. Spreading individual non-verifiable trust rankings is realistic to implement and deploy. A superior approach would be to have objective and verifiable rankings which are spread. We might need local aggregation and then global aggregation.

A mechanism to add in order to increase trust: each "signed trustranking record" is not a subjective opinion; upon request the underlying data can be shared and the calculation can be repeated. Others who repeated this calculation guarantee its correctness with their signature. This moves us from subjective opinions to objective observations with transparency. Each signed trustranking record is also a confession that you have transaction data: by the rules of the game you are required to produce it, or suffer the consequences. All part of "mechanism design" to enforce cooperative behavior.

Please create a fresh repo with all above systems in a clickable table, plus fresh papers like the Google reputation work. This becomes the existing-systems chapter of the thesis, rebooting this older Tribler work. Suggested format:

| Year | ShortSystemName | Full Paper Title |
| ---- | --------------- | ---------------- |

Research question: with partial knowledge of the individual trustranking records we can only estimate the global consensus rank. Can we mathematically prove a bound on the accuracy of our global estimate, given a certain degree of partial knowledge?

ToDo for the practical side: run a Python PageRank on a real public TrustChain data crawl.
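
A minimal sketch of such a PageRank computation, assuming the crawled blocks are available as (requester, responder, up, down) tuples (an illustrative schema, not the actual crawler output):

```python
# Minimal sketch: personalized PageRank over a crawled TrustChain
# transaction graph. The (requester, responder, up, down) tuple format
# is an assumed schema, not the actual crawler output.
import networkx as nx

def trust_ranking(blocks, seed_key):
    """Rank peers as seen from seed_key, weighting edges by bytes moved."""
    graph = nx.DiGraph()
    for requester, responder, up, down in blocks:
        if up > 0:    # requester uploaded to responder
            graph.add_edge(requester, responder, weight=up)
        if down > 0:  # responder uploaded to requester
            graph.add_edge(responder, requester, weight=down)
    # The personalization vector makes this the *local* view of seed_key.
    personalization = {node: float(node == seed_key) for node in graph}
    return nx.pagerank(graph, alpha=0.85,
                       personalization=personalization, weight="weight")
```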

jghms commented 6 years ago

First results of the crawl data analysis.

This plot shows the chain length for each of the crawled public keys. [plot: chain_length_by_sequence_number]

The shown length is the largest sequence number for each public key. If we instead count the number of blocks actually recorded for each public key, we get a much lower number, so not all blocks are in the crawl data. In fact, by summing all recorded blocks and comparing against the sum of the largest sequence numbers, we see that only about 45% of all transactions are in the crawl data.
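
For reproducibility, the coverage estimate can be computed along these lines (a sketch; the (public_key, sequence_number) pair format is an assumption):

```python
# Sketch of the coverage estimate above. `blocks` is assumed to be an
# iterable of (public_key, sequence_number) pairs from the crawl.
from collections import defaultdict

def crawl_coverage(blocks):
    seen = defaultdict(set)
    for public_key, seq in blocks:
        seen[public_key].add(seq)
    recorded = sum(len(seqs) for seqs in seen.values())
    # The largest sequence number is a lower bound on the chain length.
    implied = sum(max(seqs) for seqs in seen.values())
    return recorded / implied  # roughly 0.45 on the crawl discussed here
```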

jghms commented 6 years ago

Some considerations on reputation systems and aggregation

According to the above-mentioned taxonomy of reputation systems we can define the reputation system we are building for Tribler along the six main dimensions for the definition of reputation systems:

Questions derived from the above definition

jghms commented 6 years ago

I created a repository with the above-mentioned survey of research and commercial systems. I will keep extending it with more systems and rank aggregation algorithms.

synctext commented 6 years ago

Impressive progress

jghms commented 6 years ago

[Outcome of discussion]

- block withholding attack and obtaining full coverage of all transactions are open problems for creating reproducible TrustChain rankings
- bounded random walk
- take crawl rate (known universe fraction) as a parameter for the error on the aggregated rankings
- GPU accelerated trust calculations on mobile devices
- total byte count of crawled dataset to detect bugs and liars
- find suspiciously large transactions -> build filter -> graphics

jghms commented 6 years ago

Proposal for thesis goal: Verification and aggregation of (partial?) trust rankings in the TrustChain fabric. Specific steps:

  1. In order to verify the trust ranking of another node, one needs the evidence which was used for the calculation. Sharing one's state of all global chains is probably too much data; we could instead use only the sequence number of each node, which in our dataset would be 7138 address-integer pairs. Alternatively we could restrict the number of hops used for the random walk which creates the rankings. Or, if exact verification is too hard, we could do here what we would like to do in the next step for the rankings, namely come up with an error function depending on the collected evidence.
  2. Once we are able to verify the rankings of other nodes we can aggregate them towards an estimate of the global ranking: the more work we do to aggregate rankings, the more accurate the estimate will be. We need to choose an aggregation mechanism and estimate the error for a certain number of rankings aggregated. The complexity of the aggregation result might depend on the application: if all we want to know is whether it's safe to interact with a node, we could simply use a binary value as the outcome, or a discrete value (like a star rating). If we actually require a ranking as the aggregation outcome, it will be slightly more complicated.
  3. Finally we should design some experiments to check the real-world potential of this approach to restrict spamming in the Tribler application.
jghms commented 6 years ago

Some more results from the data analysis. Basic stats:

- 7138 nodes (unique public keys, requesters + responders)
- 1328229 blocks
- 227724 edges in the undirected transaction graph (unique public key pairs)

Unfiltered mean transaction size: there are two nodes with a huge transaction size, which is obviously not correct. [plot: mean_transaction_size]
This is the filtered version, with a maximum value of 999. [plot: mean_transaction_size]
Direct neighbors per node: [plot: edges_per_vertex]

synctext commented 6 years ago

We crawl blocks from other nodes; we have not observed all those transactions ourselves but have to trust that nodes send proper data.

Research idea to protect against the block-withholding attack in general (for our TrustChain context): all directly connected peers need to sign that they copied, verified, and actively disseminate your entire chain up to and including block X, a "block-verification statement". Nodes for which no peers provide signed block-verifications are given a low reputation. {Obviously Strategy-Proof Mechanisms for IPv8, @qstokkink}

Convert to honesty. At the micro-economic level the majority of (rational) players may be dishonest. A minority of always-honest players will follow the given rules. The dishonest players do not have many opportunities to interact, since honest players do not trust them. Honest players have numerous opportunities to interact and are thriving, as shown by their TrustChain interaction history. The next step is to move from local to global and achieve the desired emergent macro-economic effect: rational players convert to honesty. There is some forgiveness in the strategy of honest agents, and if dishonest players convert, they will receive a large payout in the reward function. They might elect to change their ways for a single round or longer. As shown by recent research, these ideas might be applied to our real economy, dubbed "Industrial Symbiotic Relations". The holy grail is to prove the stability of cooperation against attacks such as invasion by defectors.

Conclusion: we need to derive a strategy-proof mechanism for dissemination of interaction-outcomes, (e.g. build a tamper-proof collective memory) and design policies for local interactions from which at the global levels favorable conditions are created with evolutionary stability.

Now turning to scalability and trust-rank. For planetary-scale cooperation each actor is provided with bounded rationality, limited storage capacity, and a limited population it can observe. We define an unobserved set V of all n nodes.

Pairwise auditing. Alice requests a pairwise audit from Bob. They exchange their observed interaction records. It is determined whether their stored interaction graphs are equal, or who has a strictly larger subset of all global interactions. Assume Bob is the party with the smaller subset: he calculates the trust rankings, cryptographically signs them and forwards them to Alice. The aim is to prove the error bound for Alice to check such audits. Likewise Bob will also sign Alice's audit certificate and trust rankings. Combine this with the strategy-proof dissemination mechanism: after a signed audit you MUST make all interactions which underlie each trust rank available to others. A secondary effect of pairwise auditing is that Alice and Bob synchronise their records (e.g. 1987 stuff). After the merger they will store and disseminate the longest chain of everybody.
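
A rough sketch of this audit flow; the record store, `compute_trust_rankings` and `sign` are placeholder names, not existing IPv8/TrustChain calls:

```python
# Rough sketch of the pairwise audit flow described above. Records are
# modeled as sets of interaction records; all methods are placeholders.

def pairwise_audit(alice, bob):
    # 1. Exchange observed interaction records and merge them, so both
    #    parties end up with the union of their knowledge.
    merged = alice.records | bob.records
    # 2. The party that held the smaller subset computes the trust
    #    rankings over the merged evidence.
    smaller = min((alice, bob), key=lambda peer: len(peer.records))
    rankings = smaller.compute_trust_rankings(merged)
    alice.records = bob.records = merged
    # 3. Both sides sign the rankings, producing mutual audit
    #    certificates that can be spread to others.
    certificate = (alice.sign(rankings), bob.sign(rankings))
    # 4. By the dissemination rule, the evidence underlying each trust
    #    rank must now be made available upon request.
    return certificate, merged
```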

Once we are able to verify the rankings of other nodes we can aggregate them towards an estimate of the global ranking

We assume an audited top-N trust rank for each actor in the system. This then becomes a homework assignment, as there are several known solutions :-)

Leveraging Libtorrent. Both Alice and Bob use Libtorrent to seed their collection of TrustChain records. Try to use this trick to seed and exchange records. Maximize the overlap by sorting the single binary blob on public key and other efficiency measures you can think of.

You are hereby released from any need to integrate your work into Tribler, if and only if you can deliver a publishable theoretical proof as described above and a Pim-like or Kelong-like stand-alone validation experiment with real crawled TrustChain data...

jghms commented 6 years ago

Interaction diagrams for pairwise auditing and how it can be used for transactions. [diagrams: pairwise auditing, transaction protocol]

jghms commented 6 years ago

Evolution of cooperation in the context of Tribler In Tribler we have the prisoner’s dilemma when faced with the uploading and downloading of data. If I upload (cooperate) and you upload (cooperate) we both gain data for some cost of the uploading. If I only download (defect) I have no cost and still benefit if you upload (cooperate). One round would be one interaction between two nodes with some amount of data. Usually though we have multiple interactions, either between the same nodes or between different nodes, thus we are playing the iterated prisoner’s dilemma. According to Nowak there are 5 mechanisms for the evolution of cooperation: direct reciprocity, indirect reciprocity, spatial selection, group selection and kin selection. Direct reciprocity is when we both upload data to each other, however due to the asymmetry of data in file sharing this is not always straightforward. I can only upload data if you would like to download that data. This is where indirect reciprocity comes in: I upload data to you, you upload data to someone else and I hope that someone will also upload data to me.

Indirect reciprocity: “I help you, somebody helps me.”

Now the other forms are somewhat harder to map to the file-sharing example: spatial selection refers to the mechanism of neighborhood help; we would rather help people in our vicinity than far away. Group selection is a similar phenomenon, except it is related not to space but to affinity to a group. Finally, kin selection is a similar concept related to family connections. The goal is to create a Tribler society in which cooperation is an evolutionarily stable strategy. Indirect reciprocity is then the best suited mechanism to foster cooperation. In contrast to direct reciprocity, a node is not rewarded with cooperation in return but rather with an increase in reputation. Given some form of dissemination of the reputation, other nodes will see the reputation of the node and be more willing to cooperate if the reputation is high enough.

What happens to nodes that have a low reputation? How does reputation help us get benefit in the application? What is the incentive to gossip the reputation of other nodes? What happens if we do not tell the reputation of a node we know?

jghms commented 6 years ago

[Continued] Evolution of cooperation in the context of Tribler. There should be some incentive for nodes to gain more reputation. In a marketplace a seller might want to attain a higher reputation in order to have more customers buy from her rather than from competitors. In Tribler this does not work the same way; more people downloading is not an advantage. However, we could enforce that nodes only upload to nodes which have a higher reputation. This means that a node with a high reputation can obtain data from many nodes in the network, while nodes with little reputation can only interact with few. This might be difficult in practice because nodes start with little reputation and might not have access to all files before they upload some data, but uploading is only possible if other nodes are interested in the data a node can provide. Let's assume anyway that the incentive for gaining reputation is more data (privileges) being available.

Now, there seem to be multiple notions of reputation: on the one hand we have the (global) interaction reputation, which in the case of Tribler is the ratio between uploaded and downloaded data, and on the other hand we have the (local) aggregated rankings of multiple neighboring nodes. In order to properly utilize the reputation and have good incentives for obtaining reputation, we need to find a way to combine the two notions into one trust value that we can assign to the node. Given that we have an incentive to increase reputation and a way to combine the two notions, a node will try to improve its reputation. The global reputation is improved by uploading more data than downloading. The local reputation depends on the aggregation mechanism, but in a simple example the real-number scores a node gets from all signed trust rankings of its neighbors can be summed into one number. Given that the scores are positive values between 0 and 1, the reputation increases with each ranking a node is listed in. Rankings are obtained through pairwise auditing, and a node has a strong incentive to engage in it in order to increase its local reputation. The other nodes have the same incentive to respond to the pairwise auditing request.

Instead of just summing the scores, a different aggregation might use the rank of a node in the rankings provided by neighbors. This requires more analysis, but a problem with this approach might be that a node can increase its reputation by lowering that of nodes ranked above itself, creating an incentive to be untrustworthy.

Sub-questions to answer next:

- Is restricting interaction to nodes with higher reputation a valid option for an incentive to increase reputation?
- Is a weighted sum a valid option for combining the global and local reputation? (And can we find better names than global and local reputation?)
- Is there an incentive not to share trust rankings? Do we need to reward nodes for this?
- What happens when a node withholds blocks or double spends? Do we constantly need to check the validity of their chains after we had an interaction? If we constantly check the validity, we should be able to detect a double spend and then inform other auditors such that they can remove their rankings, practically removing the reputation of the misbehaving node.

jghms commented 6 years ago

Very interesting study on reputation in game theory applied to online settings, with a conclusion arguing whether or not it is practical to study reputation systems from a game-theoretic point of view.

Apart from the mentioned complications that the game-theoretic modeling of reputation normally involves, there are also some very practical arguments against it. Thus, there is a concern about how well the described models approximate the real-world settings. For instance, is the model with one long-run player and many one shot opponents a good approximation of a typical eBay scenario in which one seller sells items to many buyers or this interaction cannot be considered in isolation from other interactions of the seller and the buyers? Some other concerns deal with the discounting criterion and the exact values of the players' discount factors. In the case we need them, how can we discover the discount factors or what happens if two or more (long-run) players have different factors. These practical issues are not usually studied in the literature. Finally, there is a huge body of work proving that humans do not normally act as rational economic agents (Fehr and Gächter (2002), for example) raising the question of the practical usability of the game-theoretic modeling in general. More specifically, Bolton, Katok, and Ockenfels (2002) report on some peculiarities in how people treat reputation systems which can be again explained by their irrationality. In P2P networks the hard problem we just described becomes even considerably harder. An implicit assumption of the above discussion was that the feedback aggregation was done by a central authority (eBay, for example) that did not have any incentive to distort it in any other way. Unfortunately, this assumption is not valid in P2P networks where the aggregation can be performed only by the peers themselves. But then the peers may find it profitable to distort the feedback or simply to reenter the network under a different identifier and abuse their knowledge of the complete feedback in the interactions it covers. To the best of our knowledge, there is no game-theoretic model that includes this possibility.

The last sentences show that a peer-to-peer reputation system is quite difficult to model, and that in 2004 quite probably no model of such a system existed which also considered the possibility of malicious nodes trying to cheat.

jghms commented 6 years ago

Upload vs download analysis of nodes in the crawl dataset on an exponential scale (visualization by @devos50). [plot: freeriders]

jghms commented 6 years ago

Is a restriction of interaction to nodes with higher reputation a valid option for an incentive to increase reputation?

In the BarterCast paper two different policies are mentioned: the ban policy (if a node falls below a certain threshold we block it) and the rank policy (nodes are assigned BitTorrent slots by peers in order of their reputation).

And can we find a better name than global and local reputation?

BarterCast uses net contribution for the difference between upload and download, and calls the aggregation of all reputation values that a node gets from its peers the system reputation of a node.

Is there an incentive not to accept the audit?

This depends on how we aggregate rankings or scores. If we average the scores or ranks given by all the peers, then a node that is given a lower rank than its current average rank would do better not to accept the audit. On the other hand, if we just add up scores, any positive score increases the reputation, so there is no incentive not to accept the audit.

synctext commented 6 years ago

Indirect reciprocity: “I help you, somebody helps me.” Indirect reciprocity is then the best suited mechanism to foster cooperation. In contrast to direct reciprocity a node is not rewarded with cooperation in return but rather with an increase in reputation.

Yes. Please focus fully on facilitating indirect reciprocity and crafting a core building block for our long-term aim of devising a universal mechanism to create trust. Reputation is then a risk estimator which aims to accurately represent the probability of defection. We can model agents falling for short-term opportunistic behavior or calculatively following long-term interest, plus a new Computer Science phenomenon: fraudulently creating numerous cheap identities to boost reputation (the Sybil attack). This view has solid Nobel-level theoretical foundations: the Nobel prize winning framework for trust and risk taking within economic transactions, transaction cost economics.

jghms commented 6 years ago

Strategyproofness of voting rules. Lecture notes about the strategyproofness of voting rules from a course at CMU.

- Set of voters N = {1, ..., n}
- Set of alternatives A, |A| = m
- Each voter has a ranking over the alternatives
- x ≻_i y means that voter i prefers x to y
- A preference profile is a collection of all voters' rankings
- A voting rule is a function from preference profiles to alternatives

A voting rule is dictatorial if there is a voter who always gets his preferred alternative, regardless of the other voters' stated preferences. A voting rule is onto if any alternative can win, for some set of stated preferences. Theorem 1 (Gibbard-Satterthwaite): if m ≥ 3, then any voting rule that is strategyproof and onto is dictatorial.

This means that for our problem any vote aggregation that uses the ranks of nodes cannot be strategy-proof, because in our case we are voting on all the nodes known to the voter, which are more than three, and we don't want a dictator.

jghms commented 6 years ago

Mathematical framework. Okay, I did not get quite as far as I hoped, but I wrote up a short summary and some mathematical definitions. I took some of it from Pim Otte's thesis.

Also I found Chapter 27.5 of this book helpful. It explains that standard PageRank is not rank-strategyproof: a node can withhold edges from the graph in order to lower the rank of higher-ranked nodes, thereby increasing its own rank. I believe that the same will be true for Temporal PageRank.
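
The withholding effect is easy to reproduce on a toy graph with networkx (toy example, not Tribler data): node b withholds its outgoing edge to c, which concentrates its outflow on the a-b cycle and raises b's own score.

```python
# Toy illustration of the withholding claim: PageRank is not
# rank-strategyproof, so a node can profit from hiding an outgoing edge.
import networkx as nx

g = nx.DiGraph([("a", "b"), ("b", "a"), ("b", "c"), ("c", "a")])
honest = nx.pagerank(g)

g.remove_edge("b", "c")          # b withholds the edge towards c
withholding = nx.pagerank(g)

# b's score rises (roughly 0.39 -> 0.46 with the default damping 0.85),
# because its outflow now feeds the a<->b cycle exclusively.
print(honest["b"], withholding["b"])
```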

One thing we can investigate: by requiring nodes to disseminate a ranking together with the evidence that was used to calculate it, can we prove that you either provide the true ranking and true data, or you will be exposed as a fraud? This is not strictly strategyproof, but in practice it should lead to trustworthy behavior, because any kind of cheating will be discovered and lead to banishment.

jghms commented 6 years ago

The effect of communication on indirect reciprocity. In a network with thousands or even millions of nodes we are (almost) fully reliant on indirect reciprocity to foster cooperation. In game theory this is referred to as a random matching game (with a large number of players), see Chapter 5.1. While in direct reciprocity both nodes know the full history of their interactions and therefore know exactly the reputation of their counterpart, with indirect reciprocity we depend on the spreading of the reputation of nodes. This "spreading of reputation" is also referred to as "information processing", "observability" and "monitoring" (at least I think those are equivalent concepts) and has been researched as early as 1992. In the case of perfect communication we have a public history of all players. In that case the fairest player will be rewarded most; that is, in a peer-to-peer network, the node with the highest upload/download ratio will be the one all nodes upload to (in Tribler the situation is more complicated because of relaying). In the other extreme case of no communication (private histories only), it is quite probable that most nodes will defect, because the chance of retaliation for defection is small, just like the chance of reciprocity in the case of donation. In a real-life scenario an agent will not have all information, but through gossiping/crawling there will be a lot of communication.

If our goal is to facilitate indirect reciprocity, there should be a high level of communication, that is each node should have as much information as possible. With that information and some reputation(accounting) mechanism which is manipulation-proof and fair, we can provide stable cooperation in the network.

Communication can be guaranteed by giving an incentive to communicate: a node gains reputation through communicating, which makes communication a dominant strategy.

More communication leads to higher fairness in the system. How does the fairness of the overall system affect the stability of cooperation in the system? How fair is Temporal PageRank and can we increase the fairness by obtaining more data or aggregating rankings? What is fair behavior in the context of Tribler and reputation systems in general?

synctext commented 6 years ago

- Advogato dataset of trust analysis, with nice graphs to compare to, and 11 million Orkut users
- Analysis of trust from 2009 by TUDelft faculty member Joost Broekens
- Lovely book on repeated games and reputations

jghms commented 6 years ago

Cooperation among strangers with limited information about reputation has a nice introduction.

Dynamics of indirect reciprocity

In the paper "The Dynamics of Indirect Reciprocity"(1998) Nowak and Sigmund analyze the composition of a population of cooperators, defectors and discriminators in a variety of simulations of iterated donation games. In one experiment they analyze how cooperation evolves when the history of the other player is only available with a certain probability. Many other papers are available on the evolution of cooperation in different simulation settings. The introduction of this review mentions a lot of the literature in the field. A lot of papers focus on the evolution of cooperation in different network topologies.

Our ultimate goal is a universal system to create trust between relative strangers. We have shown that ranking algorithms like NetFlow and Temporal PageRank are to some extent Sybil-resistant, and our hypothesis is that a dissemination mechanism like the previously mentioned pairwise auditing increases indirect reciprocity. A reputation system should enable indirect reciprocity and lead to cooperation. Somehow we need to measure whether the dissemination actually increases cooperation.

Therefore I suggest setting up a simulation in which players (nodes) play a donation game as Nowak & Sigmund describe: an asymmetric version of the iterated donation game in which one agent is chosen as mover (the other as recipient) and decides to either give or keep a donation. In such a simulation we can model the information storage and communication as in the real case of our proposed decentralized reputation system (MultiChain, Temporal PageRank/NetFlow, pairwise auditing). Also, we can use the dataset that we crawled before to simulate the distribution of interactions between nodes. By running different experiments in this setting, e.g. with/without pairwise auditing, we can determine how cooperation in the overall network is affected and what strategies are evolutionarily stable, and we can validate a whole set of hypotheses. A minimal sketch of such a simulation is shown below.
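
A minimal sketch of the donation-game simulation, with image-score discriminators in the style of Nowak & Sigmund; all parameters and the reputation update rule are placeholder assumptions:

```python
# Minimal sketch of the proposed donation-game simulation. Strategies,
# parameters and the image-score update rule are placeholder assumptions.
import random

COST, BENEFIT, ROUNDS, N = 0.1, 1.0, 10_000, 100

agents = [{"strategy": random.choice(["coop", "defect", "discriminate"]),
           "image": 0, "payoff": 0.0} for _ in range(N)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    gives = (donor["strategy"] == "coop" or
             (donor["strategy"] == "discriminate" and recipient["image"] >= 0))
    if gives:
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT
        donor["image"] += 1   # observers record the good deed
    else:
        donor["image"] -= 1   # refusing to donate lowers the image score

# Compare mean payoff per strategy to see which behavior is favored.
for strategy in ("coop", "defect", "discriminate"):
    group = [a for a in agents if a["strategy"] == strategy]
    if group:
        print(strategy, sum(a["payoff"] for a in group) / len(group))
```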

synctext commented 6 years ago

Foundations of Trust Design Theory:

The above model is meticulously designed to scale to a large-scale digital word-of-mouth network encompassing billions of human agents. Model specification at a more detailed level:

0) Agents have interactions
1) A transaction is a public record of one interaction
2) A transaction can be described as an arbitrary string
3) Transactions between agents form a transaction graph
4) One transaction forms a single TxBlock
5) A TxBlock becomes irrefutable with the signatures of the involved agents
6) The contents of a TxBlock may be private or public
7) An append-only data structure is used to record TxBlocks
8) Each agent publicly shares their own TxBlocks and those of others
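
A sketch of points 4) through 8) as data structures; signature handling is schematic, and real TrustChain blocks carry more fields (hashes, chain links):

```python
# Sketch of the model specification above, not the actual block format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TxBlock:
    initiator: str                 # public key of one agent
    counterparty: str              # public key of the other agent
    payload: str                   # 2) arbitrary string describing the tx
    initiator_sig: Optional[str] = None
    counterparty_sig: Optional[str] = None

    def irrefutable(self) -> bool:
        # 5) a TxBlock is final once both involved agents have signed it
        return self.initiator_sig is not None and self.counterparty_sig is not None

@dataclass
class Chain:
    owner: str
    blocks: List[TxBlock] = field(default_factory=list)

    def append(self, block: TxBlock) -> None:
        # 7) append-only: blocks are only ever added, never rewritten
        self.blocks.append(block)
```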

Please do not try to solve the trust or indirect reciprocity problem within the context of a single master thesis. The above is suitable for an entire lab to focus on! By design :-) Suggested focus: pairwise audits that are provably resilient to manipulation under certain assumptions, plus a running-code part. Possible storyline: our protocol reduces the honesty problem and manipulation of records to the simpler problem of selecting honest witnesses from a population of agents. Incentive alignment: without witnesses you are always seen as a stranger. This protocol entangles information dissemination and integrity guarantees. Performance bounds: randomly selecting a certain number of witnesses from the overall population provides a bounded accuracy. Concrete topics such as the self-promotion attack by Alice: only using TrustChain records which make that agent look awesome.

jghms commented 6 years ago

What are the flaws of the current system that we can solve with pairwise auditing?

Situation without pairwise auditing

- There is no guarantee after how many interactions a fork in agent A's chain (double spend) can be detected. Some people who have the data will detect it and deny agent A all interactions.
- We don't need to accuse agent A of double spending if data is widely spread and every agent checks for double spending attacks before interacting.
- Simple block-withholding is easy to detect by checking the completeness of the chain.
- Agent B decides on a single trust ranking that he calculates. Even if agent A has many interactions with Sybils, agent B's ranking of A should not increase too much because of the Sybil-resistance of the ranking algorithm.
- Agent B can choose whether or not to include interactions in the calculation of the ranking and can therefore lie about the true rank of A in B's eyes, thus not giving agent A data that A would actually deserve.

Situation with pairwise auditing

Thus, pairwise auditing makes it possible for two agents to agree on their respective rankings of each other. This can either be used to combine multiple rankings: if agent A stores all data collected through pairwise audits in the past, A can prove to new nodes that other nodes ranked A in the past according to the state of A's private data set. Or we can use pairwise auditing before each interaction in order to prove that agent B acts fairly by keeping or giving resources.

Combining rankings does not necessarily make sense: it only does if the accuracy of the ranking increases strictly as more rankings are combined. The added benefit of combining rankings therefore depends on the type of ranking used.

As for the double-spending attack: the only agent that knows for sure whether agent A is double spending is the last node that interacted with agent A. Let's say we require each transaction to be signed by a third agent, namely the agent that last interacted with agent A. If that agent is asked twice, it means that agent A forked the chain. However, this does not work when the signing agent is offline.

jghms commented 6 years ago

Full network crawl plot, all nodes and interactions. [plot: bokeh_plot]

In this plot it's less obvious, but there are some interesting Sybil-like regions in which something like 30-50 nodes all have the same single node as their only interaction partner. I don't know why they exist yet.

jghms commented 6 years ago

Problem description

The manipulation resistance of our multi-chain architecture depends on the availability of data. If every agent could have all transaction data of all other agents, every agent could identify double-spenders, identify Sybil regions and allocate resources in a perfectly fair way. Unfortunately, such a situation is not possible in a distributed system. Until now, we expect agents to crawl data without any personal gain. The work it takes to crawl and respond to crawl requests is not balanced by any reward; game-theoretically speaking, crawling as a strategy is therefore dominated by not crawling. If agents decide not to crawl, they don't lose utility but use fewer of their own resources. On a network scale, however, less crawling leads to a lower probability of attacks being detected. We conclude that we need to design a mechanism that makes sharing of transaction data strategy-proof, that is, no advantage can be gained from not sharing data.

Research question

How to design a trust mechanism that makes sharing of transaction data a (weakly-)dominant strategy?

Solution -> Pairwise auditing

We envision an extension of the current TrustChain in which agents are by default considered untrustworthy. In order to be considered trustworthy, agents need to engage in auditing sessions. Auditing sessions happen in both directions, so both agents are auditor of the other and subject to the other's audit. The outcome of an audit is a trust endorsement by the auditor. The value of the endorsement describes the probability that the subject has a valid chain and valid interaction partners and is a "good" citizen, as seen by the auditor. For example, the value could be calculated as the connectivity of the subject's graph with that of the auditor. By providing more data (proof) to the auditor, the subject can increase that connectivity. If the subject is performing a Sybil attack there will be very little connectivity, because most of the Sybils have no interactions outside the Sybil region. Providing incomplete data of the total chain leads to a score of 0. The endorsements can then be used when applying the accounting policy. We only trust agents that have at least one endorsement; the amount of trust depends on the number and value of endorsements. Also, we can weight the endorsements by the trust we have in the auditors, to prevent Sybil endorsements. At some point agents have to balance their storage requirements against the added trust they can achieve.
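
One illustrative option for the connectivity-based endorsement value described above (a sketch; the exact metric is an open design choice):

```python
# Sketch of an endorsement value: how strongly the subject's transaction
# graph connects into the auditor's own view. Edges are pairs of public
# keys; this is one illustrative metric, not a fixed design.

def endorsement_value(subject_edges, auditor_edges):
    subject_nodes = {n for edge in subject_edges for n in edge}
    auditor_nodes = {n for edge in auditor_edges for n in edge}
    if not subject_nodes:
        return 0.0
    # Fraction of the subject's counterparties the auditor already knows.
    # A Sybil region scores low: its nodes rarely appear in honest views.
    overlap = subject_nodes & auditor_nodes
    return len(overlap) / len(subject_nodes)
```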

Experiment design

For our experiments we want to utilize real crawl data. We create the network of agents with their respective data. In this network we can perform some rounds of pairwise auditing. Then we can create different scenarios (a Sybil region, an agent with a valid history, etc.) and perform audits from different nodes of the real network. We should show that the real agent gains a good score during audits, while the Sybil region nodes perform badly until they have a certain number of edges to the real network.

jghms commented 6 years ago

With the experiments we need to prove the desirable properties of the mechanism:

synctext commented 6 years ago

List, discuss, and evaluate the various aspects to audit. Replaced witnesses with endorsements (a real improvement). Goal: ability to provide strong game-theoretic results. Resilience to Internet trolls? (e.g. Sybils) Explore the design space as a system architect, aka mechanism design. What should we audit (or allow all types...; a generic audit infrastructure)?

"We only trust agents that have at least one endorsement."

reformulate. No endorsements == no trust.

Are you selling an algorithm or a mechanism? ToDo: we prove that our blockchain dissemination mechanism is tamper-resilient, as it can detect all lying, cheating, and strategic manipulation in general. Do experiments prove something, or do you require a mathematical model with assumptions to prove claims?

Desired property: monotonic increase of score/karma/closeness with increased information sharing. However, missing blocks from the agent itself result in an immediate zero score, and missing blocks from direct interaction partners may have a disproportionate impact. Algorithmic enforcement of the duty to audit your direct interaction partners: we introduce the contagion property to enforce a responsibility to validate others, similar to the KYC check of banks. No forgiveness for sloppiness. They fail audits, you fail! For instance, no 2nd interaction without a personal audit.

Numerous zero scores could indicate a slander attack by Sybil nodes, or real nodes colluding and being evil.

Status:

btw: 15 April @vandenheuvel deadline. Delft-paid trip to Canada.

synctext commented 6 years ago

The audit-your-partners rule: a key mechanism is to require audits, ensuring we create an invisible hand of honesty. The rule: repeated interaction with an agent requires you to be able to produce your cryptographically signed, valid, correct, and honest audit certificate for this agent.

After an initial interaction you need to start checking people, and put either a positive or a negative endorsement online. Negative endorsements can simply be caused by failure to send information or non-responsiveness. Failure to produce an audit certificate for any agent you interacted with repeatedly will severely impact your trust level.
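
As a sketch, the rule could be enforced as a guard before every interaction; `audit_certificates` and the methods used here are hypothetical placeholders:

```python
# Sketch of the "audit your partners" rule as a pre-interaction guard.
# The local certificate store and its methods are hypothetical.

def may_interact(me, partner_key):
    if me.interaction_count(partner_key) == 0:
        return True                       # first contact is always allowed
    cert = me.audit_certificates.get(partner_key)
    # Repeated interaction requires a valid, signed audit certificate;
    # failing to produce one should severely impact your own trust level.
    return cert is not None and cert.is_valid()
```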

synctext commented 6 years ago

Obtaining honest ratings after transactions has proven to be a hard problem.

the paper "Rating inflation" reputation_inflation

A solution to marketplace information asymmetries is to have trading partners publicly rate each other post-transaction. Many have shown these ratings are effective; we show that their effectiveness deteriorates over time. The problem is that ratings are prone to inflation, with raters feeling pressure to leave “above average” ratings, which in turn pushes the average higher. This pressure stems from raters’ desire to not harm the rated seller. As the potential to harm is what makes ratings effective, reputation systems, as currently designed, sow the seeds of their own irrelevance.

jghms commented 6 years ago

Thanks for the suggestions!

I think I was misled when thinking about the monotonically increasing value as the way to make sharing strategy-proof. Instead, if pairwise audits are the only way of exchanging data and each endorsement block records the blocks exchanged, then we can see exactly which data an agent has at any point in time. When we require the full chain, we not only see the agent's interaction records but also the record-exchange records, so we know which data the agent possesses. So when we request data, we can check whether an agent shared all of it.

Experiments. The question of a key metric of success is a hard one, and I have been struggling to think of experiments that show that our dissemination works. The dissemination strategy itself does not protect against the Sybil attack or double spending. However, good dissemination of data increases the probability of finding such attacks with existing methods. In order to facilitate good dissemination we give agents an incentive to obtain endorsements and thereby exchange their data. So our accounting policy becomes a function of contributions and endorsements: only contributing is no longer enough, agents need to get endorsements as well.

We still have to make sure that simply creating endorsements by Sybils does not increase the overall trust beyond a certain threshold. This Sybil resistance can be achieved through algorithms analyzing the graph of contributions (e.g. NetFlow, Temporal PageRank), but also the graph of endorsements (the graph of contributions should be a subset of it if we apply the audit-your-partners policy). What we need is another element in the accounting policy: the probability of an agent being honest. Then, in order to obtain a higher score, an agent needs to contribute and share its data. For honest nodes this also increases the probability of being honest, but for agents that are part of a Sybil region, sharing more data will at some point decrease that probability, because the agent reveals more and more of the Sybil region. What remains is to find a good function for the probability of being honest, as Temporal PageRank does not give us such a value (maybe claimed contribution divided by Temporal PageRank could work?). Anyway, for now I will focus on writing some framework code and running some preparatory experiments:

If all these experiments work I have to think about how to achieve the Sybil resistance.

As for the audit-your-partners rule: at some point in the future we will need to make sure that interaction partners can comprehend why the other is not sharing data, or is sharing some specific amount. Basically, not sharing data can either be classic "defection" or it can be "cooperation" by not giving resources to people that don't deserve them. This comprehension is only possible if the requester can make the same calculation as the responder, and check that, given the data, the responder's calculation indeed entitles the requester to some amount of resources. Therefore I would suggest performing the audit before a transaction. This gives the requester an opportunity to prove her trustworthiness and makes sure both parties agree on the bandwidth and the amount of resources to share.

synctext commented 6 years ago

Trust research spans the sciences: economists, the iterated prisoner's dilemma (computer science), the evolution of cooperation (biology). Social scientists from the Stanford Innovation Lab are discussing mechanism design. Talk: "Harnessing Gaming Technology for Peace: A Conversation with Margarita Quihuis", discussing a social operating system based on religion, legal code, and mechanism design in general. The lab founder's 1999 paper uses credibility instead of trust. His 2002 paper with 4000+ citations, "Persuasive technology: using computers to change what we think and do":

Given the importance of credibility in computing products, the research on computer credibility is relatively small. To enhance knowledge about computers and credibility, we define key terms relating to computer credibility, synthesize the literature in this domain, and propose three new conceptual frameworks for better understanding the elements of computer credibility. To promote further research, we then offer two perspectives on what computer users evaluate when assessing credibility. We conclude by presenting a set of credibility-related terms that can serve in future research and evaluation endeavors.

Thesis framing: this thesis provides an elegant, simple, fast and secure algorithm for data certainty.

Now, only contributing is not enough but agents also need to get endorsements as well.

@jangerritharms took the idea to the next level: using it as the exclusive dissemination mechanism. By forcing peers to publicly disclose what they know about all others, it becomes trivial to detect forks, fakes, and fraud. Endorsements are on-chain. This is the key strategy of our lab goal of "Distributed Trust Design": change the rules of the game. Take this to the extreme and make it asymmetric, almost unfair, and virtually impossible to attack. Contributing is not sufficient; only count contributions endorsed by the community-at-large. Reciprocity as endorsement incentive.

ToDo:

qstokkink commented 6 years ago

Beyond game theory: cryptographically enforced two-way endorsements.

jghms commented 6 years ago

https://github.com/jangerritharms/aupair

jghms commented 6 years ago

Status update:

I have created a setup for experiments, with two different exchange mechanisms: crawling and pairwise endorsement.

In order to show that the code works in general I have run two tests with 50 nodes, in which each node requests one transaction per second with a random partner for 100 seconds.

Crawling: [plot: crawl_50]

Endorsement (pairwise record synchronization): [plot: sync_50]

Foreign blocks are half-blocks from other agents, received through the dissemination. Exchange blocks are the blocks created in my version of the pairwise endorsements. Unfortunately my computer does not have enough power to run this experiment fast enough for the pairwise endorsements (probably inefficient code). Actually, the information is distributed quite well if the crawl rate and transaction rate are equal and each crawl gets the full chain.

The first actual experiment I want to run is detecting double spenders: have one node perform a double spend and see how long it takes until it is detected on the network. In theory pairwise endorsements should be superior to crawling, because with crawling only the node's own chain is shared, while with pairwise endorsements all information about other nodes is shared as well, leading to faster spreading of information (but also more information to store for each node).

Can we calculate how much less safe TrustChain is compared to Bitcoin? Professor Epema asked this interesting question. Maybe we can reformulate it a little: in Bitcoin double spending is basically prevented, but how long does it take in TrustChain to detect a double spend? If nodes crawl each other, I believe we can calculate this with some simplifying assumptions; however, I am unsure whether the assumptions are reasonable, so maybe this consideration is useless.

Assumptions:

Let's say a double spend has been performed and each node crawls at a certain rate. Ignoring the attacking node for the moment, there are two nodes n_0 and n_1 which have the conflicting transactions, and the other n-2 nodes have neither.

One node is crawling: this node will detect the double spend once it has crawled both n_0 and n_1. This problem is similar to throwing both a 1 and a 6 with a normal die (n=6). The probability of the first success (either a 1 or a 6) is 2/6 and that of the second success is 1/6 (a 1 if the first success was a 6, and a 6 otherwise). If T is the number of throws it takes to get both a 1 and a 6, the expected value is E(T) = 1/(2/6) + 1/(1/6) = 3 + 6 = 9 = (3/2)·6. So in the case of crawling it takes approximately (3/2)n crawls for one node to detect the fraud.

All nodes are crawling: once any node finds the fraudulent transaction, he/she will inform the whole network. So the actual question is how long it takes for any node to detect the double spend. ...

A different approach is that nodes do not crawl the same node twice. Then the probability that a given node has found the double spend after m rounds of crawling, with n nodes on the network, is approximately (m/n)^2. The probability that any node finds the double spend within m crawls is 1-(1-(m/n)^2)^n. For example, with 1000 nodes on the network and each node making 10 requests we get 1-(1-(10/1000)^2)^1000 = 0.0952.
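
Both estimates are easy to sanity-check with a small Monte Carlo simulation (a sketch with uniform crawling; note that (m/n)^2 slightly overestimates the exact sampling probability m(m-1)/(n(n-1))):

```python
# Quick Monte Carlo check of the estimates above: expected crawls for a
# single node (with repetition) and the network-wide detection
# probability after m non-repeating crawls per node.
import random

def crawls_until_detection(n):
    # One node crawls uniformly at random (with repetition) until it has
    # seen both conflict holders 0 and 1; expectation ~ 3n/2.
    seen, steps = set(), 0
    while not {0, 1} <= seen:
        seen.add(random.randrange(n))
        steps += 1
    return steps

def network_detects(n, m):
    # Each of n nodes crawls m distinct peers; detection occurs if any
    # node sampled both conflict holders.
    return any({0, 1} <= set(random.sample(range(n), m)) for _ in range(n))

trials = 2_000
print(sum(crawls_until_detection(1_000) for _ in range(trials)) / trials)
# ~1500, i.e. (3/2)n for n = 1000
print(sum(network_detects(1_000, 10) for _ in range(trials)) / trials)
# ~0.086, slightly below the 0.0952 given by the (m/n)^2 approximation
```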

synctext commented 6 years ago

reformulating...

Endorsements enable audits of auditors! Endorsements are on-chain. The topic of double spending detection time distracts from the advantage of this algorithm. Reading: reputation mechanisms

All honest nodes have an incentive to detect fraudulent behavior, as they can become a victim. Transactions are disseminated (gossip or crawling) and each node will report a double spend to others. Assumption 1: a double spend transaction is, without a check, indistinguishable from other transactions. Assumption 2: a malicious agent cannot influence the dissemination of his transactions by honest agents (no-sabotage-of-spreading assumption). Only by detecting, for some chain, two different transactions with the same sequence number do you detect malicious behavior. Intuitively formulated: all honest nodes collaborate and scan all disseminated transactions in parallel for double spends, resulting in fast detection.

Collusion scenario: nodes can collude and agree to conduct double spending attacks and never disseminate these transactions (it is not trivial how to exploit this).

Is this a variant of the coupon collector problem?

With the introduction of locality, a bias towards local dissemination and endorsement means detection time disconnects from the network size. It scales! Instead of depending on n, it becomes constant, depending on the average number of interaction partners. (No?) progress with random interactions, but for fast-mixing graphs it's progress?

experiment brainstorm:

jghms commented 6 years ago

Lately I felt like I was getting lost in details and losing the overview of what we were trying to achieve. That's why I went back to the very first entry and retraced the steps of how I got to the latest considerations. I wrote it down and it ended up as a kind of summary of the theoretical considerations of the last months. I hope it's clear and that we can find any logical faults in the argumentation, so feedback is much appreciated.

How did we get here?

We started out with the goal to work towards a consensus on trust for the TrustChain architecture. We considered different concepts of trust, analyzed different reputation systems and researched ways of building a Sybil-resistant network. The goal was to combine multiple local rankings of the trustworthiness of agents to better approximate a global ranking of trustworthiness. But in order to combine rankings we should first check that the other agent is not lying about the ranking, so we need a verification step. The current architecture of TrustChain does not allow simple verification, because rankings are created based on the complete database instead of the agent's personal chain. In other words, if we see the trust ranking as a pure function of the agent's state (i.e. the same state of the agent returns the same ranking), TrustChain does not record the complete agent's state, because the state consists of the agent's transactions and the agent's knowledge of the network. Agents need to agree on a single view of the network in order to agree on each other's or a combined ranking. But without any "proof-of-knowledge" agents can decide not to share records of other agents, because any reputation those other agents gain in the eyes of the calculating agent through those records can put them above the agent that is sending the information. Hence it is not advantageous to share all information.

The solution is to make truthful sharing of records incentive compatible. We can achieve this by recording not only transactions on the personal chain but any change to the record database. We need to create a new block type (exchange block) which includes all records sent to and received from any other agent. If this structure is in place, the following becomes possible: given the complete chain of an agent, we have access to the same knowledge as that agent. However, storing all transactions of one agent in an exchange block would make that block huge. Instead, we can continue to store the actual blocks in an undisclosed database and only record a block exchange index in the exchange block. What we require is: given the complete chain of an agent, we should be able to create a block index which is an exact representation of the blocks in that agent's database. The block index of an agent lists all public keys known to that agent and, for each of those keys, the interval of known sequence numbers. A sketch of such an index is given below.
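
A sketch of the block index and of the difference computation two agents would perform during a record exchange (assuming gap-free knowledge per chain; field names are illustrative):

```python
# Sketch of a block index: for every known public key, the interval of
# known sequence numbers. The interval form assumes gap-free knowledge
# per chain, matching the "exact representation" requirement.

def block_index(blocks):
    """blocks: iterable of (public_key, sequence_number) pairs."""
    index = {}
    for public_key, seq in blocks:
        lo, hi = index.get(public_key, (seq, seq))
        index[public_key] = (min(lo, seq), max(hi, seq))
    return index

def missing_from(mine, theirs):
    # Ranges the other agent has and I lack: exactly what would be
    # requested during a record exchange.
    gaps = {}
    for key, (_, hi) in theirs.items():
        _, my_hi = mine.get(key, (0, 0))
        if hi > my_hi:
            gaps[key] = (my_hi + 1, hi)
    return gaps
```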

With the above extension to the TrustChain architecture we are able to enforce truthful sharing of an agent's complete knowledge (the agent's view of the network, i.e. the agent's database). Now the original goal of combining trust rankings becomes theoretically possible. Because the agent's complete knowledge, or state, is on the chain, we are able to verify calculated trust rankings at any time. We can now envision different mechanisms. For example, an agent periodically stores a ranking on-chain. Another agent can at any point in time, for example after finding the ranking during a data crawl, obtain the full chain of that agent up to the point of the ranking, then request all blocks that are not already in the verifying agent's database and, once received, check the ranking. Once checked, the ranking can be combined into a composite ranking.

Pairwise Record Exchange and Trustworthiness Endorsement

Instead of the above mechanism we can also consider a stronger one: Pairwise Record Exchange and Trustworthiness Endorsement (in the following referred to as PRETEND; maybe we could call it PRoTEcT, Pairwise Record Tender and Endorsement of Trustworthiness, the c in proteCt is missing but who cares, sounds cool and fits the context 😛). Before any transaction, two agents need to agree on the state of the network. To this end both exchange their full chains and, from the difference in their block indices, calculate and exchange any missing blocks (from their own chains, or from chains of other agents they know about), such that both end up with the exact same block database. Both also verify that the new state is still valid (no double spending, no missing blocks in chains) and that both indeed shared all their data. If both agree on the new database, they create a new exchange block and both sign it. After this the transaction can be conducted as normal. Instead of explicitly calculating and combining trust rankings we only combine the data that is the input to the ranking function. Which reputation function is applied, or whether a function is applied at all (applications other than reputation systems are possible), is not the concern of this mechanism. This way, every transaction appears exactly once in each calculated ranking; when combining multiple rankings directly, by contrast, it is hard to avoid weighting the same transactions multiple times.
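
A minimal sketch of the handshake, reusing the hypothetical `compute_block_index` from above and assuming agent objects with a `block_db` dict keyed by (public_key, sequence_number), a `blocks_not_in(index)` helper, state validation and signing (none of this is implemented anywhere yet):

```python
def pretend_handshake(a, b):
    """Run the PRETEND record exchange between agents `a` and `b`.

    Returns the doubly signed exchange block, or None if the parties
    cannot agree on a valid common state.
    """
    # 1. Derive both block indices from the exchanged full chains.
    index_a = compute_block_index(a.block_db)
    index_b = compute_block_index(b.block_db)
    # 2. Exchange exactly the blocks the index difference says are missing.
    a.block_db.update(b.blocks_not_in(index_a))
    b.block_db.update(a.blocks_not_in(index_b))
    # 3. Verify that both really shared everything (the indices now agree)
    #    and that the merged state is valid (no double spends, no gaps).
    if compute_block_index(a.block_db) != compute_block_index(b.block_db):
        return None
    if not (a.state_is_valid() and b.state_is_valid()):
        return None
    # 4. Record the exchange in a new exchange block signed by both parties.
    exchange_block = {"type": "exchange",
                      "index": compute_block_index(a.block_db)}
    return b.sign(a.sign(exchange_block))
```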

Verifying application-specific behavior. Depending on the application, the endorsement can entail additional application-specific verifications. For example, in the context of file-sharing, we might want to enforce that an agent does not upload to agents that already have a negative balance of more than 1 GB.
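
As a toy example of such a rule (`get_balance` is an assumed helper returning bytes uploaded minus bytes downloaded according to the given state):

```python
GIGABYTE = 1024 ** 3

def may_upload_to(state, peer_key, get_balance):
    """File-sharing endorsement rule: refuse to upload to peers that are
    already more than 1 GB in debt according to `state`."""
    return get_balance(state, peer_key) >= -GIGABYTE
```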

Verifying endorsements. A first obvious objection against this mechanism is that agents can falsely endorse each other's chains even though they are actually corrupt. However, this mechanism, together with the addition to the TrustChain fabric of storing the database index, not only allows us to check the correctness of a chain but also to verify the correctness of all endorsements themselves.

Overhead. The PRETEND mechanism introduces considerable overhead. First, with the mechanism described above we can only conduct transactions once we have the complete data of the other agent. This places high demands on the storage capacity of agents unless a mechanism for removing data from the database is added. Also, verifying all data of another agent can constrain the achievable throughput if agents run the verification on devices with little computing power.

Locality. These considerations lead to the possible introduction of locality. Agents would be more willing to interact with agents that have a similar state, because they need to perform less work during the record exchange. This might help us prevent Sybil attacks, because locality cannot be faked and the number of agents in one's vicinity is finite.

Bootstrapping. Another implication is that new agents need to obtain a large amount of data when joining the network before they can interact with incumbent agents.

Mathematical model

In order to give this work some theoretical depth we need to define a model that shows how our approach differs from existing solutions. While at first I mostly looked at reputation systems, the mechanism described above is not necessarily concerned with reputation; it only becomes so if an actual reputation function is applied. Also, the model by Seuken et al., which was used in the thesis of Pim Otte, is not one-to-one applicable to this situation, because we don't necessarily need the function describing the value of a transaction; many values or objects could be attached to each transaction. Rather, this work is concerned with decentralized storage, the dissemination of data, and methods to verify the correctness and completeness of that data. As such, our system is in the same ballpark as other cryptocurrencies and decentralized databases. I could not find a proper description of this type of system, so I tried to come up with my own definition, of which I am not sure whether it is proper.

Decentralized transaction record management mechanism (DTRMM). A DTRMM orders, validates and distributes records of transactions between agents in a transaction network without any central controlling entity. Its tasks can be defined as:

- ordering: imposing an (at least partial) order on transaction records,
- validation: checking records against the rules of the network,
- distribution: disseminating records among agents without a central entity.
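
One way to pin this down in code, purely as an interface sketch of my own (not an existing Tribler API):

```python
from abc import ABC, abstractmethod

class DTRMM(ABC):
    """Decentralized transaction record management mechanism (sketch)."""

    @abstractmethod
    def order(self, records):
        """Impose an (at least partial) order on transaction records."""

    @abstractmethod
    def validate(self, record, state):
        """Check a record against an agent's current state and the rules."""

    @abstractmethod
    def distribute(self, record):
        """Disseminate a record to other agents, without a central entity."""
```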

Somehow we should be able to fit TrustChain, Bitcoin and other cryptocurrencies into this definition and compare them, and possibly other systems that are not blockchain-based. Maybe next to the mechanism we should define the decentralized transaction record management system, which is the actual implementation of such a mechanism for a certain network. The system should also handle the actual storage of the data at each agent. Within such a system we can then define the state of an agent. In the model by Seuken et al. the subjective work graph is the input to the accounting mechanism, and most of their model is based on this graph representation. However, when not considering graph-based methods, it is probably more appropriate to talk about a state.

Agent state. Similar to the subjective work graph, the agent state is simply the agent's subjective view of the network given all knowledge the agent has obtained so far. So for an agent p, the agent's state is simply S_p = I_p, where I_p is a subset of I, the set of all transactions in the network.

Now we can define the observability property: an agent's state can either be observed or not. In TrustChain only a small subset of the state can be observed, namely the personal transactions of the agent. With the extension defined above, the state of the agent can be observed completely. By comparison, blockchains with global consensus like Bitcoin do not record each individual agent's state, but instead assume that agents have almost exactly the same state, so that up to some certainty the state can be inferred.
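
To summarize (my own tabulation of the claims above):

| System | Recorded per-agent state | Observability of agent state |
| --- | --- | --- |
| TrustChain (current) | personal transactions only | partial |
| TrustChain + exchange blocks | transactions and record exchanges | complete |
| Bitcoin-style global consensus | none; near-identical global state assumed | inferable up to some certainty |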

To be continued …

Experiments

Above we have claimed some desirable properties of our mechanism, which we need to demonstrate in experiments. The mechanism should offer good tools for verifying data, preventing agents from manipulating or withholding it. If any manipulation or withholding of data is found, or an agent did not behave properly given the application-specific rules, the agent should be ignored by other agents. A basic setup for multiple experiments could be: create some truthful agents (usually a majority) and some malicious agents and let them perform transactions. The experiment has a positive result if, after a certain time, the malicious agents have significant problems finding new partners for interactions and as a result complete significantly fewer transactions than the truthful agents. Specific experiments could instantiate different types of malicious agents.
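
A toy skeleton of such an experiment could look as follows; the agent objects with their `n_transactions` counter and `transact` method are placeholders for the actual simulation code, and `handshake` would be something like `pretend_handshake` from above:

```python
import random

def run_experiment(honest, malicious, rounds, handshake):
    """Let honest and malicious agents interact and compare their success."""
    agents = honest + malicious
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        # The handshake decides whether the pair transacts at all; agents
        # that manipulated or withheld data should fail it.
        if handshake(a, b) is not None:
            a.transact(b)
    # Positive result: malicious agents complete significantly fewer
    # transactions on average than truthful agents.
    avg = lambda group: sum(x.n_transactions for x in group) / len(group)
    return avg(honest), avg(malicious)
```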

The Sybil attack is also interesting in this context. In general the mechanism is not concerned with the Sybil attack itself, so in a first experiment we can show that a Sybil attack succeeds as long as all agents produce correct, signed blocks. However, in a second experiment we can show a certain application in which we keep track of the balance of agents and restrict the balance to a certain negative threshold value. Further, we change the genesis block to start with that negative balance, such that new agents first need to perform real work before consuming work. Because we track what agents know about other agents, agents cannot simply perform work for agents with a balance below the threshold: that would make their own behavior invalid and would lead to banishment. I feel this might limit the success of the Sybil attack.
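
Sketched as code, under the assumption of a helper `get_earned` that derives the net work a peer has performed from the given state:

```python
GIGABYTE = 1024 ** 3
GENESIS_BALANCE = -GIGABYTE   # assumed starting "debt" of a fresh identity
BALANCE_FLOOR = -GIGABYTE     # lowest balance at which consuming is allowed

def may_consume(state, peer_key, amount, get_earned):
    """Sybil counter-measure sketch: a fresh identity starts exactly at the
    floor, so it must perform real work before it can consume any.

    Because every agent's knowledge is on-chain, doing work for a peer that
    visibly fails this check is itself detectable misbehavior.
    """
    balance = GENESIS_BALANCE + get_earned(state, peer_key)
    return balance - amount >= BALANCE_FLOOR
```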

Finally, we need to show that even with the added verification overhead and storage requirements, the horizontal scalability of our system is not diminished. Another experiment should therefore measure the number of transactions in relation to the number of agents. With increasing chain lengths the transaction throughput might drop, so throughput should be examined not only with respect to the number of active agents but also with respect to the running time of the experiment (to which the chain lengths are proportional).
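
A corresponding measurement loop could be as simple as the following sketch; rerunning it with more agents, and again later in a long-running experiment (i.e. with longer chains), probes both dimensions:

```python
import random
import time

def measure_throughput(agents, duration, handshake):
    """Count completed transactions per second for a fixed set of agents."""
    done = 0
    start = time.time()
    while time.time() - start < duration:
        a, b = random.sample(agents, 2)
        if handshake(a, b) is not None:
            a.transact(b)
            done += 1
    return done / duration
```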