PROTECT - solid algorithm work for the core of a thesis.
A vital dataset we need, with real attacks on trust and reputation systems: http://users.eecs.northwestern.edu/~hxb0652/HaitaoXu_files/TWEB2017.pdf Abstract:
In online markets, a store’s reputation is closely tied to its profitability. Sellers’ desire to quickly achieve high reputation has fueled a profitable underground business, which operates as a specialized crowdsourcing marketplace and accumulates wealth by allowing online sellers to harness human laborers to conduct fake transactions for improving their stores’ reputations. We term such an underground market a seller-reputation-escalation (SRE) market. In this article, we investigate the impact of the SRE service on reputation escalation by performing in-depth measurements of the prevalence of the SRE service, the business model and market size of SRE markets, and the characteristics of sellers and offered laborers. To this end, we have infiltrated five SRE markets and studied their operations using daily data collection over a continuous period of two months. We identified more than 11 thousand online sellers posting at least 219,165 fake-purchase tasks on the five SRE markets. These transactions earned at least $46,438 in revenue for the five SRE markets, and the total value of merchandise involved exceeded $3,452,530. Our study demonstrates that online sellers using the SRE service can increase their stores’ reputations at least 10 times faster than legitimate ones while about 25% of them were visibly penalized. Even worse, we found a much stealthier and more hazardous service that can, within a single day, boost a seller’s reputation by such a degree that would require a legitimate seller at least a year to accomplish. Armed with our analysis of the operational characteristics of the underground economy, we offer some insights into potential mitigation strategies. Finally, we revisit the SRE ecosystem one year later to evaluate the latest dynamism of the SRE markets especially the statuses of the online stores once identified to launch fake transaction campaigns on the SRE markets. We observe that the SRE markets are not as active as they were one year ago and about 17% of the involved online stores become inaccessible likely because they have been forcibly shut down by the corresponding E-commerce marketplace for conducting fake transactions.
Again a short status update:
I ran a first simple experiment which is similar to the other experiments I plan to do. We have 20 honest agents that perform the PROTECT mechanism as described above (though still with a simplified verification). There is also one dishonest agent who withholds one block from his chain when engaging in an interaction (when the other agent starts the interaction, the dishonest agent behaves normally, which is why he still has some interactions). Other agents verify the chain, find it incomplete, and therefore reject the interaction with the dishonest agent. We let the agents interact for 100 seconds with approximately 1 transaction per agent per second. We see that the dishonest agent ends up with significantly fewer transactions than the honest agents. I will create more experiments like this, but they only prove the correctness of the mechanism. Besides this, there should also be a scalability experiment which I still need to design. For that I will probably also need to make the software work with gumby to run it on the DAS-5.
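To make that setup concrete, here is a minimal, self-contained sketch of the experiment logic. It is plain Python, not the actual Tribler/gumby code; `Agent`, `chain_is_complete` and the exact withholding rule are illustrative assumptions:

```python
import random

class Agent:
    def __init__(self, agent_id, dishonest=False):
        self.agent_id = agent_id
        self.dishonest = dishonest
        self.chain = []        # sequence numbers of the agent's own blocks
        self.completed = 0     # number of successful interactions

    def presented_chain(self):
        # A dishonest initiator withholds one block from the middle of its
        # chain, leaving a gap that verifying partners can detect.
        if self.dishonest and len(self.chain) > 2:
            return self.chain[:1] + self.chain[2:]
        return self.chain

    def append_block(self):
        self.chain.append(len(self.chain) + 1)

def chain_is_complete(chain):
    # Simplified verification: sequence numbers must be contiguous from 1.
    return chain == list(range(1, len(chain) + 1))

def interact(initiator, responder):
    if not chain_is_complete(initiator.presented_chain()):
        return False           # responder refuses the interaction
    for agent in (initiator, responder):
        agent.append_block()
        agent.completed += 1
    return True

agents = [Agent(i) for i in range(20)] + [Agent(20, dishonest=True)]
for agent in agents:
    agent.append_block()       # everybody starts with a genesis block

for _ in range(100):           # ~100 "seconds", one attempt per agent each
    for initiator in agents:
        responder = random.choice([a for a in agents if a is not initiator])
        interact(initiator, responder)

honest_avg = sum(a.completed for a in agents if not a.dishonest) / 20
print("honest average:", honest_avg, "dishonest:", agents[-1].completed)
```

Note that the dishonest agent still gains interactions when it responds rather than initiates, which matches the observation above that it is not cut off entirely, only significantly slowed down.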
Also, I have started writing my thesis.tex; here is the current pdf: report.pdf
@jangerritharms 's novel idea: devise a mechanism which forces agents to disclose the full historical state of their TrustChain database at each point in time. This enables the detection of historical dishonest behavior by allowing a replay of all historical states and decisions, for instance, helping other dishonest agents in the past.
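A rough sketch of what such a replay-based audit could look like. The data classes and field names here are purely illustrative assumptions, not part of the actual TrustChain code:

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass
class StateSnapshot:
    """What an agent claims its TrustChain database looked like at a moment."""
    time: int
    known_malicious: Set[str]     # agents it had already proven dishonest

@dataclass
class Decision:
    """An interaction decision the agent made at some moment."""
    time: int
    partner: str
    accepted: bool

def replay_audit(snapshots: List[StateSnapshot],
                 decisions: List[Decision]) -> List[Decision]:
    """Replay the disclosed historical states and flag every decision in which
    the agent accepted an interaction with a partner it already knew to be
    dishonest at that time."""
    flagged = []
    for decision in decisions:
        views = [s for s in snapshots if s.time <= decision.time]
        if not views:
            continue
        latest = max(views, key=lambda s: s.time)
        if decision.accepted and decision.partner in latest.known_malicious:
            flagged.append(decision)
    return flagged

snapshots = [StateSnapshot(time=5, known_malicious={"mallory"})]
decisions = [Decision(time=3, partner="mallory", accepted=True),   # not yet known
             Decision(time=9, partner="mallory", accepted=True)]   # knowingly helped
print(replay_audit(snapshots, decisions))   # flags only the second decision
```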
thesis storyline, especially the problem description:
For instance: this work has successfully created a specialized component with a proven ability to make trust systems better.
Demers, 1987 classic, "Epidemic algorithms for replicated database maintenance": first mention of anti-entropy syncing between agents. Simply sync the differences and watch how quickly everybody converges. Great math. Also the 1988 classic: "A survey of gossiping and broadcasting in communication networks"
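For intuition, a toy push-pull anti-entropy loop in the spirit of Demers et al.; the set-of-blocks model and names are just illustrative:

```python
import random

def anti_entropy_round(holdings):
    """One gossip round: every agent contacts a random peer and both end up
    with the union of their block sets (only the differences are shipped)."""
    for agent_blocks in holdings:
        peer_blocks = random.choice([h for h in holdings if h is not agent_blocks])
        agent_blocks.update(peer_blocks - agent_blocks)   # pull what I am missing
        peer_blocks.update(agent_blocks - peer_blocks)    # push what the peer misses

# 50 agents, each starting with one unique block.
holdings = [{f"block-{i}"} for i in range(50)]
rounds = 0
while any(len(blocks) < 50 for blocks in holdings):
    anti_entropy_round(holdings)
    rounds += 1
print(f"everybody has everything after {rounds} rounds")   # typically O(log n)
```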
Real attack datasets are now available; see these 8000 fake Twitter accounts: https://news.ycombinator.com/item?id=9170433
Is spam a form of sybil attack? #2547
(btw dishonest agents == dramatic red in thesis figures, green == good)
Updated report.pdf
Quick update: not much has happened since the last update. I mostly worked on the story for my thesis: how cooperation, reputation systems, Tribler, TrustChain and my work are related. I have also reworked my code a little. Before, I stored on the chain all public keys and sequence numbers that were exchanged; now only two hashes of the exchanges are stored. This has the same effect, because partners sign the hash of the data they sent, and having that hash later on means agents still cannot lie about which data they have. I also see now that anti-entropy is definitely not the only way to go. We can simply define how many endorsements are required per interaction (some ratio which can be enforced by the honest nodes).
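Roughly, the hash trick looks like this. This is a sketch with `hashlib`; the block-identifier format and the separate signature step are assumptions, not the real wire format:

```python
import hashlib
import json

def exchange_digest(blocks_sent):
    """Deterministic digest over the blocks shipped in one exchange, where each
    block is identified by a (public_key_hex, sequence_number) pair."""
    canonical = json.dumps(sorted(blocks_sent)).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical exchange: both partners compute the same digest over what was
# actually transferred, sign it, and record only the two digests on-chain
# instead of the full list of public keys and sequence numbers.
sent_to_partner = [("9f3a", 17), ("c0de", 5)]   # illustrative block identifiers
print(exchange_digest(sent_to_partner))
```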
Possible next steps:
Mental note: huge Trustchain dataset and picture; not (yet) in thesis
Good luck today Jan!
Status update report.pdf
This is the current thesis.tex. Next week I will be on vacation, so this is pretty much the work we can look at at the next meeting on Monday, 16 July. I have made a start on each chapter of the final report. I was not yet able to incorporate all feedback from the previous round, but that will be the next step.
remarks:
Worked mostly on the experiments, updated parts of the introduction and created an example case for my mechanism in chapter 5.
Comments
Final thesis on official repository
Thesis Abstract
Trust on the internet is largely facilitated by reputation systems on centralized online platforms. However, reports of data breaches and privacy issues on such platforms are becoming more frequent. We argue that only a decentralized trust system can enable a privacy-driven and fair future of the online economy. This requires a scalable system to record interactions and ensure the dissemination and consistency of records. We propose a mechanism that incentivizes agents to broadcast and verify each other's interaction records. The underlying architecture is TrustChain, a pairwise ledger designed for the scalable recording of transactions. In TrustChain each node records its transactions on a personal ledger. We extend this ledger with the recording of block exchanges. By making past information exchanges transparent to other agents, the knowledge state of each agent becomes public. This allows agents to discriminate based on the exchange behavior of other agents. It also leads agents to verify potential partners, as transactions with knowingly malicious users lead to proof-of-fraud. We formally analyze the recording of exchanges and show that free-riding nodes that do not exchange or verify can be detected. The results are confirmed with experiments on an open-source implementation that we provide.
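As a rough illustration of the free-rider check claimed in the abstract; the block schema and the threshold are assumptions for the sketch, not the thesis code:

```python
def looks_like_free_rider(chain, min_exchanges_per_transaction=1.0):
    """Flag an agent whose personal chain records too few exchange blocks
    relative to transaction blocks; honest agents that broadcast and verify
    accumulate exchange records alongside their transactions."""
    transactions = sum(1 for block in chain if block["type"] == "transaction")
    exchanges = sum(1 for block in chain if block["type"] == "exchange")
    if transactions == 0:
        return False
    return exchanges / transactions < min_exchanges_per_transaction

honest_chain = [{"type": "transaction"}, {"type": "exchange"}] * 5
lazy_chain = [{"type": "transaction"}] * 10 + [{"type": "exchange"}]
print(looks_like_free_rider(honest_chain), looks_like_free_rider(lazy_chain))
# -> False True
```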
Related work: Mustafa Al-Bassam seems to have re-invented our approach in 2019. His work and Celestia combine our 2018 work and prior 2017 Chico implementation with our 2016 bottom-up consensus model. Brilliant wording: "virtual side-chains", we just called it a scalable ledger :smile: Better marketing and acquisition of funds than we managed!
2004 background: "Total Order Broadcast and Multicast Algorithms: Taxonomy and Survey"
Placeholder issue for master's thesis work. Firm end date: 16:00, Friday 31 August 2018. Cum Laude potential. Concrete idea: