Tribler / tribler

Privacy enhanced BitTorrent client with P2P content discovery
https://www.tribler.org
GNU General Public License v3.0

decentralised non-profit payment services #4044

Closed by synctext 3 years ago

synctext commented 5 years ago

Trustchain resilience:

offline, non-bank Eurotokens.

synctext commented 5 years ago

For this 2nd meeting, please understand deeply:

plus read Martijn's prior PSD2 code and OTP code

devos50 commented 5 years ago

Please talk to me to get access to this code. Also, please give your GitHub username so we can assign you to the ticket/add you to the relevant GitHub teams.

synctext commented 5 years ago

Please read all articles in Stannet's issue, especially the "Harvard hitting time" paper: https://github.com/Tribler/tribler/issues/2805#issuecomment-283300353 plus upload your meeting discussion notes here.

inconspicuous-username commented 5 years ago

Ok!

inconspicuous-username commented 5 years ago

[Input for 1st meeting.pdf](https://github.com/Tribler/tribler/files/2608134/Input.for.1st.meeting.pdf)

synctext commented 5 years ago

https://www.forbes.com/sites/geraldfenech/2018/11/23/blockchain-3-0-and-coti-next-generation-technology-without-mining/amp/ "We achieve Trustchain protocol security in our network by introducing DSP (double spend prevention) Nodes and applying trust to every participant in the network. With COTI, double spending attacks are not possible at all, so we are much safer than any existing blockchain protocol."

synctext commented 5 years ago

As discussed; recommended reading: https://scholar.google.com/scholar?q=sybil+attack+attack+edges. SybilInfer, SybilLimit, SybilGuard, SybilDefender, etc.

synctext commented 5 years ago

link to 8000 fake twitter accounts and .zip download

Idea: always duplicate the last N trustchain records of your counterparty (N = 20 or so). A more advanced version uses deterministically assigned witnesses that hold backups. System load is reduced when only a peer lookup is required, instead of block replication plus lookup; simply asking 20 counterparties and 20 witnesses is then sufficient.
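As a rough illustration of the "deterministically assigned witnesses" idea, here's a minimal Python sketch (the function name and inputs are hypothetical, not Tribler code): derive a peer's witness set by repeatedly hashing its public key, so every node can independently recompute who is responsible for backing up whom.

```python
import hashlib

def deterministic_witnesses(public_key: bytes, peer_list: list, n: int = 20) -> list:
    """Pick n witnesses for a peer by hashing its public key with a counter.

    The same inputs always yield the same witness set, so any node can
    recompute which peers hold backups of this peer's records."""
    chosen = []
    candidates = sorted(peer_list)  # canonical order so all nodes agree
    counter = 0
    while len(chosen) < min(n, len(candidates)):
        digest = hashlib.sha256(public_key + counter.to_bytes(4, "big")).digest()
        pick = candidates[int.from_bytes(digest[:8], "big") % len(candidates)]
        if pick not in chosen:
            chosen.append(pick)
        counter += 1
    return chosen
```

This only sketches the assignment; a real scheme would additionally need to handle churn among the candidates, e.g. by re-deriving the set against the current peer list.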

devos50 commented 5 years ago

Preventing forks is already being worked on: https://github.com/Tribler/py-ipv8/pull/349

devos50 commented 5 years ago

Somewhat related paper: Secure multiparty PageRank algorithm for collaborative fraud detection

A collaboration between ING, ABN AMRO, TNO, and CWI. Their work was accepted at FC'19.

synctext commented 5 years ago

1000^4 problem. 90% offline churn issue. Idea discussed: integrate block discovery, trust calculations, and transactions.

inconspicuous-username commented 5 years ago

DISCLAIMER: This is merely an improvement upon PageRank and does not claim to be Sybil-resistant, to solve double-spending or forking problems, or to calculate a perfectly reliable trustworthiness score (but neither does PageRank).

Abstract: The proposed algorithm combines block discovery and PageRank calculation and makes both an integral part of executing transactions. Its implementation only requires a single field to be added to the trustchain blocks.

Most of the time* it obtains the same result that the PageRank algorithm would converge to if run for an infinite number of random walks.

The algorithm is truly distributed and only requires interaction between the requesting and supplying party. It furthermore only requires interaction when a party tries to initiate a transaction with another party.

The algorithm comes at the cost of an average runtime complexity of O(E[degree]) and an average space complexity of O(E[degree]²)**.

The algorithm can easily be adapted to produce results equivalent to the PageRank algorithm with four-hop random walks.

TL;DR: A short video explaining the algorithm can be found here

A more detailed description of the algorithm can be found here


*The exact conditions under which the score is incorrect are described in the section 'Accuracy'.

**Please be aware that E[degree] is the average number of unique one-hop neighbours, not the total number of blocks you created with your neighbours. At the moment of writing we had access to 4,000,000 blocks. In the graph created from these blocks, the nodes had an average degree of 41, with the 95th percentile being 75 or lower.

inconspicuous-username commented 5 years ago

The numbers mentioned above are derived from a graph generated from the 4,000,000 blocks I got from Martijn. Below you'll find a histogram of the node degrees.

histogram
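For reference, degree numbers like these can be reproduced from a raw block dump with a short script. This is a hypothetical sketch assuming each block reduces to a (public_key, link_public_key) pair; it counts unique one-hop neighbours, as the footnote above prescribes.

```python
from collections import Counter

def degree_stats(blocks):
    """blocks: iterable of (public_key, link_public_key) pairs, one per block.

    Degree counts *unique* one-hop neighbours, not the number of blocks
    shared with them; returns (average degree, 95th percentile, histogram)."""
    neighbours = {}
    for a, b in blocks:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    degrees = sorted(len(s) for s in neighbours.values())
    avg = sum(degrees) / len(degrees)
    p95 = degrees[int(0.95 * (len(degrees) - 1))]
    return avg, p95, Counter(degrees)
```

The returned Counter maps each degree to how many nodes have it, which is exactly the data behind a histogram like the one above.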

synctext commented 5 years ago

Check-and-drop storage policy

synctext commented 5 years ago

Progress meeting notes:

inconspicuous-username commented 5 years ago

attached you'll find the Adobe After Effects project.

reactive breadth-first pagrank folder.zip

inconspicuous-username commented 5 years ago

First sketch of a possible stand-alone trustchain framework: sketch of possible framework

synctext commented 5 years ago

Current thesis idea: the generic everything-included Trustchain framework. Enterprise grade? Half-blocks or block proposals are ugly. Current idea: first propose a transaction, obtain agreement, then publish. Problem: who is the last signer? Related work: Fair Exchange Signature Schemes, "beyond 10k/second transactions". The alternative to half-blocks has superior verification properties, so it's possible to demonstrate superiority using performance analysis. Idea: formal specs? Idea: formal proof of security, assuming at least 1 reliable random witness?

10 ECTS worth of work: one draft thesis chapter called "related work", then please a problem description.

inconspicuous-username commented 5 years ago

As requested I published the experimental repository. I've also created some documentation, which covers both the code and the design decisions/ideas.

inconspicuous-username commented 5 years ago

Plan of action for the coming days:

The goal is to do some preliminary performance analysis.

inconspicuous-username commented 5 years ago

Quick update on storage experiments

Redis (with persistence):

no. read/writes = 100,000 @ 1024 bytes per read/write

Single client:
10,000 writes/second
10,000 reads/second

Multiple clients:
14,000 writes/second
15,000 reads/second

Redis read/write performance is constant with respect to DB size (as long as it fits in RAM).

SQLite writes:

no. writes = 1,000,000 @ 1024 bytes per write

When committing at every write:
5 writes/second

Commit every 100 writes:
590 writes/second

Commit every 1,000 writes:
4,500 writes/second

Commit every 10,000 writes:
16,000 writes/second

SQLite random reads:

1024 bytes per entry:

db size = 1,000 samples
10,000 reads/second

db size = 10,000 samples
9,000 reads/second

db size = 100,000 samples
8,000 reads/second

db size = 200,000 samples
7,500 reads/second

db size = 500,000 samples
6,900 reads/second
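The commit-interval effect is easy to reproduce: each SQLite commit forces a sync to disk, so batching many inserts per commit amortizes that cost. A minimal benchmark sketch along these lines (not the actual experiment code):

```python
import os
import sqlite3
import tempfile
import time

def write_benchmark(n_writes: int, commit_every: int) -> float:
    """Insert n_writes rows of 1024 bytes, committing every `commit_every`
    inserts; returns the achieved writes/second."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE blocks (id INTEGER PRIMARY KEY, payload BLOB)")
    payload = os.urandom(1024)
    start = time.perf_counter()
    for i in range(n_writes):
        conn.execute("INSERT INTO blocks (payload) VALUES (?)", (payload,))
        if (i + 1) % commit_every == 0:
            conn.commit()  # each commit forces a sync to disk
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    os.remove(path)
    return n_writes / elapsed
```

Absolute numbers depend heavily on the disk and filesystem, but the ratio between small and large commit intervals should show the same orders-of-magnitude gap as the figures above.
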
inconspicuous-username commented 5 years ago

Quick update on the Trustchain experiment:

 --- Genesis blocks ---
create_genesis_block took 0.153 ms
verify_genesis_block took 0.372 ms

 --- Block creation ---
propose_tx_block took 0.145 ms
verify_tx_proposal_block took 0.360 ms
sign_proposal_block took 0.143 ms

total block creation time 0.649 ms

 --- Block verification ---
verify_tx_block took 0.716 ms

At this rate we can create 1541 blocks/second and verify 1397 blocks/second. (This is without the communication overhead, which will certainly dominate the creation and verification times.)

synctext commented 5 years ago

Big ambitions:

inconspicuous-username commented 5 years ago

Attached you'll find one of the documents discussed this meeting.

inconspicuous-username commented 5 years ago

Theoretical upper bound on a malicious transaction succeeding when using the newly designed witness selection protocol: image

inconspicuous-username commented 5 years ago

Graph of the formulas posted above: the X-axis is the portion of the network that is malicious, the Y-axis is the minimum number of witnesses required to ensure a certain level of security. The different lines are for different levels of security (<1%, <0.1%, <0.01% and <0.001% chance of a malicious trade succeeding). image Mind you, this is only for a single transaction; if the network requires that both the transaction and the last X blocks be verified, the level of security increases exponentially.
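These curves can be sanity-checked numerically. Assuming witnesses are drawn uniformly at random and an inadmissible transaction passes only when strictly more than two thirds of the witness set is malicious, the success probability is a binomial tail. A sketch (the function name is mine, not from the protocol):

```python
from math import comb

def fraud_probability(n_witnesses: int, malicious_fraction: float) -> float:
    """Probability that strictly more than two thirds of a uniformly drawn
    witness set is malicious, i.e. the assumed condition for an
    inadmissible transaction to be accepted."""
    threshold = int(2 * n_witnesses / 3)  # largest count that is still safe
    return sum(
        comb(n_witnesses, k)
        * malicious_fraction ** k
        * (1 - malicious_fraction) ** (n_witnesses - k)
        for k in range(threshold + 1, n_witnesses + 1)
    )
```

With 20 witnesses and a 20% malicious network this tail is on the order of 10^-6, consistent with the <0.001% curve; sweeping n_witnesses until the tail drops below a target reproduces the Y-axis of the graph.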

synctext commented 5 years ago

Please order something like 3 different models of these Android hardware devices and we'll give you a full refund. http://m.handheldposterminal.com/sale-9883763-handheld-touch-screen-pos-terminal-wireless-credit-card-machine-with-sim-card.html?

inconspicuous-username commented 5 years ago

This is the first draft of a more formal description of my fair witness selection protocol. Both content related and style related comments are highly appreciated.

Fair Witness Selection Protocol - draft 1

Abstract: The Fair Witness Selection Protocol is a novel witness selection scheme that can be used to create a truly scalable distributed ledger. The proposed scheme offers an upper bound of a 0.001% chance of a fraudulent transaction succeeding while only requiring 20 witnesses (if <1/5th of the network is malicious), while also guaranteeing liveness. Because the number of witnesses required remains constant with respect to the network size, it opens the door to truly scalable DLT. It does so by relying on widely available and well-understood cryptographic primitives. This paper provides a mathematical proof of the correctness and liveness of the proposed scheme, and suggests values for the security parameters under different network assumptions.

inconspicuous-username commented 5 years ago

> Please order something like 3 different models of these Android hardware devices and we'll give you a full refund. http://m.handheldposterminal.com/sale-9883763-handheld-touch-screen-pos-terminal-wireless-credit-card-machine-with-sim-card.html?

  1. Android-based device. I prefer ordering it from Dutch/European webshops to ensure compatibility with our bank cards.
  2. Stand-alone unit controlled via Bluetooth. In my opinion this has a higher chance of succeeding, as it only requires us to reverse engineer the communication (i.e. opening the APK to see what messages it sends), whereas for number 1 the communication is probably embedded at a lower level into the device/OS.
  3. Stand-alone unit with Bluetooth. Same reasons as above.

grimadas commented 5 years ago

Some formal comments on the first draft:

synctext commented 5 years ago

Comments:

Experimental progress:

ToDo:

inconspicuous-username commented 5 years ago

Every node is set up to sleep for a random time (between 0 and 1 seconds) and then randomly select a node. With an average sleep of 0.5 seconds, every node creates about 2 transactions per second.

Test network without FWSP on an I7 23600k with 16GB:

50 nodes give 101 transactions/second
100 nodes give 201 transactions/second
200 nodes give 401 transactions/second
300 nodes give 591 transactions/second
400 nodes give 645 transactions/second
500 nodes give 634 transactions/second

The saturation and decline in transactions are purely due to computational bottlenecks (this is for 400 nodes): cpu usages

synctext commented 5 years ago

Progress of thesis chapters and coding:

inconspicuous-username commented 5 years ago

Link to thesis: Link

synctext commented 5 years ago

Progress of thesis meeting:

inconspicuous-username commented 5 years ago

I created a simple web interface similar to an online banking interface. The node can be started with a command that also starts a REST API endpoint, which the webpage uses to retrieve the current balance and request the last X transactions.

Home page: Overview Panel

New transfer screen: image

Transaction overview: image

inconspicuous-username commented 5 years ago

As before, every node is set up to sleep for a random time (between 0 and 1 seconds) and then randomly select a node; with an average sleep of 0.5 seconds every node creates about 2 transactions per second.

Test network without FWSP on an I7 23600k with 16GB:

50 nodes give 101 transactions/second
100 nodes give 201 transactions/second
200 nodes give 401 transactions/second
300 nodes give 591 transactions/second
400 nodes give 645 transactions/second
500 nodes give 634 transactions/second

Some optimizations have been done on the communication protocol, reducing the best-case message count from 6 to 3. With this optimization the maximum achievable throughput increased from 634 to 861 transactions per second (a 36% increase!).

inconspicuous-username commented 5 years ago

Quick update: fixed-partner throughput (every node picks one partner and only transacts with that partner). image

devos50 commented 5 years ago

@JetseBrouwer it might be interesting to see how throughput is affected when one chooses a random transaction counterparty.

inconspicuous-username commented 5 years ago

@devos50 If the counterparty is chosen at random, there exists a possibility that it's already busy. The probability of the other party being free increases if they all use a random back-off time; this back-off time, however, results in nodes sitting idle. The maximum I've reached with random partners so far was 920 tx/s, but with significantly more nodes (this was before certain optimizations though). Tomorrow I'll arrange some new test results! Another option worth considering is the use of a Mild back-off algorithm, so it automatically settles around an optimum.
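The busy-counterparty contention described here can be sketched as a simple random back-off retry loop (all names hypothetical, not the actual experiment code): colliding nodes sleep a random, growing amount before retrying, which spreads them out in time.

```python
import random
import time

def transact_with_backoff(pick_partner, try_transact,
                          max_attempts: int = 10, base_delay: float = 0.01):
    """Keep picking random partners; on a 'busy' refusal, sleep a random
    back-off (with a growing upper bound) before retrying.

    pick_partner: () -> partner; try_transact: partner -> bool (False = busy).
    Returns the partner transacted with, or None if all attempts failed."""
    for attempt in range(max_attempts):
        partner = pick_partner()
        if try_transact(partner):
            return partner
        # random back-off: colliding nodes desynchronize instead of retrying
        # in lockstep
        time.sleep(random.uniform(0, base_delay * (attempt + 1)))
    return None
```

A Mild-style scheme would additionally shrink the back-off bound after successes instead of resetting it, letting the network settle around an operating point.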

devos50 commented 5 years ago

> If the counterparty is chosen at random, there exists a possibility that it's already busy.

True, but this is probably closer to a realistic workload. It is of course still interesting to explore the upper transaction throughput in the situation where transaction counterparties are fixed and pre-determined.

I also wonder about the impact of the transaction size on throughput and (global) communication cost. Might be worth exploring, depending on your scope and time left πŸ‘

inconspicuous-username commented 5 years ago

Quick test with random partners, where each node idles a random time between 0 and 1 seconds (resulting in an average of 2 tx/s per node): image

synctext commented 5 years ago

meeting minutes:

inconspicuous-username commented 5 years ago

Specifically for the demonstrator, I took away all the mechanisms that prevent overspending (the front-end, back-end, and counterparty's back-end now all accept a negative balance, but the witnesses won't).

Before trying a malicious transaction: image

When proceeding it warns me that I don't have sufficient funds, but offers me the option "send anyway" image

Which is refused by the network: image

inconspicuous-username commented 5 years ago

When formalizing the protection against forking and block-withholding attacks, I found that the FWSP is even more effective against these attacks.

Excerpt from the thesis: "... Whereas passing an inadmissible transaction will only occur when more than two thirds of the witness set is malicious, a single honest node in the witness set already compromises the [block withholding] attack. This has to do with the unforgeability and non-repudiability of the signatures used to create the transactions. Thus, to successfully hide a block, all of the witnesses in the witness set must be malicious. In a network of which 1/5th is malicious, a minimum witness set of 23 nodes will result in an inadmissible transaction having a <0.001% chance of being accepted, while it only has a <1E-16 % chance of successfully hiding a block."
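The block-hiding bound in the excerpt can be sanity-checked: since one honest witness keeps a signed copy of the block, hiding it requires every witness to be malicious, so the probability is simply the malicious fraction raised to the witness-set size. A sketch (the function name is mine):

```python
def hiding_probability(n_witnesses: int, malicious_fraction: float) -> float:
    """Probability that a block can be withheld: every one of the uniformly
    drawn witnesses must be malicious, as a single honest witness keeps
    (and can serve) a signed copy of the block."""
    return malicious_fraction ** n_witnesses
```

For 23 witnesses and a 20% malicious network this gives 0.2^23, roughly 8e-17, which is why withholding is so much harder than passing an inadmissible transaction.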

inconspicuous-username commented 5 years ago

To have a cleaner demonstration, I've created a new repo, with only the required material and a manual on how to get things running:

The repo can be found here

inconspicuous-username commented 5 years ago

A selection of results of the DAS5 experiments:

A vector version of all graphs can be found in the link below (unlimited zooming!)

Transaction time versus Network size

Here an experiment was set up where a single node would execute transactions as fast as possible, and the time from start to finish was recorded per transaction.

Expected result: As the witness group doesn't change, the transaction time should not increase as the network size increases. test_highligts_06 As seen in the graph, the confirmation times stay roughly the same, independent of the network size.

Transaction time versus Portion of malicious nodes

Here the experiment was set up in a network where an increasing portion of the nodes refused to sign any transaction (valid or invalid), simulating a denial-of-service attack. To emulate the most effective attack, a malicious node would let the connection time out instead of closing it, making the transacting party wait for the timeout period (100 ms for this test).

Expected result: The greater the portion, the more transactions will have a transaction time higher than normal. The time should increase in steps of 100 ms, due to the timeout. What happens at 1/3 is unclear, as the upper bound is up to but not including 1/3. test_highligts_05

The outliers indeed increase in the expected ~100 ms steps. The results for 1/3 malicious nodes seem okay-ish, but they are not: fewer than 10% of those transactions actually succeeded, whereas the other runs had a success rate of 100%.

Throughput versus Network size

This experiment was set up with a variable number of nodes entering the network and randomly selecting a node to transact with. After this transaction, each node sleeps for a random amount of time drawn from a uniform distribution between 0 and 1 second.

Expected result: As the network size shouldn't have any impact on the transaction time, the throughput should increase linearly with the network size. test_highligts_08

While this is probably the most boring of the three graphs, it is the most powerful: it shows that the proposed ledger indeed scales linearly! In the attached PDF the error bars can be seen better; they are so tiny they're near invisible.

graphs in vector format for unlimited zooming!!!

synctext commented 4 years ago

Current thesis .PDF from overleaf: Trustchain__A_Truly_Scalable_Distributed_Ledger (2).pdf

Please add:

synctext commented 4 years ago

Overall thesis judgement: it would significantly improve the scientific depth if you could compare your work against the state of the art in a single apples-to-apples comparison. It is hard to compare Trustchain to anything; R3.

synctext commented 4 years ago

Remarks on Trustchain__A_Truly_Scalable_Distributed_Ledger (5).pdf