CirclesUBI / whitepaper

Circles Protocol Whitepaper
https://joincircles.net
Creative Commons Attribution Share Alike 4.0 International

Near-match to my sandbox + proposed use of "Trust Is Risk" for Sybil resilience #1

Open veox opened 6 years ago

veox commented 6 years ago

Hi! I've held an interest in on-chain UBI systems for a while, including Circles when it first came up.

I've got a rough sketch that matches quite closely the system described in the overview. At a glance, the biggest difference is the minting unit-duration: a second instead of a minute. In other words, not much difference. :)

You can find it at gitlab.com/veox/oobiqoo. It's placeholders all over. Probably not of much interest code-wise, since I see you've already got some form of prototype.


Regarding the "fake accounts" section, I urge you to look at "Trust Is Risk" by Orfeas Stefanos Thyfronitis Litos (@OrfeasLitos) and Dionysis Zindros (@dionyziz).

The claim made in that section requires a much more rigorous demonstration than one example and one diagram. The reliability of the system at the protocol level (as opposed to the implementation level) depends on the claim being justifiable for all possible cases, and on the limitations being clearly laid out.

A stub issue in oobiqoo has a couple of links (feel free to dump more there if they're deemed irrelevant here):

The github repo for TIR is https://github.com/decrypto-org/TrustIsRisk.

edzillion commented 6 years ago

Thanks for the suggestions, Noel.

The claim made in that section requires a much more rigorous demonstration than one example and one diagram. The reliability of the system at the protocol level (as opposed to the implementation level) depends on the claim being justifiable for all possible cases, and on the limitations being clearly laid out.

Agreed, we definitely need to expand this section or start a paper just on this issue, with various use cases and observations on each. Perhaps we can get @koeppelmann to weigh in?

apmilen commented 6 years ago

Thanks for the TIR links, I'll review this paper soon and start to think about how it relates to Circles. Totally agree that more rigor is needed in general. This overview was just a quick thing I put together in preparation for a UBI conference last month. We're planning a longer term effort to flesh out these ideas and validate them from multiple perspectives (i.e. both theoretical and experimental).

The current prototype indeed mints on a per-second basis. I just made it per-minute in that example for ease of explanation to newcomers. In a later revision I'll definitely fix this and add more detail.
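The per-second minting basis mentioned above can be sketched with a minimal accrual model. This is an illustrative sketch only, assuming simple linear issuance; the `RATE_PER_SECOND` constant and the `Account` fields are hypothetical and not taken from the Circles prototype.

```python
from dataclasses import dataclass

RATE_PER_SECOND = 1  # hypothetical: tokens minted per account per second

@dataclass
class Account:
    balance: int = 0
    last_mint: int = 0  # unix timestamp of the last mint

    def mint(self, now: int) -> int:
        """Mint tokens for the seconds elapsed since the last mint."""
        minted = (now - self.last_mint) * RATE_PER_SECOND
        self.balance += minted
        self.last_mint = now
        return minted

acct = Account(last_mint=1_000_000)
print(acct.mint(1_000_060))  # 60 seconds elapsed -> 60 tokens
```

Switching the unit-duration to a minute only changes the rate constant; the accrual logic is the same, which is why the choice matters mostly for exposition.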

dionyziz commented 6 years ago

As believers in crypto-based UBIs and authors of the TIR paper, @OrfeasLitos and I would be happy to help with any questions there may be in regards to our paper and its applications to your scheme.

TomTem commented 6 years ago

About the Trust Is Risk scheme: what if I only have one friend, and many links down the line there are several shops that sell the same things for the same price? The Trust Is Risk network provides me a number that tells me which shop is more trustworthy, so I buy there. Now I never receive the product, or the product is bad. No problem, I now know never to buy there again. What I don't understand is how the 'rating' of the bad shop is affected. I could cut the trust with my only friend, because I linked to the shop through him, but then I would have no links to the other, good shops. And even if I had other trusted links to other shops, I would still only hurt the trust of my friend, when I would prefer to hurt the trust of the bad shop many links down the line. Did I misunderstand the concept, or are these issues that need to be solved?

OrfeasLitos commented 6 years ago

You have created an example that highlights very well the problem that arises when Alice has only one trusted friend, Bob: as long as Alice is restricted to just this friend, it is impossible for her to exhibit fine-grained preferences. This has two negative consequences:

a) Alice delegates her decisions entirely to Bob. She has virtually no choice over whom to trust indirectly.

b) Alice cannot show her preferences to other users. More specifically, if Charlie decides to directly trust Alice, the information he gets from her is that Bob is the only person in the world worthy of her direct trust. Charlie might just as well trust Bob directly and gain exactly the same information.

An intuitive way for Alice to engage more with TIR is to start directly trusting merchants with whom she has transacted successfully. This way she diversifies and fine-grains her preferences and depends less on Bob's whims; additionally, merchants that observe her behavior have more incentive to refrain from cheating her, since it is in their best interest to obtain her direct trust. Furthermore, if Alice establishes a good name for her preferences, it becomes meaningful for Charlie to directly trust her, since he will get quality insight into which merchants are trustworthy.

Hope this helps!


TomTem commented 6 years ago

Thx for your answer!

I used the example of having only 1 connection to show that Alice has no way to convey her bad shopping experience. But you are right, in such a case Alice should try to have more connections.

Maybe a better example to clarify my problem would be as follows: 1000 happy customers are connected to the shop; they have all made good purchases in the past. These 1000 each have 10 trust connections to their friends, and each of those has 10 connections as well. If you go down 3 levels, you have a million users. Alice has a direct trust connection to a few of these million users. Let's say there are 1000 users like Alice who have trust connections to some of these million users that are 3 hops away from the shop. Now let's say all of these 1000 new shoppers did not receive their product. What should these 1000 unhappy users do to warn the network that the shop is not trustworthy anymore? They can cut some of their friends' trust connections, but that way they also cut their indirect connections to good shops. And you only have so many friends you would trust with your money. If you had to cut trust for every bad shopping experience, you would quickly run out of friends ...

So in the second part of your answer, you say that these 1000 unhappy users should make more direct connections to merchants. That way, if they have an unhappy shopping experience, they can cut trust with the merchant directly, and they don't need to cut trust with one of their friends.

But then I think the system might become impractical. I think it could work if you only have to risk some of your money with a few close friends and family. But if you have to start trusting all the businesses you interact with, you would need a lot of money to put into the shared accounts.

I really like the idea, and I’m just trying to figure out how it could work in practice.

OrfeasLitos commented 6 years ago

Imagine this trust graph:

Alice -10-> Bob -10-> Shop

There are several ways an unsatisfied customer Alice can show her disappointment, the most effective of which combines in-game with out-of-band action.

a) (In-game) Just stop directly trusting Bob. You have already pointed out this approach. Something similar to this will be part of the UI as we imagine it.

b) (Out-of-band) Shout out on relevant forums or her social media about the event. This does not directly change the trust graph.

c) (Combination) Inform specifically the people on the trust path(s) that connect her to the Shop (in this case only Bob) and urge them to stop trusting it. She could also threaten to stop directly trusting him if he doesn't take this course of action, which may be a good idea anyway if Bob doesn't find the mishap that happened to Alice significant.
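Option (a) can be made concrete: in Trust Is Risk, Alice's indirect trust towards the Shop is the maximum flow from Alice to the Shop over the direct-trust graph, so revoking the edge to Bob immediately zeroes it. Here is a minimal sketch using a textbook Edmonds-Karp max-flow; the graph values are illustrative, not from the paper.

```python
from collections import deque

def max_flow(graph, source, sink):
    """Edmonds-Karp max flow; graph[u][v] is the direct trust u -> v."""
    # Build residual capacities, adding zero-capacity reverse edges.
    cap = {u: dict(vs) for u, vs in graph.items()}
    for u, vs in graph.items():
        for v in vs:
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path from source to sink.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck and push flow along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        flow += bottleneck

trust = {"Alice": {"Bob": 10}, "Bob": {"Shop": 10}, "Shop": {}}
print(max_flow(trust, "Alice", "Shop"))  # 10

trust["Alice"]["Bob"] = 0  # Alice stops directly trusting Bob
print(max_flow(trust, "Alice", "Shop"))  # 0
```

With only one path, cutting Bob's edge is all-or-nothing, which is exactly the coarseness problem discussed above; with several direct-trust edges the flow degrades more gradually.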

Two things to point out here:

1) Something that is not very clear from the fc17 paper is this: the proposed method for Alice to produce the money needed to pay the Shop is to reduce her direct trust in Bob so that her indirect trust towards the Shop is reduced by the price of the product, and then use the money saved from this reduction to pay the Shop. Reinstating her direct trust in Bob is a step Alice should take after receiving the product and verifying her satisfaction with it. The relevant user interface would be as follows:

a) Alice locates the product. Its price is less than her indirect trust towards the Shop, so she can buy.

b) Alice hits 'Buy'. This reduces some of her initial direct trust and directly entrusts the price of the product to the Shop, as explained above. The interface shows something along the lines of "Verify product integrity" in a yellow button.

c) Alice receives the product and hits the yellow button. It turns green, saying "Product verified!", and the reduced direct trusts are reverted to their original states, using money from Alice's private wallet (non-multisig funds). If Alice wishes, she can manually increase her direct trust in Bob, or even in the Shop, further.

This method is better described in my master's thesis, which is in the same repository, named thesis.pdf.
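The steps above can be sketched as a toy state machine, assuming a single trust path Alice -> Bob -> Shop. All names and the simplified integer accounting here are hypothetical; the real protocol operates on Bitcoin multisig outputs, not plain counters.

```python
class BuyFlow:
    """Toy sketch of the 'Buy' / 'Verify product' steps described above."""

    def __init__(self, direct_trust_to_bob, wallet):
        self.trust_to_bob = direct_trust_to_bob  # Alice -> Bob
        self.trust_to_shop = 0                   # Alice -> Shop
        self.wallet = wallet                     # private, non-multisig funds

    def buy(self, price):
        """Step (b): reduce direct trust to Bob, entrust the price to the Shop."""
        assert price <= self.trust_to_bob, "price exceeds indirect trust"
        self.trust_to_bob -= price
        self.trust_to_shop += price

    def verify_product(self, price):
        """Step (c): on delivery, restore trust to Bob from the private wallet."""
        self.wallet -= price
        self.trust_to_bob += price

flow = BuyFlow(direct_trust_to_bob=10, wallet=20)
flow.buy(4)
print(flow.trust_to_bob, flow.trust_to_shop)  # 6 4
flow.verify_product(4)
print(flow.trust_to_bob, flow.wallet)         # 10 16
```

Note that the entrusted amount stays with the Shop; what gets restored after verification is Alice's direct trust in Bob, paid for out of her own wallet.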

2) Trust Is Risk is not a silver bullet; it proposes a very different user experience and is complementary to other rating measures, such as human-written reviews. We do not yet know what dynamics can arise when many people move money around based on the information it provides. Until we put it into action and a substantial number of people start using it, we can only speculate. It specifically suits a decentralized setting, where users are not required to trust some specific party (e.g. eBay) to keep track of star-based ratings that are the same for a particular shop no matter which user views them.

Feel free to ask anything else!


TomTem commented 6 years ago

It's clearer now, thx.

Maybe you could make the out-of-band feedback in-game as well. For example, each user like Alice could leave feedback plus a rating per transaction (inside her own node/wallet). Everyone who connects through her could use that feedback and rating.

For example, when the algorithm finds several paths to the shop to calculate the TIR numbers, it might as well also collect the user ratings along these paths to provide a second number to the buyer. Because trust and satisfaction are not necessarily the same, having this second, separate number could be interesting. The downside is that you will only have the ratings on the paths that connect you to the shop, but the upside is that you can be certain these few ratings are genuine.

(Or maybe you could even go and collect all the ratings for the shop in the network, but that might be too expensive to calculate …)
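The path-rating idea above can be sketched as follows: enumerate the trust paths to the shop, then average the shop ratings left by the users on those paths. The graph shape and the rating scale are illustrative assumptions, not part of TIR.

```python
def trust_paths(graph, source, sink, path=None):
    """Enumerate simple trust paths from source to sink (DFS)."""
    path = (path or []) + [source]
    if source == sink:
        yield path
        return
    for nxt in graph.get(source, []):
        if nxt not in path:
            yield from trust_paths(graph, nxt, sink, path)

def path_rating(graph, ratings, source, sink):
    """Average the shop ratings left by users on connecting trust paths."""
    raters = {u for p in trust_paths(graph, source, sink) for u in p[1:-1]}
    scores = [ratings[u] for u in raters if u in ratings]
    return sum(scores) / len(scores) if scores else None

graph = {"Alice": ["Bob", "Carol"], "Bob": ["Shop"], "Carol": ["Shop"]}
ratings = {"Bob": 4, "Carol": 2}  # each rated the Shop after a purchase
print(path_rating(graph, ratings, "Alice", "Shop"))  # 3.0
```

As noted above, this only surfaces ratings from users on Alice's own paths, which is precisely what keeps the sample small but trustworthy.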

OrfeasLitos commented 6 years ago

That's a good idea, we'll keep it in mind!

It's not obvious how to protect the resulting star rating from Sybil attacks, though. For example, averaging the star ratings of all the nodes implicated in the calculation of the indirect trust is definitely not a good idea, because an attacker could create a very long chain of Sybil nodes, thus imposing her opinion.

Do you have any particular thoughts on that?

Thanks for that, Orfeas


TomTem commented 6 years ago

Ah, I see: the shop owner could make a fake account, get his friends to trust it, and then create a long chain of fake accounts to the shop with good ratings.

Maybe you could use weighted ratings (raters closer to Alice count more than those close to the shop) ... I'll think about it some more ...
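The weighting idea can be sketched numerically: discount each rater's opinion by its hop distance from the buyer, so a long chain of Sybil accounts far from Alice cannot dominate the average the way it dominates a plain mean. The decay factor and the numbers below are illustrative assumptions only.

```python
def weighted_rating(raters, decay=0.5):
    """raters is a list of (hop_distance_from_buyer, rating) pairs."""
    weights = [decay ** d for d, _ in raters]
    return sum(w * r for w, (_, r) in zip(weights, raters)) / sum(weights)

honest = [(1, 2), (2, 2)]                # nearby friends rated the shop poorly
sybils = [(d, 5) for d in range(5, 15)]  # a chain of distant fake accounts

plain_avg = sum(r for _, r in honest + sybils) / len(honest + sybils)
print(round(plain_avg, 2))  # 4.5 -- the plain mean is dominated by the sybils
print(round(weighted_rating(honest + sybils), 2))  # stays close to the friends' 2
```

This dampens distant chains but does not eliminate the attack: Sybils placed close to the buyer (e.g. via a single compromised friend) still carry full weight, which is part of why the problem "gets complex fast".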

OrfeasLitos commented 6 years ago

We've been there, and it gets complex fast. Don't hesitate to get in touch if you have any even remotely compelling idea, though!
