CryptoCredibility / ConsensusExistence

A project exploring incentive markets to help establish a consensus layer for identity and credibility.

RepSys #1

Open hswick opened 6 years ago

hswick commented 6 years ago

ConsenSys may have an implementation of this idea with RepSys, which integrates with their uPort project.

https://consensys.net/static/uPort-Wins.pdf https://github.com/ConsenSys/repsys-contracts

0xdewy commented 6 years ago

Yup, they are similar, but also very different. I need to change the name of this repository, as it's not really a measure of "credibility" or "credit" but instead a measure of whether people think you exist or not. I'm still thinking of a better name. The consequence of not being credible on this network is that your node no longer exists as a unique entity. uPort seems to have a credit score. Do you know what uPort does in the case of conflicting identities? I'm not sure how they are going to handle a bot network supporting each other with high credit ratings plus a stolen or manufactured primary identity. This protocol is a prediction market on the general existence of people, based on the gambles of friends and strangers alike. Very experimental, but I haven't seen a good solution to this yet. I've also taken a break from working on this due to its confusing nature and open questions about its practicality.
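A toy sketch of the "prediction market on existence" mechanic described above, assuming a simple bet-and-resolve market where winners split the losers' stakes pro rata (all names and payout rules here are illustrative, not the project's actual design):

```python
class ExistenceMarket:
    """Participants stake tokens on whether an identity is real.
    On resolution, winners get their stake back plus a pro-rata
    share of the losing side's pool."""

    def __init__(self, subject):
        self.subject = subject
        self.bets = {}  # bettor -> (believes_exists, stake)

    def bet(self, bettor, exists, stake):
        self.bets[bettor] = (exists, stake)

    def resolve(self, outcome):
        # Split the bets by side.
        winners = {b: s for b, (v, s) in self.bets.items() if v == outcome}
        losers_pool = sum(s for v, s in self.bets.values() if v != outcome)
        winning_pool = sum(winners.values())
        if winning_pool == 0:
            return {}
        # Each winner is paid proportionally to their own stake.
        return {b: s + (s / winning_pool) * losers_pool
                for b, s in winners.items()}
```

For example, if two bettors stake 10 and 30 on "exists" and one stakes 20 against, a resolution of "exists" pays them 15 and 45 respectively.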


hswick commented 6 years ago

Gotcha. Yeah, I agree that the intent is very different. I haven't used uPort, but in the case of conflicting identities I would imagine they simply use a unique signature, like a hash. Are you referring to the case where two people are named John Doe but have two different uPort identities? I see that you have a section on identity disputes where people can vote on who is who. To recover your identity with uPort you have to list delegates who vouch for your identity, which reminds me of this project as well.

And yes, a bot network does seem like it would be able to game the system. I feel like this problem could be solved by making the risk of supporting a non-credible agent very high, like staking your reputation or a deposit. However, this once again returns to the issue of how to decide who is a fraud in an identity dispute. All of this is making me realize how superficial our current identity system is.

I've recently been thinking about some kind of multisignature solution where your ID is the aggregation of the IDs of everyone in your network. That would provide uniqueness, but it would be ever-changing, which would bring a lot of complexity with versioning. This seems a bit radical, but I think it lines up with the philosophical notion of identity, which is fluid by nature: our current self is related to our past self in a continuous manner. I'm thinking of some kind of Merkle tree that stores the previous hashes. uPort's proxy contract could potentially keep track of this.
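A minimal sketch of what that ever-changing aggregated ID could look like, assuming each new version commits to the owner, the current network, and the previous ID (a linear hash chain stands in for the Merkle tree; names and structure are illustrative, not uPort's actual scheme):

```python
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

class AggregatedIdentity:
    """An ID derived from the IDs of everyone in the owner's
    network, chained to every previous version of itself."""

    def __init__(self, owner):
        self.owner = owner
        self.network = set()   # IDs of connected members
        self.history = []      # hash chain of superseded IDs
        self.current_id = h(owner.encode())

    def _recompute(self):
        # The new ID commits to the owner, the sorted network,
        # and the previous ID, so lineage is verifiable.
        self.history.append(self.current_id)
        material = self.owner + "|" + "|".join(sorted(self.network)) \
                   + "|" + self.current_id
        self.current_id = h(material.encode())

    def add_connection(self, member_id):
        self.network.add(member_id)
        self._recompute()

    def verify_lineage(self, old_id):
        # Anyone can check that an old ID belongs to this identity's past.
        return old_id in self.history
```

The versioning complexity mentioned above shows up as the growing `history` list: every change to the network produces a new current ID, but old IDs remain provably linked to it.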

0xdewy commented 6 years ago

Yup, the delegates portion would be more in line with what I was thinking. The unique signature is fine, but millions of bots could have unique identities, and the delegates could also be bots. It's hard to stop the growth of fake identities unless there is a constant "cleaning" process going on. I'm sure uPort or Civic or someone will think of a better way. Totally, we currently just have a hierarchy of trusted primary identifications, but I reckon the only way to decide who is real is by the metrics that cannot be duplicated, which is basically just the things in your head and your relationships. It is tricky deciding, though, especially if the decision means you lose your online identity to an impersonator. Others are probably avoiding this approach due to its high-risk nature, although I assume your social network would protect you in most circumstances. That's an interesting idea; I guess that would be another way to represent "staking" in one another. I prefer to use a token, just to have a quantifiable metric and to really force people to go out on a limb for the people they trust.


hswick commented 6 years ago

My thinking was that the stake could be represented by the token(s), and the value of your stake is indicative of your credibility. The advantage would be that you wouldn't want to trust someone who doesn't have much at stake in an identity dispute. The disadvantage is that it favors whoever has the most money.

So there should be another metric. An obvious one would be the number of connections (friends) you have. This is how LinkedIn works, where users value people with many connections. Once again, the disadvantage is that someone could pay for friends (like how people pay for followers).

Another metric could be each user's signature on the blockchain. There are interesting metrics like time: a bot that generated a million users in less than a day would be an outlier, while a normal user would have a more organic trail on the blockchain. This offsets the advantage of wealth, per the saying that time is money.

This still leaves the fact that someone might be more credible than another user simply by having more at stake. However, there are probably diminishing returns, where having X dollars doesn't make you more credible than someone with X − n dollars.

I think the hierarchy of trusted primary identifications could still be possible and effective on the blockchain. It's just that they would exist as layers on top of the base p2p layer, and they could follow the same set of rules: essentially a form of recursion.
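The three metrics above (stake, connections, and an organic on-chain trail) could be blended into a single score. A sketch, with entirely arbitrary weights chosen only to show the shape of the idea:

```python
import math
from dataclasses import dataclass

@dataclass
class Profile:
    stake: float           # tokens locked as a deposit
    connections: int       # vouching friends / delegates
    account_age_days: int  # time since first on-chain activity
    tx_count: int          # transactions over that lifetime

def credibility(p: Profile) -> float:
    """Illustrative blend of the three metrics; logarithms give
    diminishing returns so no single metric (money, friends,
    or history) can dominate the score."""
    stake_score = math.log1p(p.stake)
    social_score = math.log1p(p.connections)
    # An "organic trail": activity spread over time scores higher
    # than a burst of transactions crammed into a day or two.
    daily_rate = p.tx_count / max(p.account_age_days, 1)
    history_score = math.log1p(p.tx_count) / (1.0 + max(daily_rate - 10, 0.0))
    return stake_score + social_score + history_score
```

Under this toy scoring, a day-old account with a million purchased followers still scores below a modest account with years of steady activity, which is the offsetting effect described above.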

0xdewy commented 6 years ago

Yeah, I was thinking that these tokens shouldn't be able to be bought or sold, only earned by winning the "existence markets" or whatever we want to call them. This would essentially incentivize people to hunt down bots and reinforce their friends. The main problem is that you would have to share enough information so that people don't mistake you for a bot. There are different kinds of proofs you can do to mitigate this through hashing plus proof-of-access, but it's still too public for my liking. How would signatures mitigate bots? As far as I know, you could program bots to use the blockchain in all the ways we do. Another thing I think about is allowing bots, but requiring them to be identified as such.


hswick commented 6 years ago

Hmmm, removing buying and selling of the tokens would simplify the game theory. However, I think it has the unfortunate consequence of not incentivizing adoption, as well as weakening the disincentive for poor behavior, unless the score had some kind of widespread social meaning, like a credit score does now. But that assumes adoption as well.

The signature would be the trail of transactions a user leaves on the blockchain over time. The over-time aspect negates the ability to get a high score very quickly; the use of bots today is generally to automate a task to achieve some goal faster.

I don't agree with the outright anti-bot sentiment. One day we will have robot citizens who are equal participants in this system; being non-human doesn't necessarily make them evil. The bad bots are the ones trying to pretend to be someone else, which a human could also do.

Which is why there needs to be a system for detecting identity fraud. There could be periodic checks to ensure that your identity code still represents your identity, done by comparing current data to previous data. It could ask for a picture with the current Ethereum block number, like when Vitalik proved he wasn't dead. A facial scan could be the biometric data, which could also be obfuscated by a hash or something similar to protect privacy, or other types of identity proof that the user enables.

Then there could also be a metric for how many of these periodic checks you have passed, or have pending. If a user were pretending to be someone else, they wouldn't be doing the voluntary checks, and others could see that.

Also, the number of identity checks (facial scan, phone number, thumbprint, etc.) could be another published metric on their profile.

These features, plus the connections, might seem a little redundant, but I think they would make the overall system pretty strong.
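A minimal sketch of the periodic-check idea above, assuming the user publishes a salted hash of their biometric data up front and later proves liveness by re-submitting it alongside a recent block number (names, the freshness rule, and the commitment scheme are all illustrative):

```python
import hashlib

def commit(biometric: bytes, salt: bytes) -> str:
    # Published on-chain: reveals nothing about the biometric itself.
    return hashlib.sha256(salt + biometric).hexdigest()

class PeriodicChecker:
    def __init__(self, commitment: str):
        self.commitment = commitment
        self.checks_passed = 0  # the public metric others can see

    def check(self, biometric: bytes, salt: bytes,
              block_number: int, current_block: int,
              max_age: int = 100) -> bool:
        """Pass only if the revealed data matches the commitment AND
        the quoted block number is recent, the same freshness trick
        as a photo with the current block hash."""
        fresh = current_block - block_number <= max_age
        matches = commit(biometric, salt) == self.commitment
        if fresh and matches:
            self.checks_passed += 1
            return True
        return False
```

An impersonator who never held the original biometric (or who replays a stale proof) fails the check, and the stagnant `checks_passed` count would be visible to everyone, as described above.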

0xdewy commented 6 years ago

I agree about the lack of incentive if the token isn't directly monetized. I've played with the idea of purchasing caps, whereas earning tokens wouldn't be capped. That would essentially mean an account couldn't transfer in tokens once it holds a certain amount; it could still sell them, however.
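A sketch of that cap, assuming purchases arrive as inbound transfers while market winnings are minted directly to the account (a toy model, not an ERC-20 implementation; the cap value is arbitrary):

```python
class CappedReputationToken:
    PURCHASE_CAP = 100  # illustrative threshold

    def __init__(self):
        self.balances = {}  # account -> token balance

    def mint(self, account, amount):
        # Earnings from winning existence markets: never capped.
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        # Inbound transfers (i.e. purchases) are refused once the
        # receiver already holds PURCHASE_CAP or more; outbound
        # transfers (selling) are always allowed.
        if self.balances.get(sender, 0) < amount:
            return False
        if self.balances.get(receiver, 0) >= self.PURCHASE_CAP:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True
```

So an account can accumulate unbounded tokens by winning markets, but past the cap it can no longer top itself up with bought tokens, only spend or sell them down.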

That's not a bad idea with the proof of transaction history, but I wouldn't want people to have to display their transaction history for free. There is something to the idea, though. I would also lean towards allowing bots, provided they are correctly identified as such.

Yup, I would go as far as only allowing obfuscated proofs, with as much redundancy as possible. The system couldn't allow duplicates of any primary identification proofs, so those would have to be resolved. The best way I could think of was a sort of market resolution: it would be difficult for a malicious actor to gather several pieces of primary identification or biometrics.


hswick commented 6 years ago

How would uncapped earning coexist with purchasing caps without the two conflicting? Perhaps what users earn is automatically converted to ETH, whereas the purchasing cap applies only to the credibility coin.

What if they only had to display the transactions related to their friend (delegate) history? Perhaps there could be a way to obfuscate that proof as well, like you mentioned.

Yes. Redundancy would be one of the major factors in the security.