wetube / bitcloud

Bitcloud Project
http://bitcloudproject.org
MIT License
613 stars 56 forks

Critically vulnerable to Sybil attacks #5

Closed mappum closed 10 years ago

mappum commented 10 years ago

It seems the Bitcloud system ensures trust and reliability by linking actions to identities (associated with a keypair and an IP address), and banning identities that do not obey the "laws". The blockchain is also built on the identity system: a block is added to the chain when 80% of the identities agree with it.

Using identities in a decentralized system opens up major vulnerabilities to Sybil attacks (where an attacker forges identities to appear as if they are many people on the network).

Bitcloud makes an attempt to prevent Sybil attacks by linking identities to IP addresses, but this is not a viable solution. If you must use a certain IP to spend the money associated with your identity, then when your ISP assigns you a new IP (which they frequently do), your money is suddenly unspendable. Also, if your ISP assigns you an IP that has been previously banned by the actions of some attacker, you temporarily cannot access the Bitcloud network.

Many other attacks are possible, such as generating enough identities to control 80% of the network, which gives full control over which blocks are added to the chain and which rewards are paid out.

tl;dr: Bitcloud is broken.

JaviLib commented 10 years ago

The IP is attached to a name for only 10 minutes max, which is the time a new block takes to generate. For nodes, there can be only one IP per name; for users, we can agree on a number. Please comment on any further issues you see.

mappum commented 10 years ago

OK, then bans will only last a maximum of 10 minutes (the attacker will generate a new identity once their IP is not banned). Which doesn't make a big difference anyway, as an attacker can control the blockchain and prevent themselves from being banned in the first place (and generate themselves money and ban legitimate users).

mappum commented 10 years ago

Limiting to one identity per IP is also problematic, as it prevents users from connecting with a shared IP, and does not solve the Sybil problem as an attacker can just do it with a botnet.

voronoipotato commented 10 years ago

I agree, mappum. If you want this to work we need either a "buy-in cost" so you can't just create 100 users, or to sacrifice some privacy.

That way you break the "law" and pay a "fine".

MAC addresses are better but still not good; IP addresses are terrible.

voronoipotato commented 10 years ago

@icryptic I don't see how that's constructive or even relevant. Unless maybe you have a link to the "better way"....

voronoipotato commented 10 years ago

@icryptic then stop contributing.

mappum commented 10 years ago

@voronoipotato MAC addresses are no better, since they are set in software; any attacker can trivially reset one. Good ways to solve these problems are proof of work (the reason Bitcoin mining is Sybil-resistant), or maybe fidelity bonds (though you would need coin generation for those to work).

@icryptic I think this project aims for a worthy goal, but it does not have any steps toward the solution as of now.
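
To make the proof-of-work suggestion above concrete, here is a minimal sketch. Everything in it (the difficulty parameter, the hashing scheme, the function names) is illustrative, not anything from the Bitcloud design:

```python
import hashlib
import os

def mine_identity(difficulty_bits=16):
    """Brute-force a nonce so that SHA-256(pubkey || nonce) has
    `difficulty_bits` leading zero bits. Creating each identity costs
    real CPU time, so forging thousands of them becomes expensive."""
    pubkey = os.urandom(32)  # stand-in for a real public key
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return pubkey, nonce
        nonce += 1

def verify_identity(pubkey, nonce, difficulty_bits=16):
    """Verification is a single hash -- cheap for honest peers."""
    digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

pubkey, nonce = mine_identity()
assert verify_identity(pubkey, nonce)
```

The asymmetry is the point: minting an identity takes many hash attempts, while checking one takes a single hash, so the cost lands on the would-be Sybil attacker.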

jcoffland commented 10 years ago

@mappum, I totally agree with your assessment. The whole "Proof of Bandwidth" concept is far from figured out.

What about a pay-for-bandwidth system tied to an established coin such as PeerCoin, or Bitcoin for that matter? If you have bandwidth and desirable content you could put it up for sale. Users could pay an amount of the CryptoCoin to get content. The more they pay, the faster they get it, because higher-speed content providers will be able to charge higher rates. The cost for downloaders would have to be very low but could add up to substantial sums for providers over time, i.e. micropayments.

So how do you pay for content? You could divide the content into blocks. If you want to buy a block of content you simply download it from a provider, but it comes encrypted. You must then pay the provider's CryptoCoin address to unlock it.

This could go wrong in two ways. 1) The buyer downloads the block but doesn't pay. They gain nothing, but they do manage to waste the seller's bandwidth; the seller then blacklists that IP for some time. 2) The seller accepts the payment but does not deliver the key to unlock the block. So you don't download from them again. The key to mitigating the cost of bad players would be splitting the files into many small blocks and downloading these blocks from many locations, as with BitTorrent. Not only does this reduce your risk, it also increases your download speed. Perhaps a high number of transactions at a CryptoCoin address could be taken as a sign of a good reputation.
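
The pay-to-unlock scheme can be sketched in a few lines. The stream cipher here is a toy (SHA-256 in counter mode), chosen only so the example stays self-contained; a real system would use an established cipher, and all names here are invented:

```python
import hashlib
import secrets

def keystream_xor(data, key):
    """Toy stream cipher: XOR with a SHA-256 keystream in counter
    mode. Illustration only -- not production cryptography."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# --- seller side ---------------------------------------------------
content_block = b"some 10kB slice of a larger file ..."
advertised_hash = hashlib.sha256(content_block).hexdigest()  # published up front
key = secrets.token_bytes(32)
ciphertext = keystream_xor(content_block, key)  # sent before payment

# --- buyer side ----------------------------------------------------
# The buyer downloads `ciphertext`, pays the seller's address, and
# receives `key`. Decrypting and checking against the advertised hash
# proves the seller really held the promised content.
plaintext = keystream_xor(ciphertext, key)
assert hashlib.sha256(plaintext).hexdigest() == advertised_hash
```

Note the hash check is what bounds the seller's ability to cheat: a seller who sends garbage ciphertext is caught as soon as the key is applied.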

The whole thing is really only successful if it works better than BitTorrent for a small cost, or provides enough of an incentive for those providing the bandwidth to move to the new system. Integrating services like YouTube and Facebook on top of any kind of new network is non-trivial. It's very easy for developers to claim they will do this; actually producing working software is another thing altogether. Regardless, there's nothing wrong with talking about ideas and getting people excited about the future.

Edit: I think my proposal, although drastically different from BitCloud's plan, avoids most of the problems of a Sybil attack.

mappum commented 10 years ago

I think your approach is realistic; I have been thinking about doing something similar using Bitcoin micropayment channels. The blocks of data would be some small amount (10kB maybe), and the downloader would keep increasing the payment as the data is downloaded. If he stopped increasing the payment, the seller could stop sending data, and very little bandwidth would have been wasted. If the seller stopped sending data or sent invalid data, the downloader would stop paying, and very little money would be wasted.
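
A toy simulation of that incremental exchange shows why either party's exposure stays bounded. The chunk size and price are hypothetical figures, not anything specified by the channel protocol:

```python
CHUNK = 10_000        # bytes per installment (the ~10kB figure above)
PRICE_PER_CHUNK = 1   # hypothetical price per chunk, in smallest units

def channel_transfer(total_chunks, buyer_stops_after=None):
    """Simulate the back-and-forth: each chunk is released only after
    the channel balance is raised to cover it, so either party risks
    at most one chunk's worth of value if the other defects."""
    paid = 0       # highest channel payment the seller has seen
    delivered = 0  # chunks the seller has released
    for i in range(total_chunks):
        if buyer_stops_after is not None and i >= buyer_stops_after:
            break                    # buyer stops raising the payment
        paid += PRICE_PER_CHUNK      # buyer signs an updated channel state
        delivered += 1               # seller releases the next chunk
    return delivered, paid

# Honest run: everything delivered, everything paid.
assert channel_transfer(100) == (100, 100)
# Buyer defects halfway: the seller has released nothing unpaid.
delivered, paid = channel_transfer(100, buyer_stops_after=50)
assert delivered == paid == 50
```

The interesting property is that only the final channel state ever needs to hit the blockchain, which is what keeps the per-chunk payments cheap.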

jcoffland commented 10 years ago

One efficiency issue is that if you need to make lots of fast payments, it would (a) bloat the CryptoCoin's blockchain and (b) require a very fast CryptoCoin round. There is something I vaguely remember related to how vanity CryptoCoin addresses are generated, where two parties create a key pair together and, by combining the results, one party ends up with a key pair without the other knowing the secret. Perhaps some scheme like this could be used where the downloader sends money to many different addresses for which no one yet knows the complete private key, but the public key is known. Then the downloader sends the missing information to the provider, who can form the complete secret keys and access the money instantly. It needs more thought, but I think there's a way to do this. Hand wave, hand wave.

mappum commented 10 years ago

None of the problems you described apply to micropayment channels. They are a trustless way to combine very quick and efficient transactions into only one actual transaction on the blockchain (which also solves the problem of the miner fee being much higher than the volume of money being transferred). They are pretty cool; you can read more about them here: https://code.google.com/p/bitcoinj/wiki/WorkingWithMicropayments

lance0 commented 10 years ago

If you ask the folks at Tox, they have some ways to get around certain types of network attacks; their network is constructed similarly.

JaviLib commented 10 years ago

We have redesigned the whole proof-of-bandwidth scheme. Please have a look at the actual paper in the repo: proof-of-bandwidth.org

I think we may have a real solution here. Some real experts are giving feedback in #bitcloud-dev. Please contact me by mail or talk in the forum about your interesting ideas.

http://talk.bitcloudproject.org/index.php?board=1.0

jcoffland commented 10 years ago

@mappum, actually the micropayment channels thing you linked is pretty much exactly what I had in mind. Sounds like a viable solution. It should be reviewed further and simulated.

jcoffland commented 10 years ago

@LiberateMen, where is this "real solution"? You've been saying this from the beginning, and frankly I'm starting to mistrust you. I looked through your discussion forums and all I see are a lot of incomplete ideas. There's nothing wrong with that; in fact I think it's great, but you should admit that it is still not really worked out yet.

JaviLib commented 10 years ago

@jcoffland please look at

https://github.com/wetube/bitcloud/blob/master/proof-of-bandwidth.org

We would be honored if you want to help us find all the possible flaws.

Yes, I admit so; look at the Warning at the very beginning of the bitcloud.org file. It states clearly that it is not finished yet.

Thank you for your feedback.

mappum commented 10 years ago

@LiberateMen Your new "solution" does make a Sybil attack less viable coming from one PC, but it is still just as viable from a botnet.

I am with @jcoffland, and I am amazed this project generated as much buzz as it did. It seems you only need to state your goals, rather than provide a solution, to get people excited.

JaviLib commented 10 years ago

@mappum please go ahead and explain that fully.

jcoffland commented 10 years ago

@LiberateMen, I appreciate your admission. I tried to register at talk.bitcloudproject.org but after two attempts I still haven't received the verification email. Oh well, I think it's better to discuss this kind of thing in a public place where the original authors cannot censor the information.

A really important point, IMO, is that even if you had a perfect proof-of-bandwidth there are still at least four problems that proof-of-bandwidth does not address:

1) Proof-of-bandwidth now says nothing about bandwidth in the future. 2) Proof-of-bandwidth does not mean the entity will actually use that bandwidth in your favor. 3) Proof-of-bandwidth is not a proof of content; without content, bandwidth is pointless. 4) Bandwidth is a shared resource between two parties, not something owned by one entity.

Regarding the final point: high bandwidth between A and B says little about bandwidth between A and C, or between any other two points on the network. There is probably some correlation, but it's no proof.

I'm worried proof-of-bandwidth is nothing more than a catchy name which has glommed on to the successful proof-of-work and proof-of-stake ideas.

JaviLib commented 10 years ago

@jcoffland please check the spam folder.

1) We are looking at meshnets for version 2.0. Some meshnet projects have contacted us, and when the time comes, we will integrate. 2) It is not our business to analyze what people do with bandwidth. 3) Content is signed, and we have the Storage Law that guarantees content; if it is not provided, the node is penalized. 4) Look at the routing section.

We still have to demonstrate that this is going to work, but thank you for your feedback.

jcoffland commented 10 years ago

@JavierRSobrino naturally, I checked my spam folder.

1) Hmmm...

2) Of course it's important that people actually do with the bandwidth what they promised they would do.

3) Ok, let's look at the "Storage Law": re: http://talk.bitcloudproject.org/index.php?topic=10.0

Ok, so some third parties whom you choose to trust have a list of files supposedly held by a target node. These lists presumably contain the name, location, and some sort of hash of the file contents. When you download the content, it is routed through other nodes which check the hash and verify that the provider did indeed have the data they claimed. If the hash doesn't match, nodes can be penalized by the network.
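
As a sketch of that verification step (function and variable names are invented; the paper does not specify interfaces at this level):

```python
import hashlib

# Hypothetical penalty ledger kept by a routing node.
penalties = {}

def route_chunk(provider, chunk, expected_hash):
    """Forward `chunk` only if it matches the hash advertised in the
    storage list; otherwise record a strike against the provider.
    Note the weakness: a dishonest router could corrupt `chunk`
    itself and frame an honest provider."""
    if hashlib.sha256(chunk).hexdigest() == expected_hash:
        return chunk
    penalties[provider] = penalties.get(provider, 0) + 1
    return None

good_hash = hashlib.sha256(b"file data").hexdigest()
assert route_chunk("nodeA", b"file data", good_hash) == b"file data"
assert route_chunk("nodeA", b"tampered!", good_hash) is None
```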

This system would be (a) easy to snoop and (b) easy to game. Anyone could become a router and monitor what you download, and/or they could lie about the results or corrupt the data themselves. This would make legitimate nodes look bad. If there are several steps in the routing, it would be easy for a small number of nodes to disrupt the whole network. It sounds like there are now even more opportunities for Sybil attacks.

4) More unfinished ideas.

It's important to remember that one big reason Bitcoin has been so successful is that the concept is fairly simple. Each time you sew another arm onto this Frankenstein project, you make it more vulnerable to attack.

JaviLib commented 10 years ago

Hi, thanks for your feedback.

3) This is the plan for unprotected routing. All content is sent encrypted; it doesn't matter whether it is protected or unprotected, so a "man in the middle" attack doesn't really apply.

4) There must be a time in which simple ideas evolution.

jcoffland commented 10 years ago

3) So then how would the intermediate nodes verify that the content is correct? This seems like a major flaw in the idea proposed on that topic page.

4) Do you mean "evolve?" Evolution is a slow process.

Dr-Syn commented 10 years ago

I'm working through ways to handle the Sybil issue on the forum.

In summary:

Expensive, blockchain-registered, and continually re-registered CAs sign the certificates required to make a connection.

All connections are required to be made by way of a certificate signed by a trusted CA. All connections are encrypted.

Every user has the ability to trust or de-trust CAs at will, so CAs are motivated to maintain revocation lists for misbehaving/malicious/Sybil users, lest their entire CA be de-trusted and hence cut out of the system.

CAs are also motivated to screen users before endorsing them, so that registration of Sybils, spam accounts, etc. will be less likely.

Making the CA registration process expensive (in time and resources) raises the threshold cost for a prospective Sybil-certifier.

Requiring CAs to maintain that registration process in order to remain valid raises the threshold further, and discourages registration of multiple CAs.

Making it difficult to become trusted, and trivial to drop trust and exclude a CA, tips the balance such that Sybilling will be prohibitively expensive for all but the most motivated attackers; anyone sufficiently motivated to Sybil under those circumstances will end up expending more resources supporting the network than they use up being knocked back.

jcoffland commented 10 years ago

What if the attacker runs a trusted CA? To perform the attack:

1) Create a CA; get everyone to trust it by being honest and cheap. 2) Create a large number of users. 3) Use these users to declare all other CAs invalid. 4) Perform additional Sybil attacks.

Each CA becomes a potential weak link. Interesting idea though.

Dr-Syn commented 10 years ago

User de-trusts of CAs don't propagate up to CAs.

CA de-trusts of CAs propagate down to users.


jcoffland commented 10 years ago

@Dr-Syn, how do people decide not to trust a CA? Whatever the method, the propagation of distrust must take some time and has to be based on the opinions of other trusted users in the network. These opinions may be based on some calculation or rule-checking that could be automated. The system of trusting or not trusting CAs is itself attackable.

There may be merit to this CA idea but you must be careful not to just push the same problem further down the line. You risk creating a more complicated system that has the same problems.

cbbcbail commented 10 years ago

@jcoffland

This is exactly what I have been suggesting: there needs to be an obvious measure of trustworthiness.

I call it simply their "Reputation". This reputation would be calculated from many things the user has done, and we could define an algorithm to compute it. It could take into account several variables, such as how many people already trust them, how many people have un-trusted them, their QoS, etc. This would simplify the process and make it easier for users to decide whether they should trust someone, as well as help the network decide punishments and rewards.
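
One way such a score could be computed; the variables are the ones listed above, but the weighting and the smoothing rule are entirely invented for illustration:

```python
def reputation(trusts, untrusts, qos):
    """Hypothetical reputation score in (0, 1): net endorsements
    smoothed by a Laplace-style prior (so a brand-new peer starts at
    a neutral 0.5 rather than 0 or 1), scaled by a quality-of-service
    factor in [0, 1]."""
    endorsement = (trusts + 1) / (trusts + untrusts + 2)
    return round(endorsement * qos, 3)

assert reputation(0, 0, 1.0) == 0.5    # unknown peer: neutral prior
assert reputation(98, 0, 1.0) == 0.99  # widely endorsed peer
assert reputation(98, 0, 0.5) < reputation(98, 0, 1.0)  # poor QoS hurts
```

The smoothing matters for Sybil resistance: without the prior, a fresh identity endorsed once by its own sock puppet would immediately score a perfect 1.0.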

Dr-Syn commented 10 years ago

@jcoffland

  1. Users may de-trust a CA at any time by choosing to not accept that CA's certificates. A button on the interface will suffice.
  2. CAs may choose to do the same by the same means and will propagate the revocation list to their associated users at the time of user sign-on.

Yes, there will be some latency there which could conceivably be an opportunity for a malicious actor. However, given that CA connections are required for most transactions with the bitcloud, that latency will likely be minimal, and the risk acceptably low.

Tying the CA certification to the blockchain, and requiring continual CA participation in said blockchain, is the only feasible means of assuring any level of trust in this network without the use of a central authority--and the goal of this particular network is to avoid all central authorities.

A CA 'in good standing' with the blockchain is assumed to have a minimal level of trust; if other CAs see malicious behavior and drop their trust in that CA, then evaluations of the trust metric thereafter will show it to be less trustworthy.

If you can think of a simpler, more robust method of handling this that will also ensure the full encryption of all traffic and the ability to track payments, feel free to suggest it.

jcoffland commented 10 years ago

@Dr-Syn, my problem is with the hand-waving parts, i.e. "that latency will likely be minimal, and the risk acceptably low." How do you know this to be true?

My lack of a better idea does not make the CA idea sound. How do you defend against this attack?:

1) I create several CAs and build up their reputation over time, or I convince several CAs in good standing to collude with me. 2) We blacklist all other CAs, and this propagates through the network.

Also, my original attack still works; take out step 3) if you like. How do you defend against even a single CA in good standing that decides it would be very profitable to go rogue?

I want to see this work as badly as you do. That's why I'm picking it apart.

cbbcbail commented 10 years ago

@jcoffland

Attack 2: 1) We can't stop this. There is no way to stop CAs from doing their job well, and there shouldn't be. 2) This, however, shouldn't be possible. As Dr-Syn said, CAs do not have the ability to "blacklist" other CAs.

Attack 1: 1) Once again, no way to stop this step. 2) We have added a "mining" step that makes user creation expensive. It will take a long time to become a user, making this attack at least quite a bit harder to pull off. 3) I don't believe users can declare CAs "invalid"; they can simply de-trust them and choose to use a different one. 4) With the requirement of CAs, as well as the mining of user IDs, Sybil attacks shouldn't be possible in this way. They might be possible if you spend a very long time mining many user IDs and becoming a trusted CA, but these measures at least hinder the process, making it harder.

While I see that there are some potential areas of vulnerability, I don't see how anyone could effectively take advantage of them. If you see another attack method that would bypass these systems, then by all means let us know and we can get to changing things around, but as I see it, the ones you suggested shouldn't be possible.

Dr-Syn commented 10 years ago

@jcoffland The latency would be the time between the CA de-trusting the rogue CA and the next time the client connected to the CA.

Given the architecture of the network being discussed, that time would be "1 connection" or less.

If you can come up with an attack that will cause measurable harm before the next connection of the client to the CA, I'll be reasonably impressed.

Collusion of CAs is a difficult attack to defend against, yes. This is why it's important to make the effort to register as a CA as high as possible.

De-trusting a CA is a public act; it's announcing to the network that you will not accept connections from that CA nor anyone associated with it--so in the sudden-subversion attack you've mentioned, the sudden de-trusting of all the CAs by a single CA or a small fraction of CAs will be, on the whole, not very relevant to the network. The users on the traitor CAs will be inconvenienced, to be sure, but that's about the only effect.

See, it's to be expected that a significant number of CAs will de-trust all but a few others: that's how organizations can set up private grids.

Dr-Syn commented 10 years ago

@cbbcbail

Correction: CAs can blacklist other CAs and it will propagate to the blacklisting CA's users. Users cannot propagate in reverse.

cbbcbail commented 10 years ago

@Dr-Syn Ok, I must be wrong about something. So you're saying that a Certificate Authority can "blacklist" another CA. What exactly would this mean, and what effect would it have? As far as I'm aware, CAs don't have to connect with each other directly.

Dr-Syn commented 10 years ago

@cbbcbail

CAs would be trusted by means of their participation in the blockchain. If a CA did not, for whatever reason, wish to accept connections from another one's users and/or nodes (due to bad behavior, etc.), then it would remove trust on its end.

Since all connections are encrypted--this is mandatory--the only way in which a connection can be made is through negotiation with valid certificates.

If the CA that you use is not trusting the other CA, then that means the certificates on the other end are invalid to you. Therefore, the connection cannot be made.

This revocation of trust would be communicated to the users when the user goes to make a connection and verifies the proffered certificates with their CA. The connection fails at that point, because the trust has been revoked by the CA.

The user has the option not to trust certain CAs. This is for their own information, and will not have any effect on other users or CAs.
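
The two-level rule described here (CA-level revocation propagates to that CA's users; a user's personal de-trust stays local) can be sketched as follows. All names and structures are invented for illustration:

```python
def connection_allowed(my_ca_trusts, my_personal_detrusts, peer_cert_issuer):
    """Sketch of connection negotiation: the peer's certificate must
    be issued by a CA that *my* CA still trusts, and additionally not
    be on my personal de-trust list. The personal list affects only
    me; the CA-level list affects all of that CA's users."""
    if peer_cert_issuer not in my_ca_trusts:
        return False  # revoked at the CA level: propagates to all users
    if peer_cert_issuer in my_personal_detrusts:
        return False  # local preference only
    return True

# My CA trusts ca1 and ca2; I personally de-trust ca2.
assert connection_allowed({"ca1", "ca2"}, {"ca2"}, "ca1")
assert not connection_allowed({"ca1", "ca2"}, {"ca2"}, "ca2")
assert not connection_allowed({"ca1", "ca2"}, set(), "rogue_ca")
```

The asymmetry jcoffland probes below falls out of the first check: whoever controls `my_ca_trusts` controls which connections its users can make at all.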

jcoffland commented 10 years ago

@Dr-Syn, you said "The latency would be the time between the CA de-trusting the rogue CA and the next time the client connected to the CA." The latency is much longer: you are not counting the most important interval, from when the CA(s) decide(s) to act maliciously to the point when it is first detected.

Both my suggested attacks are still valid. I will reiterate and refine my attacks.

Attack 1: 1) Create a CA; gain users by being honest and cheap. 2) Create a large number of users. 3) Perform Sybil attacks using these users.

Attack 2: 1) Create several CAs and build up their reputation over time, or convince several CAs in good standing to collude. 2) Blacklist some (or all) other CAs.

Defense: a) Blacklist the rogue CAs. How do you decide to distrust a CA? How long does this decision take? Is it automatic? How do you define/detect bad CA behavior? If many of the CAs blacklist each other, how do I decide whom to believe? Doesn't this fragment the network?

b) Trusted CAs are costly to create. Yes, but you only need one to create as many users as you like. Costly in terms of time? What's to stop me from creating 100 CAs at once? Costly in terms of money? Is this fair? Costly in terms of CPU time? Costly does not mean impossible.

I argue that the burden is on you to prove your system correct from the ground up. Rigorously. Not the other way around. This still looks like a Frankenstein project with many loose ends.

Dr-Syn commented 10 years ago

Yes, I agree that this project's got too many loose ends. I've been trying to convince people to focus on creating the base protocol and worrying about application-layer stuff later.

Anyway.

@jcoffland Attack 1:

Sybilling off a subset of users in a CA is expected behavior, and the expected response for a CA is to revoke certifications for those Sybils. If it does not do so, then the affected systems that suffer some consequence from the Sybil behavior will de-trust the CA hosting those Sybils.

This is an improvement over standard web traffic, which has no means of detecting Sybils other than IP bans (which, as we all know, are trivial to evade); in this case, requiring valid connections with a CA limits the speed at which Sybils can be created.

This is a mitigation measure, not a prevention measure.

@jcoffland Attack 2:

Only the users of the colluding CAs will see any effect; they can freely decide to migrate to other CAs. A hypothetical 'reputation' metric may drop but, to mutilate a proverb, a sinking tide lowers all boats.

Also, de-trusting all but a select group of CAs is expected behavior: if you want to set up a private grid for your organization, that would be how you would maintain the privacy of your grid. This is not a bug.

@jcoffland Issue B:

Costly in the same way that other blockchain-derived items are costly. The founders of this project wanted, essentially, a cryptocoin-mediated content delivery network, so that's what I'm working on building up for 'em.

It's my understanding that the current thinking in the *coin world is to make transactions costly in terms of RAM--so RAM time expended. If there's a better way, let me know.

JaviLib commented 10 years ago

"How do you define/detect bad CA behavior?"

For example, if the CA tries to create more users than the average.

"If many of the CAs blacklist each other, how do I decide whom to believe?"

In the nodepool there are statistics for every node.

"Doesn't this fragment the network?"

No, unless most of the network goes crazy--just like any other consensus-based system, such as Bitcoin.

"b) Trusted CAs are costly to create. Yes, but you only need one to create as many users as you like."

But you are responsible for controlling those users; otherwise your CA is banned.

"Costly in terms of time? What's to stop me from creating 100 CAs at once? Costly in terms of money? Is this fair? Costly in terms of CPU time? Costly does not mean impossible."

But it means difficult. If generating a CA takes one day on an 8-core i7 computer, I think it is costly; we could even require a week of work. You're going to spend more money creating CAs than the possible gains from a Sybil attack that could be mitigated very quickly by the network. Also remember that a Sybil attack on bandwidth means you need tons of bandwidth, so only botnets infecting millions of users could pull it off. By the time a botnet starts to operate with only some tens of them, revocation of the originating CA happens; botnets would not have a chance to do much harm. We can detect simultaneous connections from users sharing the same CA very easily anyway. Plus, this is a reputation system...
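
This cost argument reduces to a simple inequality; all figures below are hypothetical stand-ins, not numbers from the project:

```python
def sybil_attack_profitable(ca_cost, gain_per_hour, hours_until_revoked):
    """The attack pays off only if the gain accrued before the
    network revokes the rogue CA exceeds the cost of minting that CA.
    Units are arbitrary; only the comparison matters."""
    return gain_per_hour * hours_until_revoked > ca_cost

# A CA costing the equivalent of a week of compute (168 hours),
# revoked within one hour of misbehaving: not worth it.
assert not sybil_attack_profitable(ca_cost=168.0,
                                   gain_per_hour=10.0,
                                   hours_until_revoked=1.0)
# The same CA surviving a full day of undetected abuse: profitable.
assert sybil_attack_profitable(ca_cost=168.0,
                               gain_per_hour=10.0,
                               hours_until_revoked=24.0)
```

Which is exactly why the detection latency jcoffland keeps pressing on matters: the whole defense rests on `hours_until_revoked` staying small.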

"I argue that the burden is on you to prove your system correct from the ground up. Rigorously. Not the other way around. This still looks like a Frankenstein project with many loose ends."

Well, if the project were finished, we would not be asking for help. Please understand the open nature of the project; you can help if you want.

Dr-Syn commented 10 years ago

@JavierRSobrino

I'd also note that my CA proposal requires -continuous- involvement with the pool--so we're talking at least several days' worth of work every single week to maintain certification.

JaviLib commented 10 years ago

Excellent idea.


cbbcbail commented 10 years ago

"How do you define/detect bad CA behavior? For example, if the CA tries to create more users than the average. If many of the CAs blacklist each other how do I decided whom to believe?"

I think we need to set out to specifically define every behavior we will consider malicious, and exactly what the punishment will be for each. How many users is too many for a CA to create? What happens when it creates too many? How many CAs is too many for one CA to "blacklist"? What will be done if they try to do this too much?

These things can help stop some of the problems @jcoffland has mentioned.

Aside from the innate security in the way the network functions, we need to define which behaviors are acceptable, which are not, and what to do about them.

Dr-Syn commented 10 years ago

@cbbcbail You don't need to define those behaviors; those will be up to the individual grids/CAs/users.

Think of it as the 'invisible hand of the market' denying services to bad actors--if you act in such a way that people start de-trusting you or your CA, then you'll find yourself unable to get anything you want.

I realize the temptation is there to codify these behaviors, but seriously, that will do more harm than good, and it will devolve the discussion into endless petty bickering.

Define the protocol, implement the protocol, and let the people who use it decide what they do and do not want to see on it; if they don't like it, then they will dike out that part of the network that does what they don't like.

cbbcbail commented 10 years ago

The temptation is very much there because it is necessary. However, I realize that what I said before may have come across incorrectly from what I meant it to be.

How are users meant to know not to trust a CA? We're talking about expecting the users of a service to manage the entire network themselves. This will not happen. All we need to create is a system in which potentially malicious behavior, or behaviors we define as unacceptable, affects a reputation score for the CA. If a user sees a low score, they will probably not wish to connect to that CA. If a CA does well and earns a good score, it is more likely to attract users. Thus we have created a natural system in which a higher score is desirable and actions bad for the network are naturally discouraged. The network must manage itself naturally, not through its users doing work to maintain it.

We need to know how the system will work and design it effectively before defining and implementing the protocol. And once again, while we want an open-source project, not all users are going to be willing to pitch in and do a lot of work to maintain it, especially once we go mainstream. Think of the number of "follows" Bitcloud has versus the number of people who have contributed. We also need to make sure that the network we are creating works BEFORE implementation; major failures after release will ruin credibility as well as reliability.

Dr-Syn commented 10 years ago

@cbbcbail

The users inherit the trust/untrust list of the CA to which they are directly associated. That is part of the reason why the CAs receive a cut of the transactions that they mediate; they are intended to act on behalf of their users.

Defining behaviors to be rewarded or punished causes system-gaming, where people skirt around the edges of the rules as close as they can manage. This leads to politics. Politics does not belong in the base protocol; keep it in the apps.

Codes of conduct and codes of behavior will necessarily differ based on the intended audience of the particular segments of the grid that wish to enforce them. Disney's grid will necessarily crack down hard on users who swear; pornhub's grid will let that kind of thing slide. Some grids may even encourage Sybil behavior in their interactions. That's the whole point of allowing for separate grids--that the overall protocol can be flexible enough to support the whole thing.

Do not attempt to define codes of behavior. We are concerned with building infrastructure. We are not concerned with how people choose to police themselves.

The only reason why I am concerned with sibyls is that they could pose an existential threat to the whole of bitcloud under certain circumstances--beyond that, honestly, they aren't a concern.

cbbcbail commented 10 years ago

So a user would be permanently damaged if they were using a CA that did something bad?

I think that defining boundaries would be difficult, but unfortunately necessary. We have to avoid ambiguity, which is a sure sign of a poor system. It would also lead to disputes further down the road. Not defining them now is merely prolonging the inevitable.

I'm not saying codes of behavior in terms of how people use apps. I'm saying codes of behavior in terms of how people behave in the network itself. Creating a bunch of fake users on Bitcloud is very different from posting a bad word to a website. There are things for the websites to take care of, and there are things for the network to take care of, and they are very different.

There's no sense in building infrastructure if it's exploitable. We have to prevent that from happening.

That is why we all care about Sybils right now, and it is what we are currently talking about. It has nothing to do with Disney or Pornhub. We are talking about building Bitcloud in such a way that it will not crash and burn immediately upon launch.

Dr-Syn commented 10 years ago

@cbbcbail - So a user would be permanently damaged if they were using a CA that did something bad?

No. Users will be able to create identities across multiple CAs. If one misbehaves on them, they have the freedom to choose another.

Creating codes of behavior just means that you'll force administrators to sit around policing people all day. This inevitably leads to abuses of power, cliquism, etc.--none of these things are desirable results, and I for one will not be associated with such fiddle-faddle and rot.

It is better to design the system such that those who do behave badly are either marginalized or, if they make the effort to behave badly, that effort creates more value for the network than handling their behavior removes.

Your concerns are largely founded in a lack of understanding of what is being attempted. Attempting to apply social controls to a technical enterprise is wrongheaded.

jcoffland commented 10 years ago

@cbbcbail "We also need to make sure that the network we are creating works BEFORE implementation. Major failures after release will ruin credibility as well as reliability."

This to me is an important point. Simulating the network would be a great way to validate it at relatively low cost. If there were a website where you could set up simulations and try out different rules and attacks, it would be much easier to discuss and solve problems. Something similar has been done to demonstrate the viability of attacks against Bitcoin. See: http://ebfull.github.io/ & https://bitcointalk.org/index.php?topic=326559.0
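As a trivial example of the kind of simulation that would help, here is a toy model (my own illustrative parameters, not a real network model) of the 80%-agreement rule from the original report, showing how many forged identities an attacker needs to control consensus outright:

```python
# Toy model: a block is accepted when >= 80% of identities agree.
# If identities are free to create, an attacker just mints enough
# of them to clear the threshold alone.

def attacker_controls_consensus(honest, sybils, threshold=0.80):
    total = honest + sybils
    return sybils / total >= threshold

honest = 100
# Smallest number of forged identities s with s / (100 + s) >= 0.8,
# i.e. s >= 400.
sybils = 0
while not attacker_controls_consensus(honest, sybils):
    sybils += 1
print(sybils)  # 400 forged identities against 100 honest ones
```

Even this crude model makes the cost question concrete: the whole security argument hinges on whether creating 400 identities is expensive, which is exactly what a proper simulator could probe under different rules.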

@Dr-Syn, you could plow ahead and implement the unregulated (i.e., without defined blacklisting rules) network with the CAs you are proposing, but I think there is a big risk that it won't work. There is a high likelihood that one or more attacks are possible. It sure would suck to spend all that time developing and have it fall flat. However, if 100 guys like you do this, it stands to reason that a working system will emerge eventually. It seems wasteful, though.

Dr-Syn commented 10 years ago

@jcoffland Network simulators exist: http://www.isi.edu/nsnam/ns/

Blacklists at the CA level belong in a discussion about the workings of CAs.

Since this thread has gone far enough off topic--the original issue of 'critical vulnerability to sibyl attacks' has been addressed and mitigated as well as can be discussed without an actual implementation--I'm going to close this issue.

jcoffland commented 10 years ago

The sibyl attack issue is far from closed, IMO.

larspensjo commented 10 years ago

Just a minor detail, but the correct name is "sybil", not "sibyl", isn't it?