CPChain / chain

Mirror of https://bitbucket.org/cpchain/chain
GNU General Public License v3.0

Default TNodes impose security risk #88

Closed siebeneicher closed 4 years ago

siebeneicher commented 5 years ago

From the AMA:

Q: Why are CPChain nodes operating as default proposers still necessary? There will soon be over 100 RNodes operational and a majority runs flawlessly. Most impeached blocks I experienced happened during cancellation of an RNode before an upgrade or a reboot.

A: The motivation for inserting default proposers is to circumvent the worst case, where all community nodes are faulty and the chain can only rely on CPChain team nodes. This case is definitely very extreme, but the existence of default proposers guarantees the throughput of the chain under the worst-case scenario. I understand that the community is concerned about the loss of rewards. The CPChain team will return the tokens held by default proposers to the community. The detailed method will be unveiled later.


I would like to challenge what actually constitutes the worst case. The team's current reasoning for keeping 4 default proposers is that community-owned RNodes might all stop running at once - for whatever reason - while the team's RNodes keep running. How realistic is a scenario in which all community nodes fail but the team nodes keep working?

  1. Let's assume it's a DDoS attack. As a fictional attacker wanting to bring the chain down, I would first collect the IPs of all nodes, community and team. Next, I would hammer each RNode with requests targeting a port with a weak application, to bring the machine down or at least keep it so busy that it could no longer perform chain work. How would team-owned nodes be different from community nodes? Are team nodes not vulnerable to such an attack? Are they better prepared for it? Possibly, but that is never guaranteed. As an attacker, I would actually target the 4 default proposer nodes, because they are well known, so they are much more exposed to attack than community nodes. If such an attack succeeded, any node could go down. Default proposers would not improve throughput; it could be quite the opposite, because these nodes are preferred targets of attacks.

  2. A fictional internet issue: let's assume Shanghai is in a cyber war and there is an internet outage in the data center where all the team nodes and validators are running. Decentralization should actually solve that issue; that is why it is a blockchain. So the problem here is that only community nodes would be alive, with no default proposers and eventually no validators either. A clear blockchain killer for CPChain.

  3. During an election term we have 12 RNodes: 4 defaults and 8 community. The 8 community nodes suddenly stop working, but the 4 defaults still do? I find this unrealistic, see point 1 or 2. The defaults would then probably do the work of the 8 community nodes as well. Under which concrete circumstances could this happen? One case I find totally valid and realistic is a major update of the node software. Let's say the team releases a new version 0.5.X which is not backwards-compatible. Team nodes can be updated by the team in real time, while community nodes on the older version might not participate in the election campaign. That case is indeed valid, BUT it should definitely be solved differently. Whenever the team wants to release a new version, there should either be an automatic update mechanism in place, which updates a node no matter whether it is a team or community node, OR, as a first-shot solution, the release should be planned and communicated to the community a week ahead, so everybody can prepare to update their node in time.
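The update-coordination idea in point 3 could be sketched as a simple operator-side check, e.g. run periodically from cron on an RNode host. This is a hypothetical sketch: the version values, the `cpchain version` command, and the release URL mentioned in the comments are placeholders, not the actual CPChain tooling.

```shell
#!/bin/sh
# Hypothetical update check for an rnode host (all names are placeholders).
# In practice LOCAL_VERSION would come from the node binary,
# e.g. `cpchain version`, and LATEST_VERSION from a team-published
# release file fetched with curl.

LOCAL_VERSION="0.4.8"    # placeholder for the locally installed version
LATEST_VERSION="0.5.0"   # placeholder for the team's announced release

if [ "$LOCAL_VERSION" != "$LATEST_VERSION" ]; then
  echo "update available: $LOCAL_VERSION -> $LATEST_VERSION"
  # operator hook: announce downtime, or trigger an automatic upgrade here
else
  echo "node is up to date ($LOCAL_VERSION)"
fi
```

Whether the hook merely alerts the operator or performs an unattended upgrade is exactly the policy choice discussed above; either way, community nodes would no longer lag behind team nodes after a non-backwards-compatible release.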

My conclusion

All nodes are vulnerable, not just the community nodes. Default nodes are preferred targets for attackers. Centralizing team nodes and validators within one data center, city, region, or country introduces an extra vulnerability that the community nodes do not have, because they are widely spread (decentralized). Default proposers might increase the stability of the chain only in rare scenarios, which are IMO not even the most relevant ones, while introducing well-known targets for attackers. On balance, default proposers are more harmful to security.

Best wishes and thanks for the great work you put in.

ghost commented 5 years ago
  1. Currently it is a known fact that users are running off home networks and probably other low-quality hosting services that offer no DDoS protection. Team nodes are likely on enterprise-level servers distributed across the globe. The chance that you'll be able to take down a community node through a DDoS attack is far higher than for a team node.

  2. Again, team nodes can be distributed to different enterprise-level data centers across the globe, and do not necessarily need to be located in Shanghai.

  3. The fact of the matter is that community nodes will never be as secure and issue-free as a team-run node, due to the team's technical competency and the resources it has to resolve issues quickly.

My conclusion

Although all nodes have vulnerabilities, the safest option currently is to allow the team to continue running their default proposers, until conditions to enforce increased hardware and network checks on the chain are in place.

siebeneicher commented 5 years ago

Thank you @shreder1 for your thoughts on that matter.

I will address them and we will see where we stand:

Currently it is a known fact that users are running off home networks and probably other low-quality hosting services that offer no DDoS protection. Team nodes are likely on enterprise-level servers distributed across the globe. The chance that you'll be able to take down a community node through a DDoS attack is far higher than for a team node.

Where do you get your facts from? I assume most RNodes, and I mean a significant number (over 90%), are running on modern cloud servers. You yourself are a big promoter of AWS; others have promoted Vultr. I personally use IONOS from Europe. AWS, Vultr, and IONOS support DDoS protection out of the box.

Again, team nodes can be distributed to different enterprise-level data centers across the globe, and do not necessarily need to be located in Shanghai.

That is of course possible, and ideal, but not practical, and I am sure it is not the case for the CPChain nodes or validators. That is why we have a blockchain: it is a distributed network of nodes which can be operated by any person or organisation. It is widespread simply because RNode holders have different locations and preferences. That is actually my point: a single organisation is a much better target than almost a hundred individuals.

The fact of the matter is that community nodes will never be as secure and issue-free as a team-run node, due to the team's technical competency and the resources it has to resolve issues quickly.

"Matter of fact" we have 5x more individual nodes than company nodes running. If a social engineered attack happens on a single organisation, like cpchain, and one server could be compromised, its likely that other team nodes can fall as well. How hard it is to attack each individual node/owner for the 90 rnodes running compared to a single organisation, which weak points are its members. Each person having emails and social activitiy (like blockchain events) are actually much better known than for all the anonymous rnode holders.

The argument that team nodes are more secure is vague and misleading. Many Fortune 500 companies have been hacked; they present a valuable target, worth spending time on.

Team nodes are potentially better protected, but they are not 100% secure. My point is: the chance of a successful attack on the company and the team nodes is higher than that of attacking all individual nodes. The damage to the chain if a couple of individual community nodes are compromised is much less significant than if the team nodes are compromised.