CryptoConsortium / CCSS

The CryptoCurrency Security Standard
https://cryptoconsortium.github.io/CCSS/

No discussion of full nodes #15

Open petertodd opened 9 years ago

petertodd commented 9 years ago

Without a full node you're trusting miners to verify for you, which means that you can be fooled into accepting invalid payments by someone with access to hashing power. This is not hard! Pool security is quite poor, and even worse, the security of the connections between hashers and pools is often poor and hijackable.

The standard should explicitly require transaction validation infrastructure to use full nodes to determine what the current blockchain is, at all levels. (Remember that full nodes can use pruning.) Secondly, the security of those full nodes is important and should be held to nearly the same standards as other transaction validation infrastructure. For instance, a full node running on a VPS has similar risks to running an exchange on a VPS: if the exchange runs on a physically controlled server but the full node runs on a VPS, hacking the VPS can still cause the server to accept invalid transactions as valid.
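
A minimal sanity check along those lines, for illustration only: it assumes a local Bitcoin Core node with JSON-RPC enabled (the URL and credentials below are placeholders) and simply confirms that the node your validation infrastructure relies on is a fully synced, possibly pruned, full node.

```python
import requests

RPC_URL = "http://127.0.0.1:8332"            # local full node (placeholder)
RPC_AUTH = ("rpcuser", "rpcpassword")        # placeholder credentials

def rpc(method, *params):
    """Minimal JSON-RPC call to a local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "ccss", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

info = rpc("getblockchaininfo")
synced = info["blocks"] == info["headers"] and info["verificationprogress"] > 0.999
print(f"chain={info['chain']} height={info['blocks']} pruned={info['pruned']} synced={synced}")
if not synced:
    raise SystemExit("Node has not fully validated the chain; don't treat its view as authoritative yet.")
```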

Abstrct commented 9 years ago

Hey Peter,

Thanks for the suggestion, I think this addition to the standard makes a huge amount of sense.

Is there somewhere in the standard that this fits already, or do we need a new aspect? My initial review makes me think that it will need its own section, likely under operations.

Perhaps: 2.05 Transaction Validation

In addition to describing proper full node requirements, we could also use this section to prescribe best practices for unconfirmed transactions and for dealing with transactions that were validated but later orphaned.

jlopp commented 9 years ago

+1

With the proliferation of third party blockchain data APIs, it appears many companies are trusting them to verify transaction data. Smaller companies may not have the resources to run their own node + blockchain indexing infrastructure, thus it would be helpful if these trusted APIs conform to a standard set by the consortium.

petertodd commented 9 years ago

@Abstrct I think that makes a lot of sense. Like I say, your full node is part of how transactions are validated as much as any other code, so it needs to be held to the same standards. The one nuance is that because full nodes deal with public information, it may be OK to cross-check multiple less secure full nodes against each other, on the assumption that not all of those nodes will be compromised at once - harder to do with customer information, where you need privacy as well.

With regard to unconfirmed transactions, AKA zeroconf, the issue from a certification perspective is whether losses due to double-spends can threaten the financial viability of the company. The hard part is that there is no way to do a good risk assessment. We should also consider the ecosystem-wide effects - the way people are preventing zeroconf losses has serious impacts on decentralization, e.g. the sybil attacks some companies mount against the P2P network to detect double-spends, and the proposals by some to penalize miners who mine blocks with perceived double-spends.

@jlopp Well, to be clear, the standard has to treat trusting others to do transaction verification as a similar risk to trusting others to hold private keys. Obviously in some cases that is acceptable, but the protections are legal tools, not technical. Now, that said, there are verification approaches possible, e.g. outsourcing the maintenance of tx info databases to third-party providers while checking the results returned locally against a pruned node.
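
As a rough sketch of that last point (not a prescribed implementation), the local cross-check could look something like this: the third-party endpoint and RPC credentials are placeholders, and the local node may be pruned since only block hashes and headers are consulted.

```python
import requests

RPC_URL = "http://127.0.0.1:8332"            # local, possibly pruned, full node (placeholder)
RPC_AUTH = ("rpcuser", "rpcpassword")        # placeholder credentials

def rpc(method, *params):
    """Minimal JSON-RPC call to the local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "ccss", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

def third_party_block_hash(height):
    """Hypothetical provider lookup; a real provider's API will differ."""
    return requests.get(f"https://api.example.com/block-height/{height}", timeout=10).text.strip()

def cross_check(height):
    """Accept provider data for this height only if the local node reports the same block hash
    and still considers that block part of its best chain."""
    local_hash = rpc("getblockhash", height)
    remote_hash = third_party_block_hash(height)
    if local_hash != remote_hash:
        raise RuntimeError(f"Provider disagrees with local node at height {height}: "
                           f"{remote_hash} vs {local_hash}")
    header = rpc("getblockheader", local_hash)
    if header["confirmations"] < 1:          # negative confirmations mean the block was reorged out
        raise RuntimeError(f"Block {local_hash} is no longer in the local node's best chain")
    return header["confirmations"]
```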

jlopp commented 9 years ago

@petertodd Certainly, it would be preferable for entities not to trust third parties (or to trust but verify), but if they are going to trust third parties then it would be preferable that those third parties adhere to a security standard.

Along these same lines of transaction validation requirements, a standard should mention transaction malleability and how to guard against it.

Abstrct commented 9 years ago

@petertodd I agree that suggesting the use of fake (for lack of a better term) nodes would be a detriment to the health of the network and not something I would support.

Assuming that third parties could offer their blockchain details in a standard way, it would drastically simplify the ability to write a library/application that pulls data from a number of sources, compares and acts accordingly based on the results.

Since we would be pulling from multiple sources, the hope is that if one is nefarious, this setup wouldn't be as negatively affected as a system pulling only from that source. That being said, I still feel that a third party voluntarily following a standard, and even publishing audits based on it, would certainly rank higher on my list of potential vendors than one that has not. So I agree with @jlopp that a standard has its place.
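
A minimal sketch of that compare-and-act loop, with made-up provider endpoints and placeholder RPC credentials; a real implementation would use whatever APIs and formats the chosen providers actually expose.

```python
import requests

RPC_URL, RPC_AUTH = "http://127.0.0.1:8332", ("rpcuser", "rpcpassword")   # placeholders

def own_node_hash(height):
    """Block hash at `height` according to our own full node (Bitcoin Core JSON-RPC)."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "cmp", "method": "getblockhash",
                                "params": [height]}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

# Hypothetical third-party endpoints; the URLs and response formats are placeholders.
SOURCES = {
    "own_node":   own_node_hash,
    "provider_a": lambda h: requests.get(f"https://a.example.com/block-height/{h}", timeout=10).text.strip(),
    "provider_b": lambda h: requests.get(f"https://b.example.com/v1/height/{h}", timeout=10).json()["hash"],
}

def agreed_block_hash(height):
    """Return the block hash at `height` only if every reachable source agrees;
    raise otherwise so the caller can pause processing and investigate."""
    answers, errors = {}, {}
    for name, fetch in SOURCES.items():
        try:
            answers[name] = fetch(height)
        except Exception as exc:
            errors[name] = str(exc)          # an unreachable source is worth surfacing, not ignoring
    if not answers or len(set(answers.values())) != 1:
        raise RuntimeError(f"Blockchain sources disagree at height {height}: {answers} (errors: {errors})")
    return next(iter(answers.values()))
```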

Now, as for maintaining a blockchain node securely, what are some of the specifics one would prescribe?

thedoctor commented 9 years ago

Chiming in: at Gem we pull blockchain data from multiple sources, including other APIs and our own nodes. To @jlopp's point, it would have been much easier to build the system that processes all of that incoming data if there were a normalized format for services exposing blockchain data; as it is, we have obnoxious-to-maintain adapters for each external data source. But I don't think it's fair to impose a requirement on companies to use a particular scheme or encoding, since the choice of one over another doesn't directly affect the security of the service exposing that data.

That said, it may be appropriate to suggest a standard in this context (although I think that might be more informational BIP material than anything else).

@Abstrct @petertodd Strongly agree that a TX Validation section is appropriate, but I would argue that requiring certified organizations to validate TXs on their own full node is:

  1. as @jlopp noted potentially infeasible for fledgling startups
  2. not necessarily more secure than trusting a third party service with validation (I'm thinking of sybil/ddos attacks to which larger organizations with distributed node clusters are less vulnerable)

What about a level breakdown something like this:

Level 3 requires at least a small degree of reliance on external data sources which some people may find offensive, but it mitigates the situation where an organization's node infrastructure is targeted for attack.

It also may be reasonable at one or more of the levels to require only probabilistic external verification where some percentage of transactions are validated by external sources.

Thoughts?

luke-jr commented 9 years ago

It's no easier to target your own node, than it is to identify and target external nodes you rely on. Would it be more acceptable to simply require a full node run by an organisation with which there is an explicit personal contract to provide the service with some kind of SLA?

jlopp commented 9 years ago

@thedoctor That's a good start, though I would propose that at the highest level of security, you should not be querying third parties for transaction validation because it's a privacy issue, much like how using bloom filters with SPV is a privacy issue. Instead I would suggest using one or more highly connected full nodes that are run on secure machines as @petertodd suggested. If you wanted to add in additional "third party data sources" then I'd recommend doing so by peering directly with node(s) run by the certified organization, since this would retain the trustlessness inherent to full nodes.
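
For what it's worth, a sketch of that peering setup against a reasonably recent Bitcoin Core node over JSON-RPC; the peer addresses and credentials are placeholders for nodes run by organizations you actually have a relationship with.

```python
import requests

RPC_URL, RPC_AUTH = "http://127.0.0.1:8332", ("rpcuser", "rpcpassword")   # placeholders

def rpc(method, *params):
    """Minimal JSON-RPC call to the local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "peer", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

# Hypothetical addresses of full nodes operated by certified organizations.
TRUSTED_PEERS = ["node1.certified-org.example:8333", "node2.certified-org.example:8333"]

for peer in TRUSTED_PEERS:
    rpc("addnode", peer, "add")              # ask our node to keep a connection to this peer

# Confirm the added peers are actually connected.
for entry in rpc("getaddednodeinfo"):
    print(entry["addednode"], "connected:", entry["connected"])
```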

thedoctor commented 9 years ago

@jlopp

I'd recommend doing so by peering directly with node(s) run by the certified organization, since this would retain the trustlessness inherent to full nodes.

I'm not sure I know what you mean. How is relying on a specific organization's node practically distinct from relying on blockchain data they provide through another protocol, apart from the marginal privacy offered by bloom filters?

@luke-jr

It's no easier to target your own node, than it is to identify and target external nodes you rely on

Target for what? I don't know what point you're addressing here.

...contract to provide the service with some kind of SLA?

This would probably be a good policy for most companies, but does requiring legal constructs fall within the scope of a security advisory? (Although a requirement to use nodes run by a certified org arguably does the same thing)

I have the same question about the privacy concerns. I agree with the points raised and acknowledge that leaking private data often constitutes a security risk, but I'm not sure it's reasonable to require companies not to use procedures that fail to achieve [some level of] privacy, even at the highest levels. Some applications may not care about privacy, or may even choose to eschew it deliberately, and those use cases shouldn't be excluded from receiving a security certification.

As I see it, while failing to run your own full node is a security risk, relying solely on your own node infrastructure, if it is not sufficiently large, distributed, or up to date, presents an equal or greater risk.

That said, is there a way we can encourage the use of third party data sources for corroboration such that privacy risk is minimized? It may be difficult to trade privacy for bandwidth as in the SPV protocol depending on a service's pricing model.

jlopp commented 9 years ago

@thedoctor if you're peering with their node using the Bitcoin protocol, you're not relying upon it. You're instead ensuring that you have a connection to the network that is highly unlikely to be a sybil node.

dexX7 commented 9 years ago

As I see it, while failing to run your own full node is a security risk, relying solely on your own node infrastructure, if it is not sufficiently large, distributed, or up to date, presents an equal or greater risk.

It would be great to define these risks in greater detail.

Privacy concerns

Exposing nodes, or infrastructure in general, may have security implications for an organization due to a larger attack surface, though I don't see a link between this topic and privacy that goes much beyond that point.

... potentially infeasible for fledgling startups

Please reconsider this perspective. I do not disagree that relying on third parties can be sufficient, but running one or more fully verifying nodes shouldn't be dismissed so lightly. If an organization requires a correct view of the blockchain (exchanges, payment processors, ...), then arguing about costs seems misplaced.

Note that there are several aspects to consider in this context:

The impact of block reorganizations may easily be overlooked, but seems incredibly important to me, given that a reorg can potentially affect and reverse a chain of events and processes.

Using data from unauthenticated third parties should be penalized by all means, and relying on trusted/reputable third parties shouldn't score points for anything but level 1, especially when considering the level 2+ requirements of other areas, or the arguably low costs of maintaining nodes to begin with.

petertodd commented 9 years ago

Sounds like I should do up an in-depth analysis of this; a lot of the above is based on mistaken assumptions, and it's also not based on actual attack scenarios.

re: "fledgling startups" - running a full node costs a few dollars a month; the vast majority of the cost in reality will be employee time just setting it up. Again, outsourcing it is not much different than trusting a third party with your private keys unless careful steps are taken.

I'll see what I can do re: finding funding for this - might be able to get it funded via existing clients, or perhaps through some crowd fund.

mperklin commented 9 years ago

@petertodd, I've taken a step back from participating in this discussion because it's taken a while to consider everything.

You make an excellent point that the security of any information system that deals with a cryptocurrency depends on accurate transaction / chain information so it can make its decisions appropriately. Any source of malicious, malformed, or inaccurate blockchain information represents a risk to the information system.

While that point is very easy to agree with, it's the next point that becomes difficult.

On the one hand, relying on a 3rd party service (like Gem) creates a trust relationship that can be abused. Gem can serve you malicious transactions that prompt your system to take steps it normally wouldn't have. "So you shouldn't trust Gem"

On the other hand, you decide to run your own full node and pull your transaction information from it instead. That node could still provide malicious transactions, but even worse, now YOU are responsible for maintaining the node, ensuring it has a healthy connection count to the rest of the network, and ensuring it's up-to-date as the codebase/protocol matures. What if you suck at IT?

This seems like a tradeoff. A company like Gem has dedicated engineers working hard to ensure their blockchain information is accurate, timely, and up-to-date. They have a financial incentive to provide accurate data, or else their reputation will crumble and they will lose their clients. There are clear cases where there are FEWER risks in trusting 3rd-party blockchain information than in operating your own full node internally.

Ultimately it comes down to every business's own use cases, risk tolerances, and system architecture. Some businesses would favour using internal nodes for their systems at the expense of maintaining them properly, whereas other businesses would favour outsourcing the retrieval of blockchain information to a trusted 3rd party.

@luke-jr made a great point about SLAs. Rather than having the CCSS mandate technical requirements (like maintaining your own node), maybe it's more effective to mandate non-technical requirements like having an SLA with your blockchain provider that ensures delivery of accurate information.

Getting back on point... if a new aspect were to be added to the CCSS for "Blockchain Information Source" I don't see a justifiable way to mandate internal or external blockchain sources since both have valid use cases and both can be secure. Similarly, I can't think of a set of metrics that would allow you to grade the security offered by whichever blockchain information source is serving the information.

What would the metrics be?

Looking at these metrics, I can't see a clear way to map them to increasing levels of security.

Maybe I'm missing some metrics... what other metrics can we use to grade the security of a system's blockchain information source?

petertodd commented 9 years ago

@mperklin Seems like we're in agreement really - someone trusted needs to be running a full node, and if it's not you, then you're in a situation not unlike having someone trusted hold your bitcoins for you. Equally, trusting someone else to hold your bitcoins can be made low-risk for a business with the appropriate SLAs, insurance, etc., and in many cases will carry lower risks than holding them yourself.

As for the mechanics of actually running a node... keep in mind that Bitcoin's PoW is itself inherently a sybil protection, so all this stuff about the number of connections to peers etc. isn't a big deal. Equally, the Bitcoin P2P protocol is entirely unauthenticated, modulo Tor onion support, so it's not easy to guarantee anything about who you are connected to.

mperklin commented 9 years ago

@petertodd, despite being in full agreement that a "secure system's" blockchain information needs to be accurately served from a trusted node, I'm starting to think there's no way we can add this as a new aspect within CCSS.

I fully agree that metrics like "number of connected peers" are not good metrics to measure the 'trustability' of a node's blockchain data.

Unless we can think of such metrics, we'll have to drop this suggestion. I've rattled the idea around in my brain over the last week and haven't been able to identify anything we can measure about a system that proves the "trustability" of its blockchain nodes.

Simply saying "You must use a trusted node!" obviously isn't enough, and forcing people to use either their own node OR a well-built 3rd-party blockchain service under an SLA isn't a good idea because we all agree that both configurations are valid.

We'll leave this issue unclosed indefinitely because your suggestion is a good one... it's just not one we can act upon at the moment. Maybe we'll have a eureka moment sometime in the future.

thedoctor commented 9 years ago

FWIW, it still seems to me that the greatest (or at least, most addressable) risk posed to security (privacy aside) here stems from an org trusting a single data source.

I agree with @mperklin that analyzing the trustworthiness of any arbitrary source is both difficult and potentially beyond the scope of this specification, but even ignoring that goal, there is still a big win from regular sanity-checking, in that the org will at least be probabilistically alerted to potential attacks against (or potentially costly bugs in) their data provider or node.

I'm convinced that requiring an organization to run a node and verify some amount of their blockchain data against it is reasonable at some level, but there is a potentially significant cost here that isn't bandwidth or electricity: it's the time it takes someone on the organization's team to learn how to maintain a node.

Requiring every org that wants this certification to have someone to fill that role may or may not be a good idea? If level 1 is targeting companies that dgaf about blockchains and just want to use btc and get a "don't worry, customer, we're not COMPLETELY irresponsible" badge to put on their app, then maybe we should back off a bit.

As for mapping to increasing levels of security:

L1: 2+ data sources
L2: org-run full node
L3: run node on-prem instead of in a hosted environment??

mperklin commented 9 years ago

@thedoctor I'm not a fan of your L1-3 map. Your L3 requires a node to be run on-premises, but I think we both agree it's a perfectly valid setup to consume a 3rd party's data and outsource the maintenance of the node to them.

Your L1 may be on to something, though I don't know if it's helpful or a hindrance.

Let's say I choose to use Gem for my blockchain data, and I have my own node operating internally as well. Every time I get new data to act on from Gem, I validate that my node also knows about it before I start processing my business logic.

This seems like a great way to ensure you're only performing your business's logic on "true" blockchain information... but there are problems.

If you're using one node (internal or external) and you're processing your business logic whenever it gives you new information, a node outage is easy to deal with. You're not going to process any business logic until it comes back up, and when it does, you pick up where you left off.

But when you have 2 nodes that you use to validate each others' blockchain information, things become a LOT messier. Now you have some blockchain data that you can't confirm... so do you process it and have your app remember to validate it later when the other node comes back online? Or do you ignore it and choose to process ALL of it when both nodes resume operation and are once-again in agreement?

These types of decisions make the consensus-confirmation logic a LOT more difficult. From a KISS perspective, now you're doing a LOT more work before you even get to the actual business logic your app needs to actually do.

This brings us back to where we were after @petertodd suggested the issue in the first place. While it sounds great at first (Validate business logic! Use 2 nodes to cross-validate each other!), when you start speccing out an actual implementation it seems to reveal that this can't actually be done... or if it can, there's no way to measure that it was done "more correctly" than not doing it in the first place.

thedoctor commented 9 years ago

Those are valid problems iff you're validating transactions against multiple sources when they are first seen and evaluating them only once, but that's not particularly useful behavior for the reasons you outlined.

I was imagining behavior where – especially (or even only) for high-value or high-risk transactions – if I don't see instantaneous agreement between multiple data sources, I wait for them to agree before proceeding.

While we can't guarantee that this process will result in a 'more correct' response to any specific conflict incident, we can be much more confident that the majority of cases are being handled properly. Plus we get alerted to (and can choose how we handle) exceptional circumstances.
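
A minimal sketch of that agreement-gated flow, assuming a deposit tracked by our own node's wallet plus one hypothetical provider API; the endpoint, response fields, and RPC credentials below are placeholders.

```python
import time
import requests

RPC_URL, RPC_AUTH = "http://127.0.0.1:8332", ("rpcuser", "rpcpassword")   # placeholders

def rpc(method, *params):
    """Minimal JSON-RPC call to the local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "agree", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

def own_node_view(txid):
    """(block hash, confirmations) for a wallet transaction, as seen by our own node."""
    tx = rpc("gettransaction", txid)
    return tx.get("blockhash"), tx.get("confirmations", 0)

def provider_view(txid):
    """Hypothetical third-party lookup; a real provider's API will differ."""
    data = requests.get(f"https://api.example.com/tx/{txid}", timeout=10).json()
    return data.get("block_hash"), data.get("confirmations") or 0

def wait_for_agreement(txid, min_confirmations, poll_seconds=30, max_wait=3600):
    """Proceed only once every source reports the transaction in the *same* block with enough
    confirmations. For high-value transactions this trades latency for confidence; a timeout
    means a human should look at the discrepancy rather than the system guessing."""
    deadline = time.time() + max_wait
    views = []
    while time.time() < deadline:
        views = [own_node_view(txid), provider_view(txid)]
        blocks = {block for block, _ in views}
        if len(blocks) == 1 and None not in blocks and min(c for _, c in views) >= min_confirmations:
            return blocks.pop()
        time.sleep(poll_seconds)
    raise TimeoutError(f"Sources did not agree on {txid} within {max_wait}s: {views}")
```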

But this is a lot of work. Maybe:

L1: Your blockchain data provider satisfies L2 requirements
L2: 2+ data sources (and maybe something about alerting when irreconcilable discrepancies are found)

CodeShark commented 9 years ago

Even relying on multiple data sources can be problematic, as the fork of July 4th (BIP66 softfork) illustrated. In this instance, it wasn't a deliberate, malicious attack that caused a problem - rather, it was widespread use of implementations that removed validation checks to reduce latency or resource requirements, along with the use of outdated software unaware of the new rule change. This caused many block explorers and online wallets to all accept the same invalid chain.

There's a serious risk of multiple sources all using buggy implementations...or relying on older software that is unaware of more recent rule changes. So it's not only a matter of checking multiple sources against one another...but also checking multiple implementations against one another.

These risks are increased whenever there are changes in the consensus rules (e.g. soft forks). Perhaps we should consider some kind of alert system to warn blockchain data providers of impending changes, and of detected discrepancies between widely used implementations.
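
One cheap, partial check in that direction (a sketch, not a requirement): look at the user agents advertised by your own node's peers via getpeerinfo and warn if they all belong to the same implementation. User agents are self-reported and spoofable, so treat this as a health indicator, not a guarantee; the RPC credentials are placeholders.

```python
from collections import Counter
import requests

RPC_URL, RPC_AUTH = "http://127.0.0.1:8332", ("rpcuser", "rpcpassword")   # placeholders

def rpc(method, *params):
    """Minimal JSON-RPC call to the local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "impl", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

# Tally the implementation user agents our peers advertise (self-reported, so only indicative).
subversions = Counter(peer["subver"] for peer in rpc("getpeerinfo"))
print("Peer implementations:", dict(subversions))

if len(subversions) <= 1:
    print("WARNING: all peers report the same implementation; a consensus bug in it would "
          "not be caught by cross-checking these peers against each other.")
```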

CodeShark commented 9 years ago

@mperklin: I disagree that it's too messy and can't be done. In fact, I think it very much SHOULD be done...if two instances disagree, it's probably a good time to temporarily halt doing what you're doing and figure out what went wrong. This isn't a hypothetical issue - it's already happened a couple times in the history of Bitcoin.

thedoctor commented 9 years ago

@CodeShark Good point re: implementations – that becomes tricky compared to my proposal, as few blockchain data providers expose information about which implementations they're using, and they could even be using proprietary, closed-source implementations that speak the Bitcoin protocol. There's also no guarantee as to which implementations might be vulnerable to new events like the BIP66 fork, so the task then becomes validating data against an exhaustive or probabilistically comprehensive set of node implementations.

I have ideas about how to do that, even without much of a performance hit, but it's a non-trivial feat of engineering and suggesting that be a requirement for every company seems ludicrous.

CodeShark commented 9 years ago

@thedoctor Not every company needs to do this - it only takes one person to discover a discrepancy and report it...then everyone else can verify that the discrepancy has actually occurred. So it would be sufficient to have a few groups doing this along with some alert network. Assuming the groups aren't all in collusion to withhold information, this could be made practical.

The p2p protocol has an alert message...but many believe it is not a good feature as it introduces strong centralization.

thedoctor commented 9 years ago

@CodeShark it would be dope if that existed, and C4 might be a good candidate for one of the organizations that monitor implementation discrepancies, but until such a monitoring network exists, there's not much we can do to this specification. If/once it does exist, all certified entities should definitely be required to subscribe to the alerts and be prepared to switch providers or implementations should theirs be affected.

CodeShark commented 9 years ago

@thedoctor Also, with regard to things like BIP66, it was possible to know exactly the moment it locked in - and several of us were watching the network at the time...which is how the discrepancy was quickly discovered. Not all forks are caused by known transition points (for instance, the March 11th fork was caused by a database library bug that was completely unforeseen), but scheduled soft forks should definitely be closely monitored.

CodeShark commented 9 years ago

This issue is certainly not trivial - but it is very important, if not urgent, to address, IMHO. We'll probably be seeing at least several more soft forks this year and the next (if not a hard fork, which threatens to substantially increase validation costs perpetually, greatly exacerbating this problem). So we need solutions now.

CodeShark commented 9 years ago

To elaborate further on the fork handling issue, we should distinguish between forks caused by network partitions (i.e. miners mining atop two different chains but with the same rules), forks caused by changes in consensus rules (i.e. software unaware of soft forks), and forks caused by unforeseen differences in behavior between two implementations.

1) Forks caused by network partitions

These are just regular reorgs, typically. The main attack vectors are things like block withholding attacks. For the sake of measuring the irreversibility of a transaction, what really matters is how much work would be required to reverse it. Since these situations are usually short-lived, the easiest approach is to institute a confirmation policy that makes it highly unlikely such a long fork exists (a sketch of such a work-based policy follows after this list). These are common.

2) Forks caused by changes in consensus rules

These forks are scheduled into the bitcoin nodes themselves. Until now we've only really had a mechanism for soft forks, not hard forks. Typically, miners "vote" on the rule changes and the change is locked in once a supermajority is reached. Even if we cannot know in advance exactly when the change will occur, we can still know the moment it does. This happened around July 4th 2015. The issue would have been contained to a small number of miners had it not been the case that large mining pools were voting for the rule change but were not actually validating.

As far as how we'll handle hard forks, all bets are off at this point. In case of such an event, safest would be to observe the network closely for a while after the transition to ensure the new rules overwhelmingly dominate before treating any new transactions as irreversible.

3) Forks caused by unforeseen differences in behavior

These forks are particularly tricky to handle. Safest is to completely halt making transactions upon detection until the issue is investigated and resolved. This happened on March 11th 2013, and it was caused by a software bug. What makes bugs in some ways even more dangerous than deliberate attacks is that they lie outside the economic security model, which assumes miners behave out of self-interest.
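
For case (1), here is a minimal sketch of what a work-based confirmation policy could look like, assuming a local Bitcoin Core node over JSON-RPC (credentials are placeholders); the required-work threshold would be chosen relative to the value at risk.

```python
import requests

RPC_URL, RPC_AUTH = "http://127.0.0.1:8332", ("rpcuser", "rpcpassword")   # placeholders

def rpc(method, *params):
    """Minimal JSON-RPC call to the local Bitcoin Core node."""
    reply = requests.post(RPC_URL, auth=RPC_AUTH, timeout=10,
                          json={"jsonrpc": "1.0", "id": "work", "method": method,
                                "params": list(params)}).json()
    if reply.get("error") is not None:
        raise RuntimeError(reply["error"])
    return reply["result"]

def work_on_top_of(block_hash):
    """Expected number of hashes accumulated on the best chain since (and including) this block.
    Reversing the block would, on average, require redoing at least this much work."""
    block = rpc("getblockheader", block_hash)
    if block["confirmations"] < 1:
        raise RuntimeError("block is no longer in the best chain (reorged out)")
    tip = rpc("getblockheader", rpc("getbestblockhash"))
    parent_work = int(rpc("getblockheader", block["previousblockhash"])["chainwork"], 16)
    return int(tip["chainwork"], 16) - parent_work

def irreversible_enough(block_hash, required_work):
    """Confirmation policy keyed on accumulated proof of work rather than a fixed confirmation
    count; `required_work` would be scaled with the value at risk."""
    return work_on_top_of(block_hash) >= required_work
```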

CodeShark commented 9 years ago

Even if we don't yet have a good fork detection solution, it would be wise to incorporate procedures for handling these situations into the standard assuming the operators are aware of the situation.

kanzure commented 9 years ago

There are also fork issues to consider such as "my third-party bitcoin node operator has chosen a different branch of a reorg with different rules or that severely hinders my ability to reconcile my outstanding obligations through bitcoin".

Similarly, you should tend to avoid "shared hosting" bitcoin node operation. Your trusted third party should ideally be running one bitcoin node per customer, so that in the event of a fork they do not have to enforce one particular fork on all customers, which might be problematic for your business. With a one-bitcoin-node-per-customer model, you could be reasonably confident that your trusted third party will continue to operate the bitcoin node with the rules you want.

CodeShark commented 9 years ago

@kanzure I've always been a strong advocate of running your own full validation nodes if you're serious about security...but alas, it seems most people find it too hard to do this. Even software developers at Bitcoin companies are finding it hard to do this. Note: I still have not given up on convincing people, but it's an uphill battle.

While I fully sympathize with the tech elitist view and understand the desire to gloat about how you know better than everyone else, unfortunately, that doesn't seem to change the reality of the situation. Even miners aren't properly validating...and the worst part is that the strategy can actually be rational!

The only thing that will ultimately fix this problem is either making secure validation much cheaper and easier to integrate into software applications than it currently is...or barring that, having a secure, trustless way to outsource this work, which we currently do not have. Note: for certain use cases, we really have no decent alternatives here - i.e. mobile devices with intermittent, restricted network connections.

For now I would suggest avoiding stark black & white approaches and instead look for ways to start where we are and reverse the trend.

Running a full node should be a requirement for level three at the very least, though.

kanzure commented 9 years ago

While I fully sympathize with the tech elitist view and understand the desire to gloat about how you know better than everyone else

That's a very bizarre interpretation of my previous message, or of any other message in this issue thread. So far I haven't seen anything about "tech elitist views" or gloating, but rather problems regarding trusted nodes and contract terms when using third-party API providers. I think that if you pick a third party that does not use shared nodes then you might be safe, depending on whether they are willing to do support for each of the different nodes they are running for all of their customers.

(Proposals about "tech elitist views" should be submitted as separate issues if you want inclusion about that.....?)

CodeShark commented 9 years ago

@kanzure I apologize if I misread your seemingly snarky sarcasm. Perhaps I was off the mark.

It sounded, upon first reading, as though you were making fun of people using third-party blockchain data providers... and suggesting we might as well let each customer pick their own consensus rules arbitrarily. FWIW, had you intended it that way it would have made for a good troll. :p