Closed: dantengsky closed this issue 5 years ago
What do you mean by not persistent?
If you mean across node restarts, then no, because nodes cannot restart with the same identity. A re-started node has a new identity and is a new node for all intents and purposes. voted_for is only persistent for the lifetime of the node, by design.
When a CCF service loses f + 1 nodes, you are absolutely right that the only way to resume is to follow the Catastrophic Recovery procedure to create a new network, which will have a new identity, from the ledger of the old service. It is not possible to add new nodes to the old service, because commit requires f + 1 nodes to agree.
I hope this helps, but please let me know if anything is unclear.
Thanks! It helps a lot.
Please bear with my curiosity.
A re-started node has a new identity and is a new node for all intents and purposes. voted_for is only persistent for the lifetime of the node, by design.
Is it a way of eliminating forking attacks?
Coco Framework white paper, page 4, says:
Through its use of TEEs, Coco creates a network of trusted nodes that reduces the problem from Byzantine fault tolerance to crash fault tolerance. This simplifies consensus and thus improves transaction speed and latency—all without compromising security or assuming trust.
May I ask why CCF chose PBFT instead of the reduction? Is it because `monotonic_counter` and `trusted_time` are not "practical" enough?
Not allowing identities to be reused is not a way to eliminate forking attacks. Forking attacks are impossible in CCF because a consensus protocol (Raft currently) is used to decide what transactions are committed at which versions.
If a node identity was re-used, it would potentially mean that it had left the TEE (whereas in the current implementation, node identity is created inside the enclave and disappears with TEE shutdown), and there would be scenarios where an identity wants to join multiple times etc.
I'm not sure I understand your second point. The current CCF implementation uses Raft only, although there is work in progress to add PBFT as well.
Thanks a lot for your reply and my apologies for the unclear questions.
By "forking attack" I mean one where the "adversary leverages two concurrently running enclave instances." -- ROTE: Rollback Protection for Trusted Execution
To the best of my knowledge, a naive Raft + TEE implementation (which CCF is obviously not) might not be Byzantine fault tolerant under rollback/forking or some other kinds of attacks.
CCF's TR claims that, by using TEEs and Raft (and other enhancements), CCF is crash fault-tolerant (if I understand correctly). While reading the technical report and digging through CCF's code, I get the strong impression that CCF is trying very hard to make the TEE + Raft configuration a "kind of" BFT solution, without using the SGX monotonic counter:
Another concern I still have is that a malicious host might be able to attack the service by sending fake "high frequency" ticks to the application enclave while also dropping inbound messages to it. The Raft state machine inside would then stay in the candidate state, voting for the next term, and the voting would be so fast that no other honest enclave could ever win leadership of a term; thus the service loses the liveness property. Is this kind of attack feasible in CCF (with Raft)?
Can I say that CCF, configured with Raft, is a BFT solution, with some slight weakness as far as the liveness property is concerned?
Does the usage of the PBFT protocol in CCF mainly focus on scenarios where enclaves are compromised?
Let me know if this is not the proper place to post this.
Many thanks.
CCF with Raft, running in a TEE, is not Byzantine fault tolerant. We are not making that claim. There are absolutely scenarios where an attacker with some control of the network or the hosts, without breaking enclaves, can cause a denial of service.
The reason we are planning to add PBFT as an option is to offer integrity guarantees if some enclaves are compromised, but confidentiality would still be lost under those circumstances.
To answer your questions about attacks, I don't believe the forking attack you describe applies to CCF as it stands because each node generates a unique identity, which never leaves the enclave, and code is attested. Code that wouldn't do this would not be able to join, unless members deliberately allowed it to. If you can think of a mechanism for a forking attack, we are definitely interested!
The second attack you describe affects liveness (again, we make no claims about liveness under attack). It sounds plausible and it may be possible to mitigate it with some changes. It would not affect CCF running with PBFT as its consensus implementation.
Some additional comments:
Yes, our design supports PBFT to prevent advanced attacks against integrity and liveness even if a few enclaves get compromised. These are not claims we are making for the current RAFT-based implementation, where the signed evidence provides support only to detect such attacks and blame them on the compromised enclaves.
Regarding liveness, note that all messages between replicas are authenticated. Hence, CCF using RAFT already resists liveness attacks from a minority of hosts. For example, if the host of the primary plays tricks with local network scheduling, it will simply cause that primary to be replaced by a more responsive replica.
As you suggested, we do not seal replica signing keys, or even attempt to recover crashed replicas. Instead CCF replaces them with new replicas with fresh identities and keys. This provides stronger forward secrecy and integrity. Thus, we do not rely on local counters (which are relatively slow and may not resist hardware failures/attacks). Instead, we rely on the replication protocol to reach consensus on the contents of the ledger. The closest to a monotonic counter we have is the commit index in the ledger.
@fournet
I really appreciate your comments.
Regarding liveness, note that all messages between replicas are authenticated. Hence, CCF using RAFT already resists liveness attacks from a minority of hosts. For example, if the host of the primary plays tricks with local network scheduling, it will simply cause that primary to be replaced by a more responsive replica.
IMHO, a liveness attack might be feasible:
Suppose a service composed of 3 nodes {n0 .. n2}; all nodes are synced (same term, index) at the beginning.
The adversary controls a minority, {n0} (enclaves are not compromised).
n0 is in the `Follower` state. The adversary may modify the code of the untrusted zone so that `AdminMessage::tick` messages are sent to the enclave much more frequently, with a large enough `elapsed_ms` value to trigger timeouts.
If I get it right, the victim enclave will keep sending `RequestVote` messages to its peers, and because the messages are constructed by the enclave, the other peers will treat the `RequestVote` messages as legitimate; the honest leader will also transition to the `Follower` state.
The adversary also drops inbound messages to the victim enclave, so that the victim enclave cannot transition to the `Leader` state, and hence no `AppendEntries` messages will be sent.
The malicious node keeps being the first `RequestVote` sender for each new term, so the cluster will be effectively shut down.
The network is still partially synchronous and a majority is still alive, but liveness no longer holds.
@achamayou
I am really grateful for your reply.
To answer your questions about attacks, I don't believe the forking attack you describe applies to CCF as it stands because each node generates a unique identity, which never leaves the enclave, and code is attested. Code that wouldn't do this would not be able to join, unless members deliberately allowed it to. If you can think of a mechanism for a forking attack, we are definitely interested!
My main concern is the process of (hot) replacing a node while a majority is still alive (so that the whole service need not be shut down, if I understood right).
But after a lot of thinking over the last few days, I still could not find a way to attack the node-replacement process.
If I figure something out later on, I'd like to share it with you.
You guys rock, thanks.
@dantengsky there is no hot replacement, only adding new nodes, and that's a governance change, so it's generally subject to a vote (although one could imagine a constitution that always allows it). Please do continue to share your thoughts on attacks, they are most welcome!
About liveness attack, @fournet may correct me, but the attack you describe sounds possible to me. I think it can be mitigated by throttling vote requests (it is possible to busy wait in the enclave to ensure at least a certain amount of time has elapsed), but it's not very elegant. I will give this more thought, thank you for bringing it up.
@dantengsky after giving this some more thought, we agree that this attack is feasible. What makes it possible fundamentally is the fact that the host can tamper with the duration of the election timeout.
We believe that this can be fixed by not relying on the host for the election timeout and instead implementing an active wait inside the enclave, which can have a guaranteed lower bound. I have opened this as #99 for us to fix, thank you for reporting it!
I am closing this issue because I think it has come to a conclusion, but feel free to open more issues if there are other topics you would like to discuss, or to comment on #99 if you have further thoughts on this particular matter.
`voted_for` is not persistent. Is it partially implemented, or am I missing something?
If a CCF service loses f + 1 nodes, is it possible to recover the service by adding new nodes (on the same platforms)? In section IV-D, "Adding a Node to a Service", the TR says:
Does this imply that if f + 1 nodes crash, one should shut down the whole service and follow the "Catastrophic Recovery" instructions to recover the service?