After giving this some more thought, I believe that this is what we need:
Inside the enclave:
On the host:
So the motivation here is to simplify the enclave application by allowing it to use the TUN network device directly like on a normal host?
What does the iptables config control? Wouldn't the application running inside the enclave use the TUN device for all traffic by default if it's the only interface available? Or is this about making the firewall rules auditable, regardless of how the host proxies the traffic?
It would be nice to dedicate an additional IP address directly to the enclave tunnel device for applications that don't want to deal with NAT traversal. The host could either route the traffic and use local DHCP/RA to pass the configured address into the enclave, or we might be able to use TAP to bridge to a dedicated (virtual) network device on the host and let the enclave receive its address directly from AWS.
So the motivation here is to simplify the enclave application by allowing it to use the TUN network device directly like on a normal host?
Right. Simplicity is a welcome side effect, but the main motivation is flexibility. For now, enclave applications are constrained by our cumbersome SOCKS interface, which makes it difficult to support real-time applications like, say, Tor relays or DNS proxies (or resolvers), which cannot easily be patched to support SOCKS. On the research side of things, I intend to build a proof of concept of an enclave-enabled Tor relay that allows Tor clients to verify that their relay is behaving according to the protocol.
Also, if we provide a TUN interface, it will be easier to develop enclave applications in languages other than Go. We could decouple nitriding from the enclave application and have two processes running inside the Docker container: nitriding (which provides the attestation endpoint and takes care of the TUN forwarding) and the enclave application (which is self-contained and doesn't even need to know about nitriding).
Does that make sense?
What does the iptables config control? Wouldn't the application running inside the enclave use the TUN device for all traffic by default if it's the only interface available?
I'm not sure but that may be the case, yes. And yes, we can also use iptables to discard undesired traffic while giving users the ability to verify those rules via remote attestation.
It would be nice to dedicate an additional IP address directly to the enclave tunnel device for applications that don't want to deal with NAT traversal. The host could either route the traffic and use local DHCP/RA to pass the configured address into the enclave, or we might be able to use TAP to bridge to a dedicated (virtual) network device on the host and let the enclave receive its address directly from AWS.
Sounds like a useful improvement, yes.
Here's a summary of what my experiments have taught me thus far. The package gvisor-tap-vsock solves the problem raised in this issue: it can create a TAP device inside the enclave and forward traffic between the TAP device and a proxy application running on the EC2 host. Here's a more detailed explanation. gvisor-tap-vsock is fairly easy to integrate into nitriding and, if we end up using it, would allow us to re-architect the STAR randomness server as follows:
I can think of two downsides:
In my opinion, the trade-off is worth it, and I intend to move forward with a PoC PR. Another question worth considering is how we should proceed with nitriding's current single-process architecture. Maintaining two programming models in parallel is too time-consuming, which is why I prefer to move to an API-breaking version 2.0.0. This would abandon the package-based programming model and turn nitriding into a toolkit. (cc @rillian)
(Also copying @mwittie and @dlm: Let me know if you have any thoughts on the above!)
Below is a summary of the changes that I intend to make. The PR contains an architecture diagram that illustrates how nitriding is going to work.
Replace both viproxy and socksproxy with the gvisor proxy application. This proxy implements a user space TCP stack, port forwarding, and VSOCK translation to allow clients on the Internet to talk to the enclave application.
Replace nitriding's Go API (SetKeyMaterial, KeyMaterial) with an enclave-internal HTTP API that the application uses to talk to nitriding. Nitriding now starts two Web servers: one for its Internet-facing HTTP API and another for enclave-internal IPC.
Using the gvisor-tap-vsock package, this PR makes the Internet transparently available to enclave applications. This is convenient for development but also constitutes a security problem: in case of an enclave compromise, we don't want malicious code to be able to exfiltrate data to arbitrary hosts. We can (and should) configure iptables rules on the EC2 host but enclave users won't be able to verify those rules. We can also add some primitive packet filtering to the in-enclave packet forwarding code. This has the advantage that it's user-verifiable.
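To illustrate what user-verifiable, in-enclave filtering could look like, here is a minimal sketch (not nitriding's actual code) that checks the destination address of each outgoing IPv4 packet against a compiled-in allow list before the packet is forwarded. The function names and the allow list are assumptions for the sake of the example; because such a list would be part of the reproducibly built enclave image, users could verify it via remote attestation.

```go
package main

import (
	"fmt"
	"net"
)

// allowedDstNets is a hypothetical, hard-coded allow list. Being compiled
// into the enclave image makes it part of the reproducible build.
var allowedDstNets = mustParseCIDRs([]string{
	"10.0.0.0/8",     // example: internal services
	"203.0.113.0/24", // example: stand-in for real peers
})

func mustParseCIDRs(cidrs []string) []*net.IPNet {
	nets := make([]*net.IPNet, 0, len(cidrs))
	for _, c := range cidrs {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			panic(err)
		}
		nets = append(nets, n)
	}
	return nets
}

// egressAllowed decides whether an outgoing IPv4 packet may leave the
// enclave. The destination address lives in bytes 16-19 of the IPv4 header.
func egressAllowed(packet []byte) bool {
	if len(packet) < 20 || packet[0]>>4 != 4 {
		return false // not a complete IPv4 header; drop
	}
	dst := net.IP(packet[16:20])
	for _, n := range allowedDstNets {
		if n.Contains(dst) {
			return true
		}
	}
	return false
}

func main() {
	// Tiny self-test instead of a real forwarding loop.
	pkt := make([]byte, 20)
	pkt[0] = 0x45 // IPv4, IHL=5
	copy(pkt[16:20], net.IPv4(10, 1, 2, 3).To4())
	fmt.Println(egressAllowed(pkt)) // true
	copy(pkt[16:20], net.IPv4(8, 8, 8, 8).To4())
	fmt.Println(egressAllowed(pkt)) // false
}
```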
What all of this means for nitriding users:
Enclave applications must now be standalone executables because nitriding is no longer a package (it could be but I'm not convinced that it's worth maintaining two separate use cases). Like nitriding, enclave applications must be built reproducibly.
To use nitriding, one has to run the nitriding executable in a Dockerfile. The enclave application does not need to know that it's run inside an enclave unless it makes use of enclave scaling, in which case it needs to talk to nitriding's enclave-internal HTTP API.
@rillian: Does the above sound sane to you? If so, I'm going to make my draft PR ready for review.
Your architecture diagram shows the nitriding and application servers responding separately to their respective requests, but I'm confused about how those are routed. Are they on different ports? Does the nitriding server proxy for the main application? Where is TLS terminated? Do they share the certificate?
Below is a list of enclave-internal IPC endpoints that we need.
@rillian: When we discussed this, you were no fan of using HTTP for IPC. I don't consider this a problem (I'd expect most enclave applications to be implemented in Go or Rust, which provide built-in HTTP clients) but I'm open to alternatives. For example, we could do what Tor does and provide a custom, text-based protocol on top of TCP or rely on POSIX signals and/or file system files.
POST /enclave/state
Allows the application to register state (i.e., an arbitrary []byte slice) that's synced to another enclave if horizontal scaling is used.

GET /enclave/state
Allows the application to retrieve previously-set state.

GET /enclave/sync
Instructs nitriding to synchronize the above-mentioned state with a remote enclave.

POST /enclave/key
Allows the application to register a public key (or something equivalent) whose SHA-256 hash is added to the attestation document. This is useful for applications that don't take advantage of nitriding's reverse proxy (and its TLS termination) and instead choose to handle incoming client requests directly, using the given public key to provide a confidential channel. (This should cover @mwittie's use case.)

POST /enclave/ready
Once the application has finished setting up, this instructs nitriding to start forwarding connections.

An HTTP API for internal RPC is fine. Custom text protocols are notoriously difficult to get right. I like to complain about the cost, but for this design it's a reasonable choice.
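For illustration, here is a minimal sketch of how an enclave application could talk to these endpoints. The base URL, port, content types, and payload encodings are assumptions; whatever address nitriding's enclave-internal server ends up listening on would be used instead.

```go
package main

import (
	"bytes"
	"crypto/ed25519"
	"crypto/rand"
	"log"
	"net/http"
)

// baseURL is an assumption for the sake of the example; nitriding's
// enclave-internal server may listen on a different address and port.
const baseURL = "http://127.0.0.1:8080"

func main() {
	// 1. Register a public key whose SHA-256 hash ends up in the
	//    attestation document (POST /enclave/key).
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := http.Post(baseURL+"/enclave/key", "application/octet-stream",
		bytes.NewReader(pub)); err != nil {
		log.Fatal(err)
	}

	// 2. Register application state that nitriding syncs to other
	//    enclaves if horizontal scaling is used (POST /enclave/state).
	state := []byte("opaque application state")
	if _, err := http.Post(baseURL+"/enclave/state", "application/octet-stream",
		bytes.NewReader(state)); err != nil {
		log.Fatal(err)
	}

	// 3. Tell nitriding that setup is done and it can start forwarding
	//    connections (POST /enclave/ready).
	if _, err := http.Post(baseURL+"/enclave/ready", "text/plain", nil); err != nil {
		log.Fatal(err)
	}

	// The application would now start serving client requests.
}
```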
@NullHypothesis it seems that POST /enclave/key does cover our use case. How will the users be able to get an attestation over the hash of the registered public key? Will there still be the external GET /attestation endpoint, which now will include the hash of the key in addition to the fingerprint of the TLS certificate?
@mwittie Yes, the idea is that queries to the public GET /enclave/attestation endpoint will return a document signed by AWS containing the key hash submitted to the private POST /enclave/hash endpoint.
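To make that flow concrete, here is a rough client-side sketch under the assumptions above: fetch the attestation document, extract the embedded key hash, and compare it to the SHA-256 hash of the key the service advertises. The extractKeyHash helper is a stub; a real client has to verify the document's signature against the AWS Nitro root certificate and parse its CBOR/COSE structure, which is out of scope here.

```go
package client

import (
	"crypto/sha256"
	"crypto/subtle"
	"errors"
	"fmt"
	"io"
	"net/http"
)

// extractKeyHash is a placeholder. A real client must verify the
// attestation document's signature and parse its CBOR/COSE structure
// to obtain the embedded key hash.
func extractKeyHash(attestationDoc []byte) ([32]byte, error) {
	return [32]byte{}, errors.New("not implemented in this sketch")
}

// verifyKey checks that the key advertised by the service matches the
// hash that the enclave committed to in its attestation document.
func verifyKey(enclaveURL string, advertisedKey []byte) error {
	resp, err := http.Get(enclaveURL + "/enclave/attestation")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	doc, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	attestedHash, err := extractKeyHash(doc)
	if err != nil {
		return err
	}
	gotHash := sha256.Sum256(advertisedKey)
	if subtle.ConstantTimeCompare(attestedHash[:], gotHash[:]) != 1 {
		return fmt.Errorf("key hash mismatch")
	}
	return nil
}
```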
The only way that we currently provide for an enclave to talk to the outside world is a SOCKS proxy. If an enclave application doesn't already support SOCKS, it can be a pain to add. Instead, we could expose a TUN device that automatically forwards all IP packets to the EC2 host. That's more flexible but also more complex and error-prone.
Let's investigate how much work that would be, and play with a PoC. Having a TUN device would allow us to run more complex services like a Tor relay inside an enclave. Tor users could then do remote attestation and convince themselves that we are running an unmodified version of the Tor protocol. This matters because Tor relays can actively tag network flows for end-to-end correlation attacks.
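For what it's worth, here is a very rough sketch of what the in-enclave side of such a TUN-based design could look like, using the third-party github.com/songgao/water package to open the device and length-prefixed framing to carry individual IP packets over a stream connection to the host. The dialHost helper is hypothetical (a real enclave would use a VSOCK connection to a proxy on the EC2 host), and this is not how the gvisor-tap-vsock approach discussed above works.

```go
package main

import (
	"encoding/binary"
	"io"
	"log"
	"net"

	"github.com/songgao/water"
)

// dialHost is hypothetical: in a real enclave this would be a VSOCK
// connection to a proxy on the EC2 host rather than plain TCP.
func dialHost() (net.Conn, error) {
	return net.Dial("tcp", "127.0.0.1:1080")
}

func main() {
	// Open a TUN device inside the enclave; applications route their
	// traffic through it like on a normal host.
	tun, err := water.New(water.Config{DeviceType: water.TUN})
	if err != nil {
		log.Fatal(err)
	}
	conn, err := dialHost()
	if err != nil {
		log.Fatal(err)
	}

	// Outbound: read one IP packet at a time from the TUN device and
	// send it to the host with a 2-byte length prefix.
	go func() {
		buf := make([]byte, 65535)
		for {
			n, err := tun.Read(buf)
			if err != nil {
				log.Fatal(err)
			}
			var prefix [2]byte
			binary.BigEndian.PutUint16(prefix[:], uint16(n))
			if _, err := conn.Write(append(prefix[:], buf[:n]...)); err != nil {
				log.Fatal(err)
			}
		}
	}()

	// Inbound: read length-prefixed packets from the host and write
	// them to the TUN device.
	for {
		var prefix [2]byte
		if _, err := io.ReadFull(conn, prefix[:]); err != nil {
			log.Fatal(err)
		}
		pkt := make([]byte, binary.BigEndian.Uint16(prefix[:]))
		if _, err := io.ReadFull(conn, pkt); err != nil {
			log.Fatal(err)
		}
		if _, err := tun.Write(pkt); err != nil {
			log.Fatal(err)
		}
	}
}
```

The length prefix is needed because TUN reads yield one IP packet per read, while a stream connection has no packet boundaries; also note that opening a TUN device requires CAP_NET_ADMIN inside the enclave.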