FlorianUekermann / rustls-acme

Apache License 2.0

runtime independent refresh loop spawning #17

Closed FlorianUekermann closed 2 years ago

FlorianUekermann commented 2 years ago

I'm using custom wakers in async-ws to achieve runtime independence. Maybe the refresh loop should use tasks awaiting the stream to run using the same mechanism.

FlorianUekermann commented 2 years ago

@joshtriplett, @casey, @soenkehahn I started work on a runtime independent version and could use some user feedback. I'm not concerned about breaking backward compatibility, because I want to do that for a few other reasons anyway. There are basically 3 options:

  1. Implement a TlsListener wrapping TcpListeners (any Stream with Item = Result<impl AsyncRead + AsyncWrite + Unpin>, which corresponds to tokio::net::TcpListener). This option has some nice properties, because it allows using the task polling the stream for renewals. The only downside I see is that it requires such a stream and transfers ownership of the TcpStream to the TlsStream.

  2. Hijack the task polling the acceptor. This removes the need for anything but the acceptor and provides a really clean, minimal interface (no need to assume the user has a stream of streams like in 1). The refresh loop is just hidden in the accept future. A downside is that refreshes would wake up whatever accept futures are waiting for completion and delay their progress at least a bit while renewal work runs. Also, the first retrieval and subsequent renewals of the cert would be delayed until a connection comes in.

  3. Spawn a short-lived thread from the accept future for renewals. Like 2, this delays retrieval and renewal of certs until accept is called, but it avoids delaying progress in the future. The downside is obviously an extra thread, which feels a bit weird.
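For illustration, the mechanism behind option 1 — letting the task that polls the connection stream also drive renewals — can be sketched with std-only types. Everything here is made up for the sketch (the Stream trait is a minimal stand-in for futures::Stream, and the TlsListener fields are hypothetical), not the actual rustls-acme API:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// Minimal stand-in for `futures::Stream` (illustration only).
trait Stream {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}

/// Hypothetical wrapper for option 1: yields connections from the inner
/// stream while driving a renewal future on the same task.
struct TlsListener<S, F> {
    incoming: S,
    renewal: F,
}

impl<S, F> Stream for TlsListener<S, F>
where
    S: Stream + Unpin,
    F: Future<Output = ()> + Unpin,
{
    type Item = S::Item;

    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Drive certificate renewal opportunistically on every poll; in
        // practice this future never completes, so a Ready result is ignored.
        let _ = Pin::new(&mut self.renewal).poll(cx);
        // Then poll the wrapped listener stream for the next connection.
        Pin::new(&mut self.incoming).poll_next(cx)
    }
}
```

The point of the sketch is that no spawn is needed: whichever task polls the listener transparently drives renewals too.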

I'm leaning towards 1, because I have a hard time imagining real-world situations that don't involve a TcpListener (or an equivalent stream), or where giving up ownership of said stream is problematic.

Option 2 has some complexities which I wouldn't have been able to deal with when I first implemented the crate, but which I have since explored in a different crate.

In case 1, the current low-level interface (or something very similar) would still be available, so anyone who doesn't like transferring ownership of the TcpStream-producing stream can come up with their own solution to poll the future that drives renewals.

Thoughts?

joshtriplett commented 2 years ago

@FlorianUekermann Preliminary thoughts:

I think option 1 would put more constraints on the interface, in a way that wouldn't straightforwardly fit with existing users who aren't building around Stream.

Option 2 sounds convenient to use, but I agree that it'd have problems; it'd be difficult to avoid introducing non-trivial latency every time a renewal arose.

I honestly think the most convenient runtime-independent interface is "here's a future, spawn this on your executor of choice". Have one method that's always available that returns that future you have to spawn, and then a method controlled by a feature flag that automatically spawns that future on async-std. And one day we'll have a runtime-independent interface for "spawn a future on the application's executor of choice".
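A rough std-only sketch of that "here's a future, spawn it yourself" shape. All names here are hypothetical, and the feature name is invented for illustration; this is not the actual crate API:

```rust
use std::future::Future;

/// Hypothetical handle owning the ACME state.
struct AcmeState;

impl AcmeState {
    /// Always available: hand the caller the driver future, to be spawned
    /// on whatever executor they use.
    fn driver(self) -> impl Future<Output = ()> {
        async move {
            // The certificate-renewal loop would run here.
            let _ = self;
        }
    }

    /// Optionally, behind a feature flag, spawn the driver automatically
    /// on a bundled runtime (async-std in the proposal above).
    #[cfg(feature = "spawn-async-std")]
    fn spawn(self) {
        async_std::task::spawn(self.driver());
    }
}
```

The core method has no runtime dependency at all; only the convenience spawn pulls one in, and only when the feature is enabled.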

FlorianUekermann commented 2 years ago

Thanks for the feedback. I tend to agree with your points. The "here is your future" method needs to be available. I feel like offering option 1 would get the majority of people 99% of the way, so I'll probably replace the current bind_listen_serve function with that approach, which makes the library almost entirely runtime independent. The remaining runtime dependency will be smol, used for the file IO in the optional directory cache implementation, but that feels pretty OK for now.

joshtriplett commented 2 years ago

As long as the built-in directory cache is optional (for those implementing a different cache), that seems fine.

FlorianUekermann commented 2 years ago

Getting there... The 0.3-wip branch is a preview of the new API. There are still a few details (types, names, testing, a new simple API variant, docs) that I need to sort out, but the bulk of the work is done. This should already be pretty representative: server_low_level.rs

True runtime independence isn't realistic (depending on what you would call a runtime), because we need something for the renewal timers and outgoing ACME requests, even if the directory cache were optional. But async-std was replaced by async-io/smol, which is pretty nice. This could be cut down to just async-io if the directory cache is made optional. I'll have to check how much of a difference that makes in terms of size.

Besides the new config/builder pattern and the cache feature (see #9), the main interface change is that there is an infinite stream called State now, which needs to be polled to keep things running. The stream emits any errors and other interesting events that occur, so the user can decide to react to those or just keep polling. The rustls resolver or async-rustls acceptor can be constructed directly using State::resolver(&self) or State::acceptor(&self), which makes the low-level interface significantly less confusing imo.
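A toy picture of such an infinite event stream, to illustrate the polling contract. The types and events below are invented for illustration, and the resolver/acceptor constructors mentioned above are omitted:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

/// Illustrative event type; the real stream also surfaces ACME errors
/// and order progress.
#[derive(Debug, PartialEq)]
enum Event {
    CertCached,
    OrderError(String),
}

/// Stand-in for the infinite `State` stream: polling it is what drives
/// renewals; the items are informational and may be logged or ignored.
struct State {
    pending_events: Vec<Event>,
}

impl State {
    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Event>> {
        // Background ACME work would be driven here on every poll.
        match self.get_mut().pending_events.pop() {
            Some(ev) => Poll::Ready(Some(ev)),
            None => Poll::Pending, // infinite stream: it never yields None
        }
    }
}
```

The key property is that the stream never terminates: the user keeps polling it for the lifetime of the server, reacting to events or discarding them.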

FlorianUekermann commented 2 years ago

Added the replacement for the old simple API in the 0.3-wip branch. It is runtime agnostic because it no longer calls spawn under the hood or starts its own TcpListener. I think this is probably the API to use in the vast majority of cases.

let config: Config = Config::new(vec!["example.com"]);
let tcp_listener = TcpListener::bind(format!("[::]:{}", args.port)).await.unwrap();
let mut tls_incoming = config.incoming(tcp_listener.incoming());
while let Some(tls) = tls_incoming.next().await {
    // ...
}

The server_simple.rs example has been updated.

I'm getting pretty close to finalizing the v0.3.0 API. Besides documentation, the current state seems pretty complete.

FlorianUekermann commented 2 years ago

0.3.0-beta.1 is published and resolves this issue. Use #18 to continue the discussion.