tokio-rs / tokio-service

The core `Service` trait in Tokio and support
Apache License 2.0

Provide a way to detect that a service will never become ready again (aka, closed) #4

Open carllerche opened 8 years ago

carllerche commented 8 years ago

Currently, there is no way for a service implementation to signal that it is "closed". For a client, this means that the remote has terminated the socket. For a server, this means that the service doesn't wish to process any further connections on the socket (e.g., because it is under load or shutting down).

Finagle provides this functionality by having a `status` fn that returns either `Ready` / `Busy` / `Closed`.

One option could be to add a `poll_status` fn, but I am not sure where that would leave `poll_ready`.
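
A rough sketch of what that could look like (a hypothetical API for illustration, not anything in tokio-service today): a Finagle-style status enum, with `poll_status` subsuming `poll_ready`:

```rust
extern crate futures;

use futures::Future;

/// Finagle-style status: `Closed` is terminal, unlike a "not ready"
/// result from `poll_ready`, which only means "try again later".
pub enum Status {
    Ready,
    Busy,
    Closed,
}

/// Hypothetical variant of the `Service` trait, for illustration only.
pub trait Service {
    type Request;
    type Response;
    type Error;
    type Future: Future<Item = Self::Response, Error = Self::Error>;

    /// Would replace (or sit alongside) `poll_ready`.
    fn poll_status(&self) -> Status;

    fn call(&self, req: Self::Request) -> Self::Future;
}
```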

seanmonstar commented 7 years ago

The ability to detect dead sockets is pretty important for connection pooling, which would be needed to release hyper 0.10 using Tokio. It seems like this issue could handle it: currently we receive a `tokio_proto::pipeline::Client`, which implements `Service`, but there doesn't seem to be a way to detect whether the socket is dead (and the implementation would likely need to listen for HUP, and maybe RDHUP, to do that).
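
For concreteness, here is a purely hypothetical sketch of the checkout a pool would like to perform, assuming some `is_closed` signal existed on pooled connections (it doesn't today; that is exactly what this issue is asking for):

```rust
/// Hypothetical stand-in for the missing "closed" signal on a pooled
/// connection (e.g. a `tokio_proto::pipeline::Client`).
trait Closed {
    fn is_closed(&self) -> bool;
}

/// Pop idle connections, discarding any whose socket has died
/// (e.g. after a HUP event) instead of handing them out.
fn checkout<C: Closed>(idle: &mut Vec<C>) -> Option<C> {
    while let Some(conn) = idle.pop() {
        if conn.is_closed() {
            continue; // dead socket: drop it and keep looking
        }
        return Some(conn);
    }
    None // caller falls back to connecting a new socket
}
```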

cc @alexcrichton @jwilm

tailhook commented 7 years ago

Well, in my opinion, a "service" is internally a `Sink<(Request, Sender<Response, _>)>`. A disconnected `Sink` returns an error from `poll_complete`. Then there is a connection-pool layer that removes errored `Sink`s from the pool but provides the same `Sink` interface. And then there might be a simple `Client` layer that has no notion of "disconnect" and is always available from the user's point of view (because it reconnects as needed).
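
In futures 0.1 terms, that inner interface could look like this sketch (the `Request`/`Response` types and the helper function are just for illustration):

```rust
extern crate futures;

use futures::{AsyncSink, Sink};
use futures::sync::oneshot;

// Hypothetical message types for the sketch.
pub struct Request;
pub struct Response;

/// Push one request into the transport sink. `Ok(Some(rx))` means the
/// request was accepted; `Ok(None)` means the sink is exerting
/// backpressure; `Err(_)` means the connection is gone ("closed"),
/// which is the signal a pool layer would use to evict this sink.
pub fn send_request<S>(
    sink: &mut S,
    req: Request,
) -> Result<Option<oneshot::Receiver<Response>>, S::SinkError>
where
    S: Sink<SinkItem = (Request, oneshot::Sender<Response>)>,
{
    let (tx, rx) = oneshot::channel();
    match sink.start_send((req, tx))? {
        AsyncSink::Ready => {
            sink.poll_complete()?; // flush; an Err here also means "closed"
            Ok(Some(rx))
        }
        AsyncSink::NotReady(_) => Ok(None), // busy, try again later
    }
}
```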

alexcrichton commented 7 years ago

@seanmonstar I wonder, could this be modeled as the service resolving to an erroneous value indicating "this'll never work", and then hooking into that to switch to a different connection?

@tailhook yeah that makes sense to me, we just gotta figure out where to get that error out of the `Service` trait I think (assuming hyper's working at that layer rather than the stream/sink layer).

seanmonstar commented 7 years ago

I have been using `proto::pipeline::Client`, which wraps the socket in something that implements `Service`. The `hyper::Client` then has a pool of these `proto::pipeline::Client`s that it can try to use before connecting a new socket.

I definitely could add code that just tries to use a connection no matter what, and then catch an error if the socket has been closed. However, since the socket is already registered in the event loop, it's a better experience to notice a HUP event and allow that socket to close itself. How that information is bubbled up out of a `proto::pipeline::Client` is what I'm interested in. I haven't read the revamp of tokio-proto, so I don't know if the structure is the same.

carllerche commented 7 years ago

@seanmonstar the core is still the same. This issue is still relevant.

@alexcrichton I'm not following your suggestion.

@tailhook While `Service` is similar to `Sink`, the fact that `call` returns a `Future` of the response makes it somewhat different. As such, `Service` will probably not have `poll_complete()`, at least not in the same way that `Sink` does.
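
For reference, the trait as it stands in tokio-service is roughly:

```rust
extern crate futures;

use futures::Future;

pub trait Service {
    type Request;
    type Response;
    type Error;
    type Future: Future<Item = Self::Response, Error = Self::Error>;

    /// Process the request and return the response asynchronously.
    fn call(&self, req: Self::Request) -> Self::Future;
}
```

Every `call` hands back its own response future, so there is no single flush point like `Sink::poll_complete` where a connection-level failure would naturally surface.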

tailhook commented 7 years ago

@carllerche I understand the current situation. At a glance at https://github.com/tokio-rs/tokio-proto/issues/75, the protocol implementation is a `Sink`. So what I'm trying to say is: we don't need to hide that `Sink` inside; we can use it as the public interface. All the complex cases like connection pooling and backpressure can then be solved using `Sink`, and we only wrap it in a simplified `Service` implementation (which might just be an extension trait on `Sink`) at the highest layer, where it can be perceived as a service with unlimited resources.
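
A minimal sketch of that extension-trait layering, with hypothetical names and message types (and ignoring backpressure for brevity):

```rust
extern crate futures;

use futures::Sink;
use futures::sync::oneshot;

// Hypothetical message types for the sketch.
pub struct Request;
pub struct Response;

/// Hypothetical extension trait: any transport that is a `Sink` of
/// (request, response-slot) pairs can be viewed as a simple service.
pub trait SinkServiceExt: Sink<SinkItem = (Request, oneshot::Sender<Response>)> {
    /// Enqueue a request and return a future for its response. If the
    /// connection is gone, the `Sink` errors and drops the sender, so
    /// the returned receiver resolves to `Err(Canceled)`.
    fn call(&mut self, req: Request) -> oneshot::Receiver<Response> {
        let (tx, rx) = oneshot::channel();
        // The sketch ignores backpressure; a real impl would handle
        // `AsyncSink::NotReady` returned from `start_send`.
        let _ = self.start_send((req, tx));
        let _ = self.poll_complete();
        rx
    }
}

impl<S> SinkServiceExt for S where S: Sink<SinkItem = (Request, oneshot::Sender<Response>)> {}
```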

alexcrichton commented 7 years ago

@carllerche oh I was just thinking that the `Service` always returns a future for every request. In that sense, if the service was "closed" it could communicate this by returning immediately-resolved futures with an error indicating that the service is closed. Janky, but perhaps gets the job done.
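
A sketch of that idea (the `Checked` wrapper and its `closed` flag are hypothetical; something like the connection task would set the flag on HUP):

```rust
extern crate futures;
extern crate tokio_service;

use std::cell::Cell;
use std::io;

use futures::future::{self, Either, FutureResult};
use tokio_service::Service;

/// Hypothetical wrapper: once `closed` is set, every call resolves
/// immediately to an error instead of touching the dead connection.
pub struct Checked<S> {
    inner: S,
    closed: Cell<bool>,
}

impl<S> Service for Checked<S>
where
    S: Service<Error = io::Error>,
{
    type Request = S::Request;
    type Response = S::Response;
    type Error = io::Error;
    type Future = Either<S::Future, FutureResult<S::Response, io::Error>>;

    fn call(&self, req: Self::Request) -> Self::Future {
        if self.closed.get() {
            // "Janky": the closed signal rides on the per-request
            // future; a pool would catch this error and evict us.
            Either::B(future::err(io::Error::new(
                io::ErrorKind::BrokenPipe,
                "service closed",
            )))
        } else {
            Either::A(self.inner.call(req))
        }
    }
}
```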

carllerche commented 7 years ago

One option would be to have `Service: Future<(), io::Error>`, such that the service future "completes" when the service is closed. This would also include an error slot that could be used by `ClientProxy` in tokio-proto to return connection-level errors to the user.
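
Sketched out (again hypothetical, not an actual API):

```rust
extern crate futures;

use std::io;
use futures::Future;

/// Hypothetical shape: the service *is* a future that completes when
/// the service is closed. `Ok(())` would be a clean close; `Err(e)` is
/// the connection-level error a proxy could hand back to the user.
pub trait Service: Future<Item = (), Error = io::Error> {
    type Request;
    type Response;
    type ResponseFuture: Future<Item = Self::Response, Error = io::Error>;

    fn call(&self, req: Self::Request) -> Self::ResponseFuture;
}
```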