hyperium / tonic

A native gRPC client & server implementation with async/await support.
https://docs.rs/tonic

Lameducking period for graceful shutdown / serve_with_shutdown #1940

Open vandry opened 2 months ago

vandry commented 2 months ago

Feature Request

Crates

tonic

Motivation

In tonic right now, serve_with_shutdown does not implement lameducking the way I am used to seeing it. What I was expecting is that on SIGTERM, idle keepalive connections would be disconnected and readiness health checks would start returning negative, but new connections would continue to be accepted (albeit with no keepalive) during a grace period, giving clients time to notice that we are lameducking and select different backends. Then, after that delay, we would close the listening socket.

What actually happens is that we close the listening socket (or at least we stop calling accept() on it) immediately and then drain requests from existing connections.

Is that okay? I don't know; it depends on whether clients can be counted on to promptly and transparently try a different backend when they get either a refused or reset connection (depending on the timing) during the short interval after we have started shutting down but before they have had a chance to update their backend list to exclude us. I feel like most gRPC clients might be all right there, but there are stories of 502s from nginx floating about...

https://github.com/vandry/td-archive-server/commit/7e202e586ed0d3f19e576304ba1bd91ebc760edb
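For concreteness, here is a minimal sketch (assuming the tonic-health crate, a Unix target with tokio's signal feature, and illustrative values for GRACE_PERIOD and the listen address) of how the sequence above can be approximated on top of today's serve_with_shutdown: on SIGTERM the overall ("") health status flips to NOT_SERVING while the listener keeps accepting, and only after the grace period does the shutdown future resolve. It does not cover dropping idle keepalive connections, which is the part that would need support inside tonic itself.

use std::time::Duration;
use tokio::signal::unix::{signal, SignalKind};

const GRACE_PERIOD: Duration = Duration::from_secs(15); // illustrative lameduck delay

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The standard gRPC health service; "" is the overall server status.
    let (mut health_reporter, health_service) = tonic_health::server::health_reporter();

    let shutdown = async move {
        let mut sigterm = signal(SignalKind::terminate()).expect("install SIGTERM handler");
        sigterm.recv().await;
        // Lameduck: health checks now answer NOT_SERVING, but the listener
        // keeps accepting so late-arriving clients are not refused.
        health_reporter
            .set_service_status("", tonic_health::ServingStatus::NotServing)
            .await;
        tokio::time::sleep(GRACE_PERIOD).await;
        // Resolving this future makes serve_with_shutdown close the
        // listener and drain in-flight requests.
    };

    tonic::transport::Server::builder()
        .add_service(health_service)
        .serve_with_shutdown("[::1]:50051".parse()?, shutdown)
        .await?;
    Ok(())
}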

Proposal

First, I would like to solicit opinions about whether the behaviour I am looking for is needed or if the status quo is good enough. We definitely implement the lameducking delay as I describe it at very large scale at $DAYJOB but there might be other considerations here.

If the feature is deemed desirable then I propose:

Alternatives

See above; it is possible that the status quo is fine.

Dietr1ch commented 3 weeks ago

What actually happens is that we close the listening socket (or at least we stop calling accept() on it) immediately and then drain requests from existing connections. Is that okay?

It's not reasonable behaviour, even though, as you mentioned, it works well enough for stateless services behind a load balancer whose clients know how to retry.

Battle-tested production implementations perform a graceful shutdown. The Tokio docs cover graceful shutdown, and it's not hard to find people trying to do the same in other stacks (C++, Java, Go, from a quick search), and in Rust too, #1820.

A reference C++ implementation defines Shutdown with a deadline argument so the server can degrade into lame-duck mode before shutting down:

class ServerInterface : public internal::CallHook {
 public:
  ~ServerInterface() override {}

  // Stop taking new calls, let in-flight calls finish, and forcefully
  // cancel whatever is still pending once `deadline` expires.
  template <class T>
  void Shutdown(const T& deadline);
  // ...
};

Docs for Shutdown(const T& deadline).

Granted, they offer a Shutdown() overload too; I don't know when you'd prefer it for anything other than convenience/sloppiness.
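For comparison, a rough sketch of how Shutdown(deadline)-like behaviour might be approximated with the existing API, rather than anything tonic exposes today: signal serve_with_shutdown to stop accepting, then bound how long we wait for in-flight requests to drain. DRAIN_DEADLINE, the listen address, and the use of the health service as a stand-in for a real application service are all illustrative.

use std::time::Duration;
use tokio::sync::oneshot;

const DRAIN_DEADLINE: Duration = Duration::from_secs(30); // illustrative

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The health service only stands in for a real application service here.
    let (_health_reporter, health_service) = tonic_health::server::health_reporter();
    let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();

    let addr: std::net::SocketAddr = "[::1]:50051".parse()?;
    let server = tokio::spawn(
        tonic::transport::Server::builder()
            .add_service(health_service)
            .serve_with_shutdown(addr, async {
                let _ = shutdown_rx.await; // stop accepting once signalled
            }),
    );

    // ... later, once lameducking (if any) is over:
    let _ = shutdown_tx.send(());

    // Let in-flight requests drain, but no longer than DRAIN_DEADLINE,
    // loosely mirroring Shutdown(deadline) forcefully terminating
    // whatever is still pending once the deadline passes.
    match tokio::time::timeout(DRAIN_DEADLINE, server).await {
        Ok(join_result) => join_result??,
        Err(_) => eprintln!("drain deadline exceeded; exiting with requests still in flight"),
    }
    Ok(())
}

Giving up on the join handle and letting main return is obviously cruder than what grpc++ does (cancelling individual pending calls), which is part of the argument for first-class support.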


I guess the more interesting question is how to implement this. Tokio's article already shows that the Service not only needs to stop serving new requests and finish the outstanding ones, but may also need to perform some application-specific steps. That is fine, but it suggests the interface for announcing and executing shutdown might need some thought.

Similarly, we may want to temporarily enter lame-duck mode as a way to ease operations on services downstream.

Dietr1ch commented 3 weeks ago

Well, it's also sort of blocked by the instability of the underlying ecosystem:

https://github.com/hyperium/tonic/issues/1820#issuecomment-2250943423

I guess some issue de-duping is needed to keep things easy to find.