gofiber / fiber

āš”ļø Express inspired web framework written in Go
https://gofiber.io
MIT License

šŸ“ [Proposal]: shutdown procedure #2976

Open · the-hotmann opened 7 months ago

the-hotmann commented 7 months ago

Feature Proposal Description

As discussed with @ReneWerner87 on Discord, I would like to propose this feature request to improve Fiber's shutdown procedure.

I noticed that when I stop my container, open connections are cut off. I have also seen quite a few workarounds, but since there is a general need for proper shutdown handling, I think it should be implemented by default.

There are two general signals that should at least be handled:

SIGKILL

  1. Log "Fiber forces shutdown"
  2. Fiber kills itself immediately after logging the last message

SIGTERM

  1. Log "Shutdown received - shutting gracefully down"
  2. Fiber shall not accept any new connections (probably requires all routes to be removed/disabled). It could even return a custom "shutting down" error with a "503" status and an additional "Retry-After" header when a disabled route is called, so incoming connections are handled as smoothly as possible (see the middleware sketch after this list).
  3. Fiber checks for open connections.
     3.1. Fiber waits "9s" (the default graceful shutdown timeout) for existing connections to finish on their own.
     3.1.1. If the "9s" pass, it cuts all remaining connections and shuts down the server.
  4. Check whether any connections were cut:
     4.1. If no connection was cut, log "Server shutdown complete (Graceful)"
     4.2. If one or more connections were cut, log "Server shutdown complete (Grace period exceeded)"
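
For illustration, a rough sketch of what step 2 could look like as plain middleware with today's v2 API (the shuttingDown flag and the Retry-After value are made up for the example, not an existing Fiber feature):

    package main

    import (
        "sync/atomic"

        "github.com/gofiber/fiber/v2"
    )

    // shuttingDown would be flipped by the shutdown handler once SIGTERM arrives.
    var shuttingDown atomic.Bool

    func main() {
        app := fiber.New()

        // During shutdown, answer new requests with 503 + Retry-After
        // (RFC 7231, Section 7.1.3) instead of silently dropping them.
        app.Use(func(c *fiber.Ctx) error {
            if shuttingDown.Load() {
                c.Set("Retry-After", "30") // illustrative value, in seconds
                return c.Status(fiber.StatusServiceUnavailable).SendString("shutting down")
            }
            return c.Next()
        })

        app.Get("/", func(c *fiber.Ctx) error { return c.SendString("ok") })
        _ = app.Listen(":3000")
    }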

The graceful shutdown time (default: 9s) shall be configurable via fiber.Config like this:

    fiberConfig := fiber.Config{
        ShutdownGraceperiod: 15 * time.Second,
    }
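
For reference, the closest approximation I know of with the current v2 API (ShutdownGraceperiod above is the proposed field and does not exist yet; app.ShutdownWithTimeout does) would be something like:

    package main

    import (
        "log"
        "os"
        "os/signal"
        "syscall"
        "time"

        "github.com/gofiber/fiber/v2"
    )

    func main() {
        app := fiber.New()
        app.Get("/", func(c *fiber.Ctx) error { return c.SendString("ok") })

        go func() {
            if err := app.Listen(":3000"); err != nil {
                log.Fatal(err)
            }
        }()

        // SIGKILL cannot be trapped; SIGTERM is what `docker stop` sends first.
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGTERM, os.Interrupt)
        <-sig

        log.Println("Shutdown received - shutting down gracefully")
        // Stop accepting new connections and give in-flight requests
        // up to 9s (the proposed default grace period) to finish.
        if err := app.ShutdownWithTimeout(9 * time.Second); err != nil {
            // e.g. context.DeadlineExceeded: some connections were cut.
            log.Println("Server shutdown complete (Grace period exceeded)")
        } else {
            log.Println("Server shutdown complete (Graceful)")
        }
    }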

Alignment with Express API

This does not affect anything related to the Express API.

HTTP RFC Standards Compliance

This will be compliant with RFC 7231, specifically the Retry-After header (Section 7.1.3).

API Stability

This would not affect the stability of the API, as it solely concerns the shutdown path.

Feature Examples

I do not have any code snippets etc. for this.


luk3skyw4lker commented 4 months ago

I'm going to have a try at this if no one else is working on it.

luk3skyw4lker commented 4 months ago

@the-hotmann I think what you want can already be achieved with the GracefulContext configuration. You can configure a timeout for the graceful shutdown, or just pass a plain context.Background() or context.TODO() to it to have the server wait indefinitely until all active connections are done.
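
A minimal sketch of that setup, assuming the v3 ListenConfig where GracefulContext lives (cancelling the context triggers the graceful shutdown):

    package main

    import (
        "context"
        "os/signal"
        "syscall"

        "github.com/gofiber/fiber/v3"
    )

    func main() {
        app := fiber.New()
        app.Get("/", func(c fiber.Ctx) error { return c.SendString("ok") })

        // When this context is cancelled (here: on SIGTERM), Listen shuts
        // the server down gracefully; derive it with context.WithTimeout
        // instead if the shutdown trigger should be time-based.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        _ = app.Listen(":3000", fiber.ListenConfig{GracefulContext: ctx})
    }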

SIGKILL cannot be trapped in Go, so it's not possible to log a custom message when SIGKILL is received.

Reference on the SIGKILL: https://github.com/golang/go/issues/9463

Maybe what we could do is improve the docs to include a better explanation of the GracefulContext configuration and add the logs only. What do you think @ReneWerner87?

newacorn commented 2 months ago

Fiber is built on top of fasthttp, and fasthttp does not maintain a list of active connections. Therefore, the timeout-based forced closure feature you're asking for cannot be implemented directly. However, you can achieve this by setting read and write timeouts. If many users need fasthttp to maintain a list of active connections, we can consider planning its implementation.
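
A minimal sketch of that workaround (ReadTimeout, WriteTimeout, and IdleTimeout are existing fiber.Config fields that are forwarded to fasthttp; the values are arbitrary):

    package main

    import (
        "time"

        "github.com/gofiber/fiber/v2"
    )

    func main() {
        // Without a connection list, lingering connections cannot be
        // force-closed on shutdown; per-connection deadlines instead
        // bound how long any single request can keep the server busy.
        app := fiber.New(fiber.Config{
            ReadTimeout:  5 * time.Second,  // max time to read the full request
            WriteTimeout: 10 * time.Second, // max time to write the response
            IdleTimeout:  30 * time.Second, // max keep-alive idle time
        })

        app.Get("/", func(c *fiber.Ctx) error { return c.SendString("ok") })
        _ = app.Listen(":3000")
    }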

the-hotmann commented 2 months ago

> If many users need fasthttp to maintain a list of active connections, we can consider planning its implementation.

I believe this would generally benefit everything built on fasthttp. A graceful shutdown is essential for most professional applications, especially in a clustered environment. A clustered setup is typically used to achieve a higher SLA; however, if shutting down a container, application, or instance for an upgrade results in dropped connections, the load balancer has to manage this to prevent data loss, which always makes the setup more complex.

In my opinion, this feature should be implemented and enabled by default (with a default grace period timeout of 8-9 seconds). But let's wait and see if others share the same perspective.

The grace period should be customizable and also have the option to be disabled (set to -1, which would mean waiting forever, until the Docker daemon kills it hard).

-1 => wait forever
0 => don't wait at all (like now)
x (any other number) => wait for x seconds
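
A sketch of how that convention could map onto the existing v2 shutdown calls (the shutdown helper and its -1/0/x parameter are the proposal here, not an existing Fiber option):

    package main

    import (
        "context"
        "time"

        "github.com/gofiber/fiber/v2"
    )

    // shutdown maps the proposed -1/0/x grace-period convention onto the
    // existing v2 API. Since fasthttp cannot force-close lingering
    // connections (see above), 0 and x only bound how long we wait.
    func shutdown(app *fiber.App, gracePeriodSeconds int) error {
        switch {
        case gracePeriodSeconds < 0:
            // -1: wait indefinitely for active connections to finish.
            return app.ShutdownWithContext(context.Background())
        case gracePeriodSeconds == 0:
            // 0: close the listeners and return immediately, like now.
            return app.ShutdownWithTimeout(0)
        default:
            // x: wait up to x seconds for connections to drain.
            return app.ShutdownWithTimeout(time.Duration(gracePeriodSeconds) * time.Second)
        }
    }

    func main() {
        app := fiber.New()
        go func() { _ = app.Listen(":3000") }()
        _ = shutdown(app, 8) // 8s: one second inside Docker's default 10s stop timeout
    }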

Keep in mind that when you stop a container using Docker (docker stop container_name), there is typically a 10-second grace period. If fasthttp is set to -1, Docker will forcefully kill it after 10 seconds, which isn't the intended outcome here. Setting it to 8 seconds would allow fasthttp to shut down 1 second before Docker's hard kill, at least allowing it to save what it can.

Additionally, Docker's grace period can be adjusted if needed (e.g. docker stop -t 30 container_name). However, the grace period for fasthttp should always be set at least 1-2 seconds shorter than Docker's, ensuring the application shuts down properly.