gofiber / fiber

⚡️ Express inspired web framework written in Go
https://gofiber.io

🚀 [Feature]: Load shedding middleware? #2341

Open caner-cetin opened 1 year ago

caner-cetin commented 1 year ago

Feature Description

I am aware that I can wrap https://github.com/asecurityteam/loadshed/blob/master/wrappers/middleware/middleware.go with Fiber's Adaptor middleware, but when wrapping with Adaptor I cannot call ctx.Next() or use any of the context's private functions.

So it would be nice to have a load-shedding middleware: put it at the top of the router order, and when CPU usage is higher than 90%, return 503 to any incoming request.

Even better: put every request that arrives while CPU usage is above 90% into a queue and execute them in order.

I could do that if I wasn't too dumb for pull requests.

Additional Context (optional)

Why is your code hitting 100% CPU anyway?

Well, Go is widely used for scraping, and scraping tools are generally CPU hungry if you aren't scraping a statically loaded website.

Niche request, but yeah, I would be very happy if anyone picked this up.

Code Snippet (optional)

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "time"

    "github.com/asecurityteam/loadshed"
    loadshedmiddleware "github.com/asecurityteam/loadshed/wrappers/middleware"
    "github.com/gofiber/adaptor/v2"
    "github.com/gofiber/fiber/v2"
)

const (
    lowerThreshold  = 0.90        // start of the CPU range that triggers shedding
    upperThreshold  = 0.95        // CPU usage above this always sheds
    pollingInterval = time.Second // how often CPU usage is sampled
    windowSize      = 10          // number of samples in the rolling window
)

// current setup of mine
func main() {
    app := fiber.New()

    // An example to describe the feature: the loadshed net/http middleware is
    // wrapped with Fiber's adaptor so shed requests receive a 503 callback.
    app.Post("/get", adaptor.HTTPMiddleware(loadshedmiddleware.New(
        loadshed.New(loadshed.CPU(lowerThreshold, upperThreshold, pollingInterval, windowSize)),
        loadshedmiddleware.Callback(
            http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.WriteHeader(http.StatusServiceUnavailable)
                jsonResponse := struct {
                    Message string `json:"message"`
                }{
                    Message: "Server is overloaded, please try again later.",
                }
                indented, err := json.MarshalIndent(jsonResponse, "", "  ")
                if err != nil {
                    w.Write([]byte("cpu usage too high"))
                    return
                }
                w.Write(indented)
            }),
        ),
    )))

    log.Fatal(app.Listen("0.0.0.0:3131"))
}
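For illustration only, a hand-rolled version of the "return 503 above a CPU threshold" idea could look roughly like this, assuming github.com/shirou/gopsutil/v3/cpu for the CPU sample (a sketch, not an existing Fiber middleware):

package shedding

import (
    "github.com/gofiber/fiber/v2"
    "github.com/shirou/gopsutil/v3/cpu"
)

// CPUShed rejects requests with 503 while total CPU usage is above threshold
// (e.g. 0.90). Usage is sampled per request here only to keep the sketch
// short; a real middleware would sample in a background goroutine.
func CPUShed(threshold float64) fiber.Handler {
    return func(c *fiber.Ctx) error {
        percents, err := cpu.Percent(0, false) // usage since the previous call, non-blocking
        if err == nil && len(percents) > 0 && percents[0]/100 > threshold {
            return c.SendStatus(fiber.StatusServiceUnavailable)
        }
        return c.Next()
    }
}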


welcome[bot] commented 1 year ago

Thanks for opening your first issue here! 🎉 Be sure to follow the issue template! If you need help or want to chat with us, join us on Discord https://gofiber.io/discord

caner-cetin commented 1 year ago

https://www.npmjs.com/package/toobusy-js This is exactly what I am describing!
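toobusy-js works by measuring event-loop lag; a rough Go analogue of that check, purely as a sketch with made-up package and function names, could sample timer lag like this:

package busycheck

import (
    "sync/atomic"
    "time"
)

var lagMillis atomic.Int64

// Start launches a sampler that records how late each tick fires; the lag is
// a rough proxy for "the process is too busy", similar to toobusy-js.
func Start(interval time.Duration) {
    go func() {
        last := time.Now()
        for range time.Tick(interval) {
            lag := time.Since(last) - interval
            if lag < 0 {
                lag = 0
            }
            lagMillis.Store(lag.Milliseconds())
            last = time.Now()
        }
    }()
}

// TooBusy reports whether the last measured lag exceeds maxLag.
func TooBusy(maxLag time.Duration) bool {
    return time.Duration(lagMillis.Load())*time.Millisecond >= maxLag
}

A middleware would then call TooBusy at the top of each request and return 503 when it reports true.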

mirusky commented 1 year ago

I will take a look at this middleware this week, and if I get the chance I will try to implement it.

caner-cetin commented 1 year ago

Thanks @mirusky. I found this example more useful than the one I gave initially: https://github.com/zeromicro/go-zero/blob/master/rest/handler/sheddinghandler.go

It works exceptionally well, but to use it I would need to pull in the whole library, lol.
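A rough sketch of how that shedder could be wired into Fiber without the rest of go-zero, assuming go-zero's core/load API (NewAdaptiveShedder, WithCpuThreshold, Allow/Pass/Fail); treat the details as an assumption rather than a tested integration:

package shedding

import (
    "github.com/gofiber/fiber/v2"
    "github.com/zeromicro/go-zero/core/load"
)

// Adaptive asks go-zero's adaptive shedder before every request and reports
// the outcome back so the shedder can refine its estimate of system load.
func Adaptive() fiber.Handler {
    shedder := load.NewAdaptiveShedder(load.WithCpuThreshold(900)) // go-zero uses a 0-1000 scale, so 900 ≈ 90% CPU
    return func(c *fiber.Ctx) error {
        promise, err := shedder.Allow()
        if err != nil {
            // Overloaded: drop the request immediately.
            return c.SendStatus(fiber.StatusServiceUnavailable)
        }
        if err := c.Next(); err != nil {
            promise.Fail()
            return err
        }
        promise.Pass()
        return nil
    }
}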

Behzad-Khokher commented 11 months ago

If this is up for grabs, I would like to look into this.

caner-cetin commented 11 months ago

Well, it didn't catch that much attention, and it is a somewhat niche request, since load shedding is usually handled by AWS, Azure, or some other cloud service rather than in the backend itself.

So I would say take a look at more "requested" features.

Still, I am not the maintainer or owner, just my 2 cents

@Behzad-Khokher

Behzad-Khokher commented 11 months ago

@caner-cetin Thanks for getting back to me. It is definitely a niche request, but for my first open-source contribution it seemed like something I could implement. I'm still open to it, and I'm actively looking into other features as well. Let's see 😃

Behzad-Khokher commented 11 months ago

@ReneWerner87 @caner-cetin @mirusky Hi guys, I have been researching and looking into this middleware. I have written some code and was hoping you could take a look and let me know whether it is heading in the right direction.

https://github.com/gofiber/fiber/commit/f16257793c446b9a094d50eef5f5cd037ec3eaca <- Click link

Also my main question was:

caner-cetin commented 11 months ago

Hi, just my 2 cents again. @Behzad-Khokher, maybe use semaphores for request queueing? Declare a new weighted semaphore, with the queue size as the maximum combined weight.

Then call TryAcquire when a request reaches this middleware; if it returns false, respond with a Service Unavailable error, otherwise the request holds a slot and is processed, releasing it when done.
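A minimal sketch of the non-blocking variant of that idea with golang.org/x/sync/semaphore (maxInFlight is an illustrative parameter, not an existing Fiber option):

package shedding

import (
    "github.com/gofiber/fiber/v2"
    "golang.org/x/sync/semaphore"
)

// ConcurrencyLimit sheds requests once maxInFlight requests are already in
// flight, instead of queuing them.
func ConcurrencyLimit(maxInFlight int64) fiber.Handler {
    sem := semaphore.NewWeighted(maxInFlight)
    return func(c *fiber.Ctx) error {
        // TryAcquire never blocks: if no slot is free, shed with 503.
        if !sem.TryAcquire(1) {
            return c.SendStatus(fiber.StatusServiceUnavailable)
        }
        defer sem.Release(1)
        return c.Next()
    }
}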

I am not familiar with the library itself, and my algorithm-building skills are literal shit, so take this advice with a grain of salt.

Behzad-Khokher commented 11 months ago

Hi @caner-cetin ,

Thank you for the feedback. I've been looking into load shedding middleware and how others implement it. I haven't come across any middleware that queues requests for load shedding.

I'm guessing that the main reason is that load shedding is more about rejecting or dropping excessive requests when the system is overloaded, rather than retaining them and trying to manage them. Like you mentioned, we already have other services like AWS doing this load balancing and management.

Another thing that concerned me is the potential for increased latency. I think it's better to reject a request quickly so a client can make a request again, rather than leaving a client waiting with an unknown response time.

Additionally, we might get stuck in a loop. Imagine, for example, that the CPU becomes overloaded and we begin queuing requests to reduce the load; once we start de-queuing, that could reintroduce the overload, trapping us in a recurring cycle, plus increased latency for newly incoming requests, which would also be queued because the queue hasn't emptied yet. This is just theoretical, but it could end up being a problem.

So I think the best approach is to implement a mechanism that simply rejects requests when load goes above a certain threshold. However, I have refined that approach into proportional request rejection based on system load, where the probability of rejection grows as the load approaches UpperThreshold.

I have implemented Proportional Request Rejection Based on Probabilistic CPU Load. In summary this just means that, as the CPU usage approaches the UpperThreshold, the probability of rejecting incoming requests increases. When the CPU usage is above the UpperThreshold, all requests will be rejected.

Essentially, as CPU load increases, the probability of rejecting a request increases proportionally.
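For illustration, one way this proportional rejection can be computed (the thresholds and helper below are a sketch, not necessarily what the linked commit does):

package shedding

import "math/rand"

const (
    lowerThreshold = 0.90
    upperThreshold = 0.95
)

// shouldReject rejects nothing below lowerThreshold, everything at or above
// upperThreshold, and in between rejects with a probability that rises
// linearly from 0 to 1 across the range.
func shouldReject(cpuLoad float64) bool {
    switch {
    case cpuLoad < lowerThreshold:
        return false
    case cpuLoad >= upperThreshold:
        return true
    default:
        p := (cpuLoad - lowerThreshold) / (upperThreshold - lowerThreshold)
        return rand.Float64() < p
    }
}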

But again, this is my first contribution, so I'm looking forward to the maintainers' feedback. If this even makes sense, lol.

https://github.com/gofiber/fiber/commit/39a1e3007b319eb4945e5063c23c6af6d500f5ac

Do give any feedback if possible.