facil-io / cstl

facil.io C STL - a Server Toolbox Library for C, including JSON processing, hash maps, dynamic arrays, binary strings and more.
https://facil.io
MIT License

Consider making queues optionally bounded #24

Closed by michaellenaghan 11 months ago

michaellenaghan commented 11 months ago

Currently, the async queue in particular is unbounded. If the arrival rate of new requests significantly exceeds the processing rate of those requests, the queue will grow without bound — or, more specifically, until the process runs out of memory.

Consider making all queues optionally bounded (and unbounded by default).

Here's one possible approach:

If you approve of the idea, let me know if you'd like a pull request.

boazsegev commented 11 months ago

Hi, I'm open to discussing this, but, as I wrote in the other thread (issue #17):

> Shouldn't the async queue be bounded?

I don't think it's safe to have the async queue add back-pressure.

IMHO, placing pressure on the async queue should be allowed, and security concerns about this should be handled upstream.

For example, HTTP connections allow only a single HTTP request to be handled at a time (per client), preventing any client from request-flooding. The WebSocket logic also pauses and resumes the incoming IO in an attempt to prevent a fast client from flooding the server.

This doesn't guarantee protection against queue flooding, since that also depends on the rest of the code, but it does prevent request/message flooding.

IMHO, the most likely outcome of a bounded queue is that tasks that fail to schedule will be performed immediately, which might cause some tasks to run on the wrong thread or lead to deadlocks. Having a memory-hungrier queue (or even having the worker process crash) seems much safer.

michaellenaghan commented 11 months ago

Sorry, I missed the response in the other issue. I've looked at the queue code, but not the HTTP code; I didn't realize that "HTTP connections allow only a single HTTP request to be handled at a time (per client)."

Thanks!