lukego closed this issue 6 years ago.
TBH I haven't worked on or even looked at this code in years now. It's more or less unmaintained (although I still merge PRs).
That said, I think the buffer you're talking about will just grow indefinitely, until OOM. As for watermarks, I don't remember that ever being implemented in any form. If this is a problem you foresee yourself having, you might have to call into cl-libuv directly to achieve what you need, since I'm sure libuv handles these kinds of interactions. Although at that point, it might make more sense to use cl-libuv directly... I can't speak to how messy that would be.
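Off the top of my head, if polling is good enough for your use case, something like this might work. Totally untested: it assumes libuv >= 1.19 (which added uv_stream_get_write_queue_size) and that you can reach the underlying uv_stream_t pointer through cl-async's internal socket-c accessor, neither of which is documented API, so check against your versions:

```lisp
;; Untested sketch. Assumes libuv >= 1.19 and that as::socket-c (internal
;; accessor, not documented API) returns the raw uv_stream_t pointer.
;; Note this only counts bytes already handed to libuv, not anything
;; cl-async may still be buffering on the Lisp side.
(defun buffered-output-bytes (socket)
  "Bytes queued in libuv for SOCKET that haven't hit the wire yet."
  (cffi:foreign-funcall "uv_stream_get_write_queue_size"
                        :pointer (as::socket-c socket)
                        ;; returns size_t; :unsigned-long is the right
                        ;; width on 64-bit unix, adjust for your platform
                        :unsigned-long))

;; Poll from a timer and throttle the producer past a threshold.
(defun watch-watermark (socket high-cb &key (threshold (* 1024 1024)))
  (as:delay (lambda ()
              (when (> (buffered-output-bytes socket) threshold)
                (funcall high-cb socket))
              ;; reschedule ourselves so this keeps polling
              (watch-watermark socket high-cb :threshold threshold))
            :time 0.1))
```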
Hope this helps at least a little bit.
Thanks for the feedback!
cl-async looks awesome and I am glad to do some hacking with it :). I have a couple of questions about correct and robust usage.

Question 1: Suppose an application is continuously writing output to a socket faster than the network is able to deliver it. This data must be buffered somewhere, and the size of that buffer must be finite. What happens when that buffer overflows?
Question 2: Is there an idiom for an application to detect when the total buffered output for a socket crosses a threshold? e.g. an event callback that fires when the total buffered output for a socket rises above or falls below "watermark" thresholds, so that the application can implement flow control and stop producing when the buffer is too big.
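To make that concrete, the kind of interface I have in mind would be something like this (all names invented for illustration; this is not actual cl-async API):

```lisp
;; Hypothetical API, made up to illustrate the question -- not real cl-async.
;; pause-producer / resume-producer stand in for application flow control.
(set-socket-watermarks socket
                       :high (* 1024 1024)  ; fire above-cb once >1MB is buffered
                       :low  (* 64 1024)    ; fire below-cb once it drains below 64KB
                       :above-cb (lambda (sock) (pause-producer sock))
                       :below-cb (lambda (sock) (resume-producer sock)))
```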
I didn't see the answers to these questions in the documentation. Glancing at the code, it looks like (?) data is buffered via the fast-io/static-vectors libraries, and these place no explicit limits on buffer allocation. So I would imagine that the application would keep allocating memory somewhere, either on the Lisp heap or via the FFI, and eventually the Lisp image or the kernel would detect the out-of-memory condition and take action (e.g. kill the process or raise an OUT-OF-MEMORY kind of error). Is that correct?