csmart closed this issue 7 years ago
It's not necessarily that we're going for performance over robustness - the current design aims to prevent situations where one slow reader holds up a faster one (say, two users connected to the console, one over a slow link). Turns out that was probably a bad idea - as you say, it affects correctness in other cases too.
I think the fix here is to do a blocking send if client_queue_data fails.
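To illustrate the suggestion, here is a minimal sketch of what a blocking-send fallback could look like. The helper name `write_blocking` and its use as a fallback are assumptions for illustration; `client_queue_data` is the only name taken from the discussion, and its real signature is not shown here.

```c
/* Hypothetical sketch: when the client's queue is full, fall back to a
 * write that waits for the socket to drain instead of dropping data.
 * Only client_queue_data is from the discussion; everything else is an
 * assumption for illustration. */
#include <errno.h>
#include <poll.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Write all of buf to fd, waiting with poll() whenever the
 * (possibly non-blocking) socket would block. */
static int write_blocking(int fd, const uint8_t *buf, size_t len)
{
	while (len) {
		ssize_t rc = write(fd, buf, len);
		if (rc < 0) {
			if (errno == EAGAIN || errno == EWOULDBLOCK) {
				/* socket is full: wait until it is writable */
				struct pollfd pfd = {
					.fd = fd,
					.events = POLLOUT,
				};
				if (poll(&pfd, 1, -1) < 0)
					return -1;
				continue;
			}
			return -1;
		}
		buf += rc;
		len -= (size_t)rc;
	}
	return 0;
}
```

The trade-off is the one raised below: while one client is blocked in this path, no other client makes progress either.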
... and possibly evaluate whether the other handlers need flow control too (in addition to the socket handler), which may suggest implementing it in the core code rather than repeating it in each handler.
If we just block on client_queue_data, doesn't this mean that we may end up losing data on the console side? Would we be better off implementing flow control on the console side when we detect a slow client? That way all clients should stay synced, because no new data is coming in or out.
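For what the console-side flow control idea might look like: pause reads from the console fd while any client is backlogged, and resume once everyone has drained. All names and the high-water threshold below are assumptions, not the project's actual structures.

```c
/* Hypothetical sketch of console-side flow control: while any client's
 * output queue is over a high-water mark, stop polling the console fd
 * for input, so no new data arrives until every client catches up.
 * Structure names and the threshold are illustrative assumptions. */
#include <stdbool.h>
#include <stddef.h>

#define HIGH_WATER 4096 /* bytes; arbitrary example threshold */

struct client {
	size_t queued; /* bytes waiting to be sent to this client */
};

/* Returns true if the console fd should keep POLLIN set. */
static bool console_may_read(const struct client *clients, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (clients[i].queued >= HIGH_WATER)
			return false; /* a slow client is backlogged */
	return true;
}
```

The main loop would consult this before adding the console fd to its poll set, which is what keeps all clients in sync at the cost of letting the slowest client set the pace.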
Hey @jk-ozlabs @shenki don't worry about feedback about this anymore, feel free to take it up and do whatever you like. Cheers.
@csmart I notice you closed this ticket. Did you get a fix merged?
Nope, was waiting on input/review/advice.
Ok, I will reopen, as the issue isn't fixed.
If there are too many connections with too much data in flight, the server starts dropping data in order to ensure everyone gets the latest output.
The file descriptor for the serial device is non-blocking, so when the buffer gets full, older data is simply overwritten.
@jk-ozlabs suggests that we should make the fd non-blocking until the buffer gets full, and then block.
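The fd-mode part of that suggestion is just a standard fcntl() toggle; here is a minimal sketch. The policy of when to call it (on buffer-full, back to non-blocking once drained) is the assumption, the flag handling itself is standard POSIX.

```c
/* Sketch for the suggestion above: keep the fd non-blocking in normal
 * operation, then switch it to blocking once the buffer fills so writes
 * wait instead of overwriting data. The fcntl() usage is standard; the
 * policy around when to call this is an assumption. */
#include <fcntl.h>

/* blocking != 0: clear O_NONBLOCK; blocking == 0: set O_NONBLOCK. */
static int set_blocking(int fd, int blocking)
{
	int flags = fcntl(fd, F_GETFL, 0);

	if (flags < 0)
		return -1;
	if (blocking)
		flags &= ~O_NONBLOCK;
	else
		flags |= O_NONBLOCK;
	return fcntl(fd, F_SETFL, flags);
}
```

A caveat worth noting: while the fd is in blocking mode, the whole event loop stalls on that write, which is the same global-pause behaviour discussed above.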