Currently the server streams out the first chunk as-is, so the chunk length effectively acts as the buffer size. A user who connects in the brief window right after the file appears will be served quickly and may see a very small stutter at the next chunk; since that chunk is sent out at the maximum available speed, the stutter is quite small and hasn't really been observed in tests.
When #1 is fixed, we'll lose that buffer: the stream will be sent out at real-time speed (1x), the sending buffer will be close to zero (as will the latency), and this may result in more stutters at the beginning.
The solution would be to introduce a small, configurable delay before starting to send the first chunk to queued clients, while serving clients that connect after that delay immediately. The resulting latency would then fall between the configured minimum and the chunk length.
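As a rough illustration only (the `Stream`, `Client`, `start_delay`, and `send` names are assumptions, not the project's actual API), here is a minimal sketch of deferring the first chunk for early-queued clients while serving late joiners right away:

```python
# Sketch: clients queued before the first chunk wait out a small configurable
# delay; clients that connect after the delay has elapsed are served at once.
import threading
import time


class Client:
    def __init__(self):
        self.queue = []                  # stand-in for the real per-client send buffer

    def send(self, chunk):
        self.queue.append(chunk)


class Stream:
    def __init__(self, start_delay=0.5):
        self.start_delay = start_delay   # the small, controllable defer (seconds)
        self.first_chunk = None
        self.first_chunk_at = None
        self.waiting = []                # clients queued before the delay elapsed
        self.lock = threading.Lock()

    def on_first_chunk(self, chunk):
        """Called once the first chunk of the file appears."""
        with self.lock:
            self.first_chunk = chunk
            self.first_chunk_at = time.monotonic()
        # Release the queued clients only after the configured delay, so a
        # small send buffer builds up before real-time pacing takes over.
        threading.Timer(self.start_delay, self._flush_waiting).start()

    def _flush_waiting(self):
        with self.lock:
            waiting, self.waiting = self.waiting, []
            chunk = self.first_chunk
        for client in waiting:
            client.send(chunk)

    def add_client(self, client):
        with self.lock:
            delay_elapsed = (
                self.first_chunk is not None
                and time.monotonic() - self.first_chunk_at >= self.start_delay
            )
            if delay_elapsed:
                client.send(self.first_chunk)   # joined late: serve immediately
            else:
                self.waiting.append(client)     # joined early: wait out the delay
```

With this kind of gate, a client never waits longer than the configured delay once the first chunk exists, and a client that connects after the delay pays no extra startup cost at all.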