My recollection from past discussions is that feedback was given that maxBufferSize is probably unneeded and potentially harmful for processing video frames (there is some discussion at https://github.com/w3c/mediacapture-transform/issues/69#issuecomment-838436099 at least). As such, I do not think this particular feature has reached consensus.
It is potentially harmful for video tracks because video frames may be scarce resources, and buffering them may prevent the camera from providing fresher frames. For example, if a web developer sets maxBufferSize to 10 and the camera's frame pool holds 9 frames, then if the web developer stops reading for 1 second, the camera will be unable to produce any new frames during that second.
It is potentially unneeded because this buffering can probably be implemented in pure JavaScript, by calling read() again synchronously from the fulfillment callback of the previous read() promise.
By letting JavaScript do the buffering, web developers can, for instance, decide in whatever way suits their application how to decimate frames if the buffer grows too large.
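As a minimal sketch of that idea (not spec text; `track` is assumed to be a camera MediaStreamTrack, and the buffer limit and drop-oldest policy are just illustrative app choices):

```js
// Hypothetical user-land buffering of frames from a MediaStreamTrackProcessor.
const processor = new MediaStreamTrackProcessor({ track });
const reader = processor.readable.getReader();

const MAX_BUFFER = 4; // illustrative app-chosen limit
const buffer = [];

async function pump() {
  for (;;) {
    // Issue the next read() as soon as the previous one settles.
    const { value: frame, done } = await reader.read();
    if (done) return;
    buffer.push(frame);
    // App-specific decimation policy: drop (and close) the oldest frame
    // when over the limit, releasing its slot in the camera's frame pool.
    while (buffer.length > MAX_BUFFER) {
      buffer.shift().close();
    }
  }
}
pump();
```

Closing dropped frames promptly is the important part: it returns the underlying buffers to the camera, which is exactly the control a UA-side maxBufferSize would take away.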