janev94 opened this issue 6 months ago
Thanks for sharing these observations. We use `latencyInMs` to estimate the time between starting the request and receiving the first byte. By default, `useDeadTimeLatency` is set to `true`, which means we do not add the latency to the throughput calculation:
```js
if (isNaN(throughputInKbit)) {
    const referenceTimeInMs = settings.get().streaming.abr.throughput.useDeadTimeLatency ? downloadTimeInMs : downloadTimeInMs + latencyInMs;
    throughputInKbit = Math.round((8 * downloadedBytes) / referenceTimeInMs); // bits/ms = kbits/s
}
```
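To make the effect of the setting concrete, here is a minimal self-contained sketch (the helper name and the sample numbers are mine, not dash.js code) of how the two modes differ:

```javascript
// Sketch of the calculation above (hypothetical helper, not the
// actual dash.js implementation).
function estimateThroughputKbit(downloadedBytes, downloadTimeInMs, latencyInMs, useDeadTimeLatency) {
    // With useDeadTimeLatency === true the dead time (latency) is
    // excluded from the reference interval.
    const referenceTimeInMs = useDeadTimeLatency ? downloadTimeInMs : downloadTimeInMs + latencyInMs;
    return Math.round((8 * downloadedBytes) / referenceTimeInMs); // bits/ms = kbit/s
}

// 500 kB downloaded in 400 ms, preceded by 100 ms of latency:
estimateThroughputKbit(500000, 400, 100, true);  // 10000 kbit/s (latency excluded)
estimateThroughputKbit(500000, 400, 100, false); //  8000 kbit/s (latency included)
```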
With version 5 we added support for the Resource Timing API. Beyond that, I don't see an alternative for getting more accurate values from the browser. Do you have a suggestion for what we should change?
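For reference, a Resource Timing entry already separates dead time from download time. A rough sketch (my own helper, assuming a `PerformanceResourceTiming`-shaped entry whose `requestStart`/`responseStart`/`responseEnd`/`transferSize` fields are populated, which for cross-origin requests requires a permissive `Timing-Allow-Origin` header) could look like:

```javascript
// Derive latency and download time from a PerformanceResourceTiming-like
// entry. Hypothetical helper; the field names follow the Resource Timing spec.
function timingToMetrics(entry) {
    const latencyInMs = entry.responseStart - entry.requestStart;     // request sent -> first byte
    const downloadTimeInMs = entry.responseEnd - entry.responseStart; // first byte -> last byte
    return {
        latencyInMs,
        downloadTimeInMs,
        throughputInKbit: Math.round((8 * entry.transferSize) / downloadTimeInMs)
    };
}

// Example with a mocked entry; in a browser this would come from
// performance.getEntriesByType('resource'):
timingToMetrics({ requestStart: 100, responseStart: 140, responseEnd: 340, transferSize: 250000 });
// -> { latencyInMs: 40, downloadTimeInMs: 200, throughputInKbit: 10000 }
```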
I have built a custom test environment using Linux `tc` (traffic control). In my experiments I connect a dash.js client to an Nginx HTTP/2 server and use `abrThroughput` as the bitrate selection algorithm.
I have noticed that `latencyInMs` can sometimes vary considerably, which in turn leads to poor ABR selection decisions.
I managed to track down instances where this happens by inspecting a tcpdump pcap, and I noticed that the method dash.js uses to calculate `latencyInMs` is not always representative of the connection RTT. Below is a screenshot of a pcap that shows this:
Packet 136285 is where the HTTP request is sent; packet 136287 is where the first data of the HTTP response is received. dash.js will report the time difference between these two packets as the latency, but the connection RTT is lower than that, as shown by the intervening ACK packet.
I am aware that there is no way for dash.js to obtain transport-level metrics such as the RTT, but I wondered whether `latencyInMs` is meant to approximate the connection RTT, and if so, whether the maintainers are aware that this issue can occur.
Taking an EWMA of the measured latency does not seem sufficient either, since, as demonstrated, the "incorrect" value can be high enough to skew the EWMA calculation as well.
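To illustrate that last point, here is a minimal EWMA sketch (the smoothing factor and the sample values are illustrative, not dash.js defaults) showing how a single inflated latency sample distorts the smoothed estimate:

```javascript
// Simple exponentially weighted moving average; a single outlier
// pulls the estimate well away from the steady-state value.
function ewma(samples, alpha) {
    let smoothed = samples[0];
    for (let i = 1; i < samples.length; i++) {
        smoothed = alpha * samples[i] + (1 - alpha) * smoothed;
    }
    return smoothed;
}

// Steady 50 ms latency with one mis-measured 500 ms sample:
ewma([50, 50, 50, 500, 50], 0.2); // ~122 -- still far above the true ~50 ms
```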