In loss estimation mode (-m losses), we want to measure how many "glitches" the loopback introduces, e.g. caused by packet losses, buffer underruns/overruns, and similar.
The current loss estimation algorithm is very naive and imprecise. We produce a continuous sine wave and count how often per second a "glitch" occurs, where a glitch is defined as a window during which no samples have high amplitude. (It is implemented by applying a running maximum.)
This works to some degree, but the measurements are unstable and it is not clear how to interpret their units.
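For illustration, here is a minimal sketch of the running-maximum approach described above. The names and parameters (count_glitches, window_size, amplitude_threshold) are hypothetical and are not taken from the actual implementation; this is only meant to show how a "window with no high-amplitude samples" can be detected.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Counts "glitches": fixed-size windows in which the maximum of the absolute
// sample amplitude never reaches the expected threshold.
// (Hypothetical sketch; not the project's actual code.)
size_t count_glitches(const std::vector<float>& samples,
                      size_t window_size,        // window length in samples
                      float amplitude_threshold) // minimum expected amplitude
{
    size_t glitches = 0;

    for (size_t start = 0; start + window_size <= samples.size();
         start += window_size) {
        // Running maximum of |sample| over the window.
        float running_max = 0.0f;
        for (size_t i = start; i < start + window_size; i++) {
            running_max = std::max(running_max, std::abs(samples[i]));
        }
        // A window with no high-amplitude samples counts as a glitch.
        if (running_max < amplitude_threshold) {
            glitches++;
        }
    }

    return glitches;
}
```

Dividing the resulting glitch count by the capture duration gives the "glitches per second" figure, which is exactly the unit that is hard to interpret and that this task proposes to replace.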
We need to invent a better loss ratio metric and an algorithm for its computation.