console-rs / indicatif

A command line progress reporting library for Rust

Fix incorrect seconds_per_step calculation #535

Closed afontenot closed 1 year ago

afontenot commented 1 year ago

seconds_per_step looks at a ring buffer with samples taken each tick. Each sample is

<duration of the tick> / <number of progress steps>

This results in an incorrect seconds_per_step when it is calculated as a raw average of the buffer values. We have no reason to assume any tick is a more representative sample than any other, but a raw average treats ticks with very few steps as more representative than ticks with many steps, because the value of the denominator is lost in each sample.

The result is a very poor ETA / bitrate estimation when step production follows a burst-wait pattern, since ticks with few or no steps are to be expected.
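A pair of hypothetical ticks makes the bias concrete (the numbers are illustrative, not taken from indicatif): one busy tick and one nearly idle tick of equal length, averaged both ways:

    // Hypothetical numbers: two 1-second ticks, one with 99 steps and
    // one with a single step.
    fn main() {
        let samples = [
            1.0 / 99.0, // tick A: 1 s, 99 steps -> ~0.0101 s/step
            1.0 / 1.0,  // tick B: 1 s, 1 step   ->  1.0    s/step
        ];
        // Raw average of the per-tick samples, as described above.
        let naive: f64 = samples.iter().sum::<f64>() / samples.len() as f64;
        // Rate over the whole window: total seconds / total steps.
        let actual = 2.0 / 100.0;
        println!("naive = {naive:.3} s/step, actual = {actual:.3} s/step");
        // naive ~ 0.505 s/step vs. actual 0.020 s/step: off by ~25x.
    }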

A better estimate could be achieved by averaging the per-tick step rates, but this would still not be ideal, since it would treat short ticks and long ticks as equally representative in the average.

Instead, we make two changes:

First, we change the buffer to store the total number of steps rather than the difference between steps. This necessitates changing the new / reset code to set the first buffer item to zero.

Second, we create a second ring buffer to store the timestamp of each tick. As a result, we can calculate the seconds_per_step rate by simply looking at the earliest and latest values in the buffers and returning (now() - first_time) / (last_steps - first_steps).

By using now() instead of last_time in the above equation, we also fix an additional issue. Since we only add items to the buffer when new steps occur, the progress bar freezes and the ETA does not update when progress stalls. By always using the current time, we get a more accurate seconds_per_step estimate with no additional overhead.
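To make the scheme concrete, here is a minimal self-contained sketch of the idea. The Estimator type, its field names, and the buffer capacity are all illustrative, not indicatif's actual internals:

    use std::collections::VecDeque;
    use std::time::Instant;

    // Illustrative only: a pair of ring buffers holding cumulative step
    // counts and the timestamp of each tick.
    struct Estimator {
        steps: VecDeque<u64>,
        step_instants: VecDeque<Instant>,
        capacity: usize,
    }

    impl Estimator {
        fn new(now: Instant) -> Self {
            let mut est = Estimator {
                steps: VecDeque::new(),
                step_instants: VecDeque::new(),
                capacity: 16,
            };
            // As described above, new / reset seeds the first item with zero.
            est.steps.push_back(0);
            est.step_instants.push_back(now);
            est
        }

        fn record(&mut self, total_steps: u64, now: Instant) {
            if self.steps.len() == self.capacity {
                self.steps.pop_front();
                self.step_instants.pop_front();
            }
            self.steps.push_back(total_steps);
            self.step_instants.push_back(now);
        }

        fn seconds_per_step(&self, now: Instant) -> f64 {
            let steps = self.steps.back().unwrap() - self.steps.front().unwrap();
            // Keep the numerator at 0.0 when no steps occurred, so the
            // division yields NaN rather than +inf (see discussion below).
            let duration = if steps != 0 {
                (now - *self.step_instants.front().unwrap()).as_secs_f64()
            } else {
                0.0
            };
            duration / steps as f64
        }
    }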

Fixes https://github.com/console-rs/indicatif/issues/534.

afontenot commented 1 year ago

The following two changes required modifying the tests:

> changing the new / reset code to set the first buffer item to zero.

This means that a newly initialized ring buffer contains 1 element, not 0.

> using now() instead of last_time in the above equation

Depending on the latest time when calculating seconds_per_step means that the test which checks the rate calculation can no longer be exact. Instead, I check that it falls between the rates expected for two very close Instant values: one taken immediately before and one immediately after the call to seconds_per_step.
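As a hedged sketch of what that bounding check looks like (reusing the hypothetical Estimator from the earlier sketch, not the real test code):

    use std::time::Instant;

    #[test]
    fn rate_is_bounded_by_surrounding_instants() {
        let start = Instant::now();
        let mut est = Estimator::new(start);
        est.record(10, Instant::now());

        let before = Instant::now();
        let rate = est.seconds_per_step(Instant::now());
        let after = Instant::now();

        // seconds_per_step samples now() internally, so its exact value
        // can't be predicted; instead, bound it between the rates implied
        // by instants taken immediately before and after the call.
        let lower = (before - start).as_secs_f64() / 10.0;
        let upper = (after - start).as_secs_f64() / 10.0;
        assert!(rate >= lower && rate <= upper);
    }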

djc commented 1 year ago

I'm not able to review this in detail for the next week or two, but here is some early feedback:

Thanks again for working on this; your contribution is much appreciated!

afontenot commented 1 year ago

@djc thanks, I'll add some comments and improve the tests

Quick note on this:

> Tests for any overflow/underflow/divide-by-zero scenarios would also be great to have.

Yep, the status quo here is actually a bit of a mess. You can easily end up dividing by zero if, e.g., time passes but no steps occur. The existing code relies on ensuring that the numerator in the rate calculation is always zero whenever the denominator is, so that the division produces a NaN float, which is then explicitly tested for elsewhere in the code.

I believe I imitated the existing behavior in this area exactly, which is the reasoning behind the strange code here:

    // Leave duration at 0.0 when no steps have occurred, so that the
    // subsequent division works out to 0.0 / 0.0 = NaN (not +inf).
    let mut duration: f64 = 0.0;
    if steps != 0 {
        duration = duration_to_secs(now - self.step_instants[first_pos]);
    }

But it would be good to test for this, or maybe change the behavior to not rely on NaN floats. Of course the latter would require having an opinion on the correct value to show for the ETA in the case that time has passed but no progress has been made. As I didn't have an opinion, I fell back on maintaining the status quo.
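For illustration, a minimal standalone demonstration of that convention (my sketch, not the library's code):

    fn main() {
        let steps: u64 = 0;
        // The numerator is forced to 0.0 whenever the denominator is zero...
        let duration = if steps != 0 { 5.0 } else { 0.0 };
        let rate = duration / steps as f64;
        // ...so the division yields NaN rather than +inf, and callers
        // can test for it explicitly.
        assert!(rate.is_nan());
    }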

afontenot commented 1 year ago

Just a heads up that I have put improvements for this PR on hold until my work on an exponentially weighted average can see some review. If we end up switching the estimator to use that approach, this PR will become irrelevant and need to be closed.

afontenot commented 1 year ago

Going to close this for now as it looks like we will be able to merge some form of an exponentially weighted average, which solves this problem in a different way.