yunojuno / django-juno-testrunner

A more useful (and slightly more glamorous) test runner for Django 1.6+ from the folks at YunoJuno
MIT License

improve remaining time estimation #10

Open marcelkornblum opened 9 years ago

marcelkornblum commented 9 years ago

It's currently very poor: on one current project the estimate never goes above 7.5 minutes, but the total run time is over 20 minutes.

Perhaps rely on the last run's time; perhaps even track it per-test, to improve estimates for reruns etc.?

stevejalim commented 9 years ago

Current estimation code is https://github.com/yunojuno/django-juno-testrunner/blob/master/junorunner/extended_runner.py#L152

    @property
    def _estimated_time(self):
        "Calculate an estimated time to complete the test run."
        # Guard first: avoid a ZeroDivisionError before any test has finished,
        # and return 0 once the counter has passed the total.
        if not self.current_test_number or self.current_test_number > self.total_tests:
            return 0
        elapsed = self._elapsed_time
        # Linear extrapolation: assume the remaining tests take as long,
        # on average, as the ones already run.
        factor = float(self.total_tests) / float(self.current_test_number)
        return (elapsed * factor) - elapsed
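The extrapolation only has the current run to go on, so the first few tests dominate the estimate early in a run. One way to act on the "rely on the last run time" suggestion would be to blend the live extrapolation with the previous run's total, weighting towards the live figure as the run progresses. A minimal sketch - `blended_remaining` and `last_run_total` are hypothetical, not part of the runner:

```python
def blended_remaining(elapsed, current, total, last_run_total):
    """Estimate seconds remaining, blending live data with the last run.

    All times are in seconds; `last_run_total` is the previous run's
    total duration (assumed to be persisted somewhere between runs).
    """
    if current == 0:
        # Nothing has finished yet: the previous run is our only signal.
        return last_run_total
    extrapolated_total = elapsed * float(total) / current
    progress = float(current) / total  # 0.0 at start -> 1.0 at end
    # Trust the previous run early on, the live extrapolation later.
    blended_total = progress * extrapolated_total + (1 - progress) * last_run_total
    return max(blended_total - elapsed, 0.0)
```

This keeps early estimates anchored to a realistic total instead of whatever the first handful of tests happen to cost.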
stevejalim commented 9 years ago

@marcelkornblum Next time you get a slow run, can you paste some of the examples of the estimated remaining time never going over X? I've not had that happen to me.

stevejalim commented 9 years ago

FWIW, I don't see the same behaviour as @marcelkornblum

[..1629 <- 1726] Elapsed: 00:16:43; Remaining: 00:00:59; Errors: 0, Failures: 0, Skipped: 41, Passed: 1588 ]

marcelkornblum commented 9 years ago

Just re-reading this, I should clarify that the estimated remaining time never goes over X. Here are some lines from a run I'm doing ATM (it's worth noting that the test suite has moved on since I first raised this):

[..0492 <- 1772] Elapsed: 00:01:58; Remaining: 00:05:07;  Errors: 0,  Failures: 0,  Skipped: 17,  Passed: 475 ]
...
[..0859 <- 1772] Elapsed: 00:09:32; Remaining: 00:10:07;  Errors: 0,  Failures: 0,  Skipped: 25,  Passed: 834 ]
...
[..1054 <- 1772] Elapsed: 00:14:48; Remaining: 00:10:04;  Errors: 0,  Failures: 0,  Skipped: 26,  Passed: 1028 ]
...
[..1341 <- 1772] Elapsed: 00:21:06; Remaining: 00:06:45;  Errors: 0,  Failures: 0,  Skipped: 42,  Passed: 1299 ]
...
[..1530 <- 1772] Elapsed: 00:25:48; Remaining: 00:04:03;  Errors: 0,  Failures: 0,  Skipped: 43,  Passed: 1487 ]
...
[..1772 <- 1772] Elapsed: 00:31:01; Remaining: 00:00:00;  Errors: 0,  Failures: 0,  Skipped: 45,  Passed: 1727 ]

It's actually more of an issue towards the beginning of the suite than the end, as the estimation average obviously settles down - the second and third outputs above illustrate this.
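For what it's worth, plugging the figures from the log lines above back into the linear extrapolation reproduces the reported "Remaining" values to within about a second, so the formula is behaving as written - the issue is its uniform-duration assumption, not a bug. A quick check (the helper name is just for illustration):

```python
# Re-derive the "Remaining" figures from the log lines above using the
# runner's linear extrapolation: total ~= elapsed * total_tests / current.
def estimate_remaining(elapsed, current, total):
    return elapsed * (float(total) / current) - elapsed

# (current test number, total tests, elapsed seconds) from the log above
samples = [
    (492, 1772, 1 * 60 + 58),    # reported Remaining: 00:05:07 (307s)
    (859, 1772, 9 * 60 + 32),    # reported Remaining: 00:10:07 (607s)
    (1054, 1772, 14 * 60 + 48),  # reported Remaining: 00:10:04 (604s)
]
for current, total, elapsed in samples:
    print("%.0f seconds remaining" % estimate_remaining(elapsed, current, total))
```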

One possible fix is to log the runtime of each test (or just the total run time and its parameters) to make the estimation better. Another might be to reorder the run to put the longest-running tests first, so that the estimate at least starts high and decreases (again based on the previous run).
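The per-test-logging idea could look something like the sketch below: persist each test's duration to a JSON file at the end of a run, estimate the remaining time from the previous run's per-test durations (falling back to the current run's mean for tests not seen before), and optionally reorder so previously-slow tests run first. All names here are hypothetical - none of this exists in junorunner:

```python
import json
import time


class TimingEstimator(object):
    """Hypothetical per-test timing store for better remaining-time estimates."""

    def __init__(self, path=".test_timings.json"):
        self.path = path
        try:
            with open(path) as fh:
                self.previous = json.load(fh)  # {test_id: seconds} from last run
        except (IOError, ValueError):
            self.previous = {}  # first run, or corrupt file: start empty
        self.current = {}
        self._started = None

    def start_test(self, test_id):
        self._test_id = test_id
        self._started = time.time()

    def stop_test(self):
        self.current[self._test_id] = time.time() - self._started

    def remaining(self, remaining_ids):
        # Use last run's duration where we have one; fall back to the mean
        # of this run's finished tests for tests we have never timed.
        seen = list(self.current.values())
        mean = sum(seen) / len(seen) if seen else 0.0
        return sum(self.previous.get(t, mean) for t in remaining_ids)

    def longest_first(self, test_ids):
        # Reorder so previously-slow tests run first, making the
        # estimate start high and decrease over the run.
        return sorted(test_ids, key=lambda t: self.previous.get(t, 0.0), reverse=True)

    def save(self):
        with open(self.path, "w") as fh:
            json.dump(self.current, fh)
```

`save()` would be called from the runner's teardown so the next run benefits, and `longest_first()` covers the reordering suggestion.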