Hi!
Some tests take a very long time (some individual tests take about 192s on a very fast machine), resulting in a total test time of anywhere from 30 minutes to an hour or worse, depending on the performance of the build machine.

This seems a bit excessive for a test suite that is run via `make check` in the context of packaging. I'd suggest that only fast tests run under `make check`, with benchmarks and slow tests kept for another target such as `make check-extensive` or similar; a sketch of what that could look like is below.

Other test runners such as Pytest typically deal with this using markers, e.g. tests marked 'slow' are skipped by default but can be selected when desired. I'm not sure whether such an approach can be accomplished with Autotools tests, hence my proposal of a separate target above.
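For illustration, here is a minimal, untested `Makefile.am` sketch of how the split might look with Automake's parallel test harness (which allows overriding `TESTS` at make time); all the test names are hypothetical placeholders, not this project's actual tests:

```makefile
# Hypothetical Makefile.am sketch: only fast tests run on
# "make check"; benchmarks/slow tests get their own target.
FAST_TESTS = test_basic test_api                 # placeholder names
SLOW_TESTS = bench_throughput bench_contention   # placeholder names
check_PROGRAMS = $(FAST_TESTS) $(SLOW_TESTS)

# Default suite run by "make check": fast tests only.
TESTS = $(FAST_TESTS)

# "make check-extensive" re-invokes the harness with the slow list;
# the parallel test harness honors a TESTS override on the command line.
check-extensive:
	$(MAKE) $(AM_MAKEFLAGS) check TESTS='$(SLOW_TESTS)'

.PHONY: check-extensive
```

Alternatively, since the Automake test harness treats an exit status of 77 as SKIP, the slow tests could check an environment variable and skip themselves unless it is set, which would behave much like Pytest's marker-based skipping.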
Thanks for this useful concurrency library!