canonical / hotsos

Software analysis toolkit. Define checks in a high-level language and leverage the library to perform analysis of common cloud applications.
Apache License 2.0

tox.ini/stestr: remove "--serial" to enable test parallelization #880

Closed mustafakemalgilor closed 4 months ago

mustafakemalgilor commented 4 months ago

Having "--serial" impedes the parallelization and thus affects the unit test execution speed greatly. We have no real reason to have "--serial" in place, it both makes the test execution slower and possibly hides hidden dependencies between the test cases. The unit tests must be able to run in parallel and should be indifferent to the ordering.

This patch removes the "--serial" argument and adds the "--random" and "--slowest" arguments, which randomize the test case ordering and report the slowest test cases, respectively.
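For illustration, here is a minimal sketch of what the stestr invocation in tox.ini could look like after this change. The testenv name, command layout, and posargs handling are assumptions for the sake of the example; the actual hotsos tox.ini may differ:

```ini
# Illustrative sketch only; the real hotsos tox.ini stanza may differ.
[testenv:py3]
commands =
    # before: stestr run --serial {posargs}
    # after: drop --serial so stestr schedules one worker per CPU core,
    #        randomize test ordering, and report the slowest tests
    stestr run --random --slowest {posargs}
```

With "--serial" removed, stestr falls back to its default concurrency (one worker per available CPU), which is where the speedup shown below comes from.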

There was a previous attempt to introduce this by @nicolasbock, but that patch never landed; I think we should reconsider it. Enabling parallelization reduces both CI pipeline time and local test run time, which benefits everyone working on the project, since single-core machines are rare nowadays.

Below are the run time differences between the baseline (main) and this patch (stestr-parallel):

Command: `tox`

main:

  py3: OK (66.73=setup[0.03]+cmd[0.06,66.64] seconds)
  coverage: OK (2.26=setup[0.01]+cmd[0.07,0.60,0.84,0.74] seconds)
  pep8: OK (0.27=setup[0.01]+cmd[0.26] seconds)
  pylint: OK (3.15=setup[0.01]+cmd[3.14] seconds)
  bashate: OK (0.05=setup[0.01]+cmd[0.04] seconds)
  yamllint: OK (1.13=setup[0.01]+cmd[1.12] seconds)
  functional: OK (37.35=setup[0.01]+cmd[37.34] seconds)
  hotyvalidate: OK (0.41=setup[0.01]+cmd[0.41] seconds)
  congratulations :) (111.38 seconds)

stestr-parallel:

  py3: OK (10.21=setup[0.03]+cmd[0.05,10.12] seconds)
  coverage: OK (2.53=setup[0.01]+cmd[0.43,0.55,0.82,0.73] seconds)
  pep8: OK (0.27=setup[0.01]+cmd[0.26] seconds)
  pylint: OK (2.93=setup[0.01]+cmd[2.93] seconds)
  bashate: OK (0.05=setup[0.01]+cmd[0.04] seconds)
  yamllint: OK (1.19=setup[0.01]+cmd[1.18] seconds)
  functional: OK (35.99=setup[0.01]+cmd[35.98] seconds)
  hotyvalidate: OK (0.42=setup[0.01]+cmd[0.41] seconds)
  congratulations :) (53.62 seconds)

Results: 6.53x improvement in "py3" runtime, 2.07x improvement in total runtime.

Command: `tox --parallel`

main:

  py3: OK (68.94=setup[0.17]+cmd[0.05,68.71] seconds)
  coverage: OK (2.25=setup[0.01]+cmd[0.08,0.55,0.83,0.78] seconds)
  pep8: OK (0.58=setup[0.17]+cmd[0.41] seconds)
  pylint: OK (3.62=setup[0.17]+cmd[3.45] seconds)
  bashate: OK (0.22=setup[0.17]+cmd[0.05] seconds)
  yamllint: OK (2.23=setup[0.17]+cmd[2.06] seconds)
  functional: OK (38.99=setup[0.16]+cmd[38.83] seconds)
  hotyvalidate: OK (0.81=setup[0.18]+cmd[0.64] seconds)
  congratulations :) (71.23 seconds)

stestr-parallel:

  py3: OK (12.12=setup[0.21]+cmd[0.06,11.86] seconds)
  coverage: OK (2.58=setup[0.01]+cmd[0.43,0.57,0.83,0.74] seconds)
  pep8: OK (0.75=setup[0.21]+cmd[0.54] seconds)
  pylint: OK (4.42=setup[0.21]+cmd[4.21] seconds)
  bashate: OK (0.26=setup[0.21]+cmd[0.05] seconds)
  yamllint: OK (2.09=setup[0.20]+cmd[1.88] seconds)
  functional: OK (41.57=setup[0.20]+cmd[41.37] seconds)
  hotyvalidate: OK (0.87=setup[0.21]+cmd[0.66] seconds)
  congratulations :) (41.61 seconds)

Results: 5.68x improvement in "py3" runtime, 1.71x improvement in total runtime.

Some samples from recent CI runs:

This MP: (CI run time screenshot)

PR 872: (CI run time screenshot)

Last "main" CI run: (CI run time screenshot)