Documentation: https://jerry-git.github.io/pytest-split
Source Code: https://github.com/jerry-git/pytest-split
PyPI: https://pypi.org/project/pytest-split/
Pytest plugin which splits the test suite into equally sized "sub suites" based on test execution time.
pytest-test-groups is great, but it does not take the execution time of the sub suites into account, which can lead to notably unbalanced execution times between them.

pytest-xdist is great, but it's not suitable for all use cases. For example, some test suites may be fragile with respect to the order in which the tests are executed. This is of course a fundamental problem in the suite itself, but sometimes it's not worth the effort to refactor, especially if the suite is huge (and smells a bit like legacy). Additionally, pytest-split may be a better fit in some use cases considering distributed execution.

pip install pytest-split
First we have to store test durations from a complete test suite run. This produces a .test_durations file which should be committed to the repo so that it is available during future test runs. The file path is configurable via the --durations-path CLI option.
pytest --store-durations
Then we can have as many splits as we want:
pytest --splits 3 --group 1
pytest --splits 3 --group 2
pytest --splits 3 --group 3
Time goes by, new tests are added and old ones are removed/renamed during development. No worries!
pytest-split
assumes average test execution time (calculated based on the stored information) for every test which does not have duration information stored.
Thus, there's no need to store durations after changing the test suite.
However, when there are major changes in the suite compared to what's stored in .test_durations, it's recommended to update the duration information with --store-durations
to ensure that the splitting is in balance.
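To illustrate the fallback behaviour, here is a minimal sketch, not the plugin's actual implementation (the function name is hypothetical), of assigning the average known duration to tests missing from the stored data:

```python
def fill_missing_durations(tests, stored):
    """Give tests without stored timing the average of the known
    durations, so new or renamed tests still split reasonably."""
    known = [stored[t] for t in tests if t in stored]
    # 1.0 is an arbitrary fallback for the case of no stored data at all
    avg = sum(known) / len(known) if known else 1.0
    return {t: stored.get(t, avg) for t in tests}

stored = {"test_a": 2.0, "test_b": 4.0}
print(fill_missing_durations(["test_a", "test_b", "test_new"], stored))
# -> {'test_a': 2.0, 'test_b': 4.0, 'test_new': 3.0}
```

Because the estimate is only an average, heavily skewed suites benefit from re-running --store-durations, as noted above.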
The splitting algorithm can be controlled with the --splitting-algorithm CLI option; it defaults to duration_based_chunks. For more information about the different algorithms and their tradeoffs, see the section below.
The slowest-tests command lists the slowest tests based on the information stored in the test durations file. See slowest-tests --help for more information.
pytest-random-order and pytest-randomly:

⚠️ pytest-split running with the duration_based_chunks algorithm is incompatible with test-order-randomization plugins. Test selection in the groups happens after randomization, potentially causing some tests to be selected in several groups and others not at all. Instead, a global random seed needs to be computed before running the tests (for example using $RANDOM from the shell), and that single seed then needs to be used for all groups by setting the --random-order-seed option.
nbval: pytest-split could, in principle, break up a single IPython Notebook into different test groups. This would most likely cause the broken-up pieces to fail (at the very least, package imports are usually done in Cell 1, so any piece that doesn't contain Cell 1 will certainly fail). To avoid this, after the splitting step is done, test groups are reorganized based on a simple algorithm, illustrated in the project documentation by a cartoon in which the letters (A to E) refer to individual IPython Notebooks and the numbers refer to the corresponding cell number.
The plugin supports multiple algorithms to split tests into groups.
Each algorithm makes different tradeoffs, but generally least_duration
should give more balanced groups.
| Algorithm | Maintains Absolute Order | Maintains Relative Order | Split Quality | Works with random ordering |
|---|---|---|---|---|
| duration_based_chunks | ✅ | ✅ | Good | ❌ |
| least_duration | ❌ | ✅ | Better | ✅ |
Explanation of the terms in the table: "Works with random ordering" refers to compatibility with test-order-randomization plugins such as pytest-randomly.
The duration_based_chunks
algorithm aims to find optimal boundaries for the list of tests and every test group contains all tests between the start and end boundary.
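The idea can be sketched like this (a simplified illustration, not the plugin's actual code; the function name and greedy boundary rule are assumptions for the example):

```python
def duration_based_chunks(splits, tests, durations):
    """Split an ordered list of tests into contiguous chunks whose
    total durations are roughly equal (simplified sketch)."""
    total = sum(durations[t] for t in tests)
    target = total / splits  # ideal duration per group
    groups = [[] for _ in range(splits)]
    group_time = 0.0
    index = 0
    for test in tests:
        # Once the current group reaches the target, start the next one
        if group_time >= target and index < splits - 1:
            index += 1
            group_time = 0.0
        groups[index].append(test)
        group_time += durations[test]
    return groups

tests = ["t1", "t2", "t3", "t4"]
durations = {"t1": 1.0, "t2": 1.0, "t3": 2.0, "t4": 2.0}
print(duration_based_chunks(2, tests, durations))
# -> [['t1', 't2', 't3'], ['t4']]
```

Because each group is a contiguous slice, both absolute and relative test order are preserved; the price, as the example shows (4s vs 2s of work), is that chunk boundaries can land unevenly.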
The least_duration
algorithm walks the list of tests and assigns each test to the group with the smallest current duration.
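A minimal sketch of this greedy strategy (again simplified, not the actual implementation):

```python
def least_duration(splits, tests, durations):
    """Walk the tests in order, always assigning the next test to the
    group with the smallest accumulated duration (simplified sketch)."""
    groups = [[] for _ in range(splits)]
    totals = [0.0] * splits
    for test in tests:
        idx = totals.index(min(totals))  # lightest group so far
        groups[idx].append(test)
        totals[idx] += durations[test]
    return groups

tests = ["t1", "t2", "t3", "t4"]
durations = {"t1": 2.0, "t2": 2.0, "t3": 1.0, "t4": 1.0}
print(least_duration(2, tests, durations))
# -> [['t1', 't3'], ['t2', 't4']]
```

Within a group, each test still appears after the tests that preceded it in the original list, so relative order is maintained even though absolute positions change.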
poetry install
poetry shell
pytest
The documentation is automatically generated from the content of the docs directory and from the docstrings of the public signatures of the source code. The documentation is updated and published as a GitHub Pages page automatically as part of each release.
Trigger the Draft release workflow (press Run workflow). This will update the changelog & version and create a GitHub release in Draft state.
Find the draft release in the GitHub releases and publish it. Publishing a release triggers the release workflow, which creates a PyPI release and deploys the updated documentation.
Pre-commit hooks run all the auto-formatting (ruff format
), linters (e.g. ruff
and mypy
), and other quality
checks to make sure the changeset is in good shape before a commit/push happens.
You can install the hooks with (runs for each commit):
pre-commit install
Or if you want them to run only for each push:
pre-commit install -t pre-push
Or if you e.g. want to run all checks manually for all files:
pre-commit run --all-files
This project was generated using the wolt-python-package-cookiecutter template.