pytest-dev / pytest-subtests

unittest subTest() support and subtests fixture
MIT License

Keep track of passed tests #109

Open MariusGulbrandsen opened 1 year ago

MariusGulbrandsen commented 1 year ago

It would be nice if it were possible to keep track of which subtests passed, or whether all subtests passed.

For example:

import pytest
from pytest_subtests import SubTests

def test_something(subtests: SubTests):
    with subtests.test(msg="some subtest"):
        assert False

    with subtests.test(msg="some subtest"):
        assert True

    # Suggested additional code
    assert subtests.passed, "Some subtests failed"

This would make it possible to explicitly fail a test based on failed subtests. Sometimes a test is made up of just subtests, and it looks misleading that my test PASSED while its subtests failed. So it didn't actually pass.

tucked commented 2 months ago

FWIW, this can be worked around like so:

import pytest
from pytest_subtests import SubTests

def subtest_something1():
    assert False

def subtest_something2():
    assert True

def test_something(subtests: SubTests):
    failed = 0
    with subtests.test(msg="some subtest"):
        try:
            subtest_something1()
        except Exception:
            failed += 1
            raise
    with subtests.test(msg="some subtest"):
        try:
            subtest_something2()
        except Exception:
            failed += 1
            raise

    # Suggested additional code
    assert failed == 0, "Some subtests failed"

(Obviously, pulling the subtest logic into separate functions isn't strictly required.)
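
For instance, here is a minimal inline sketch of the same counting idea, without helper functions (the loop and case names are made up for illustration):

from pytest_subtests import SubTests

def test_something_inline(subtests: SubTests):
    failed = 0
    for i in range(3):
        with subtests.test(msg=f"case {i}"):
            try:
                assert i != 1, "the failing case"
            except Exception:
                # Count the failure, then re-raise so the subtest
                # is still reported as failed.
                failed += 1
                raise

    # Fail the enclosing test if any subtest failed.
    assert failed == 0, "Some subtests failed"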

ryangalamb commented 2 months ago

Here's a fixture that gets something close to what @MariusGulbrandsen asked for.

It needs to go in your conftest.py (the hook implementation has to live there to be registered).

# content of conftest.py

import pytest
from pytest_subtests.plugin import SubTestReport

_ARE_SUBTESTS_PASSING_KEY = pytest.StashKey[bool]()

@pytest.hookimpl(wrapper=True)
def pytest_exception_interact(node, call, report):
    if not isinstance(report, SubTestReport):
        return (yield)
    # Track if subtests are passing.
    # `pytest-subtests` calls this hook to register subtest failures.
    node.stash[_ARE_SUBTESTS_PASSING_KEY] = False
    return (yield)

@pytest.fixture()
def are_subtests_passing(request):
    """Return `True` if all `pytest-subtests` tests are passing."""
    # This will get set to `False` whenever a subtest fails.
    request.node.stash[_ARE_SUBTESTS_PASSING_KEY] = True

    # Use this as a normal function, but complain if the caller accidentally coerces it to a bool.
    class _AreSubTestsPassingFixture:
        def __call__(self) -> bool:
            return request.node.stash[_ARE_SUBTESTS_PASSING_KEY]

        def __bool__(self):
            # Treat this as a typo (because it probably is.)
            raise TypeError("Did you mean 'are_subtests_passing()'?")

    return _AreSubTestsPassingFixture()

Then call the are_subtests_passing fixture in your tests like so:

def test_some_stuff(subtests, are_subtests_passing):
    assert are_subtests_passing() is True

    with subtests.test("this fails"):
        assert False
    assert are_subtests_passing() is False

I removed the ability to coerce the fixture into a bool because I caught myself accidentally using it directly in conditionals (e.g., if are_subtests_passing: ...).
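
For the original use case, one could then end a test with a final assertion so the enclosing test also fails when any subtest failed. A sketch built on the fixture above (the values and message string are illustrative):

def test_all_subtests_must_pass(subtests, are_subtests_passing):
    for value in (1, 2, 3):
        with subtests.test(msg=f"value={value}"):
            assert value != 2  # this subtest fails

    # Fail the enclosing test if any subtest failed.
    assert are_subtests_passing(), "Some subtests failed"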