Closed pytestbot closed 1 year ago
While this is a nice feature, it shouldn't block 3.0. I'm entirely removing the milestone because this sounds like a nice-to-have to me.
Good feature)) But how would you count the asserts? By parsing the test files? In that case you can't be sure that all the asserts were actually executed.
Probably the only way would be parsing, but TBH I don't see why showing the number of asserts would be useful...
@nicoddemus btw, we could create a plugin for pytest, something like "pytest-asserts", with a pytest.assert() function. That way we could count asserts and add some features like "clever" assertions.
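As a rough sketch of the idea above: a literal `pytest.assert()` is impossible because `assert` is a Python keyword, so such a helper would need another name. All names below (`verify`, `assert_count`) are hypothetical, not part of pytest:

```python
# Hypothetical counting assertion helper -- not pytest API.
# A plugin could expose something like this instead of a (keyword-clashing)
# pytest.assert() function.
_assert_count = 0

def verify(condition, message=""):
    """Assert `condition` and keep a running count of checks made."""
    global _assert_count
    _assert_count += 1
    assert condition, message

def assert_count():
    """Return how many assertions have been made so far."""
    return _assert_count
```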
What would be the advantage over pytest's assertion rewriting?
You can create functions for different assertion types or behaviours, like asserting objects A() == B(). And users can write custom matchers easily. And also you can get the number of asserts at the end of the tests =)
You can create functions for different assertion types or behaviours, like asserting objects A() == B(). And users can write custom matchers easily.
You can already do that by implementing the pytest_assertrepr_compare hook. 😉
And also you can get the number of asserts at the end of the tests =)
Which TBH doesn't seem so useful, especially if you have to give up using plain asserts, which is one of pytest's killer features. 😉
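For reference, a minimal pytest_assertrepr_compare hook (a real pytest hook, placed in conftest.py) might look like this; the Point class is a made-up example, and note the hook is only invoked for failing comparisons:

```python
# conftest.py -- illustrative sketch; Point is a hypothetical example class.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)


def pytest_assertrepr_compare(config, op, left, right):
    """Custom failure explanation for `assert Point(...) == Point(...)`."""
    if isinstance(left, Point) and isinstance(right, Point) and op == "==":
        return [
            "Comparing Point instances:",
            f"   x: {left.x} != {right.x}",
            f"   y: {left.y} != {right.y}",
        ]
    return None  # fall back to pytest's default representation
```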
@myoung8 Can you maybe elaborate why this'd be useful for you?
The number of assertions is a quality indicator comparable to lines of code: it says next to nothing about the quality of a test.
@RonnyPfannschmidt I think the point is probably that the number of assertions is more indicative of test quality than the number of tests, meaning 50 tests with a single assertion each is probably a lower quality test suite than 25 tests with 4-5 assertions each.
Separating assertions out into more tests is probably a better strategy most of the time anyway, but sometimes the expense of creating test fixtures makes tests with many assertions much more performant (and easier to maintain).
@RonnyPfannschmidt I think the point is probably that the number of assertions is more indicative of test quality than the number of tests, meaning 50 tests with a single assertion each is probably a lower quality test suite than 25 tests with 4-5 assertions each.
I'm with @RonnyPfannschmidt on this one, I don't think this is a good indicator of test suite quality (if at all).
Out of curiosity, how is it a worse indicator than just the base number of tests? The latter doesn't really indicate anything about test quality either.
I don't think anybody is saying that one is better than the other, just that they are both useless as indicators go, that's all. 😉
@nicoddemus
You can create functions for different assertion types or behaviours, like asserting objects A() == B(). And users can write custom matchers easily.
You can already do that by implementing the pytest_assertrepr_compare hook.
Updated link: https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_assertrepr_compare
At least according to the docs, it only gets called for failing assertions though?!
I think counting assertions might work via the assertion rewriting itself, i.e. count the number of invocations from there, but I agree that it is not that useful (and comes with costs).
I suggest closing this as won't fix.
I think this feature is quite useful but not for test quality. It's an additional check for me that my test did what I think it did. While I always make sure a test is a good failing test when I first write it, it's not always practical to go back and make sure every assertion is being run.
It's possible to write a buggy test that actually skips your assertions, and an assertions-ran count would be a useful cross-check.
FWIW we now have the pytest_assertion_pass hook, so it should be possible to count both passing and failed assertions in a plugin; this count might be used to print a summary at the end of the test session, for example.
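As a rough sketch of that idea (the `passed_asserts` name is mine; this counts only assertions that pass, since pytest_assertion_pass fires only for passing asserts and requires `enable_assertion_pass_hook = true` in pytest.ini; failed asserts would have to be gathered from test reports):

```python
# conftest.py -- sketch: count passing assertions and print a session summary.
from collections import Counter

passed_asserts = Counter()  # nodeid -> number of passed assert statements


def pytest_assertion_pass(item, lineno, orig, expl):
    # Called once per passing assert (with enable_assertion_pass_hook=true).
    passed_asserts[item.nodeid] += 1


def pytest_terminal_summary(terminalreporter, exitstatus, config):
    total = sum(passed_asserts.values())
    terminalreporter.write_line(f"assertions passed: {total}")
```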
This could be a little off the mark, but I want to make the case for why someone would want to count the number of assertions. Imagine a feature similar to ava's assertion planning:
https://github.com/avajs/ava/blob/master/docs/03-assertions.md#assertion-planning
// These examples will result in a passed test:
test('resolves with 3', t => {
  t.plan(1);
  return Promise.resolve(3).then(n => {
    t.is(n, 3);
  });
});

test.cb('invokes callback', t => {
  t.plan(1);
  someAsyncFunction(() => {
    t.pass();
    t.end();
  });
});

// These won't:
test('loops twice', t => {
  t.plan(2);
  for (let i = 0; i < 3; i++) {
    t.true(i < 3);
  }
}); // Fails, 3 assertions are executed, which is too many

test('invokes callback synchronously', t => {
  t.plan(1);
  someAsyncFunction(() => {
    t.pass();
  });
}); // Fails, the test ends synchronously before the assertion is executed
How would one do assertion planning like the above with pytest_assertion_pass as it is now? Is it possible?
I would use this to allow contract-based stub tests:
pytest.plan(2)  # or possibly call some plugin, fixture method, or self method in a test class????

class MockDocumentStore:
    def __init__(self, *args, **kwargs):
        pass

    def get_page_id(self, *args, **kwargs):
        return 1234

    def update_page(self, *args, **kwargs):
        assert "body" in args
        raise http_error_cls("hammer time")

with pytest.raises(HttpException):
    package.module.share("user", "password", "body", DocumentStoreImpl=MockDocumentStore)
Essentially, this style of testing moves away from relying on .called_with-style functions. Not the typical Python style, I know, but I've really started liking this style of testing.
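A self-contained version of the stub idea above might look like the following; note that `share`, `HttpException`, and the store classes are all stand-ins invented for illustration (the real `package.module.share` is not shown in this thread), and `pytest.plan` does not exist:

```python
import pytest


class HttpException(Exception):
    """Stand-in for whatever HTTP error the real code raises."""


def share(user, password, body, DocumentStoreImpl):
    """Toy stand-in for package.module.share: look up a page and update it."""
    store = DocumentStoreImpl(user, password)
    page_id = store.get_page_id(body)
    store.update_page(page_id, body)


class MockDocumentStore:
    def __init__(self, *args, **kwargs):
        pass

    def get_page_id(self, *args, **kwargs):
        return 1234

    def update_page(self, *args, **kwargs):
        # Contract check: the caller must pass the body through.
        assert "body" in args
        raise HttpException("hammer time")


def test_share_propagates_http_error():
    with pytest.raises(HttpException):
        share("user", "password", "body", DocumentStoreImpl=MockDocumentStore)
```

Planning (e.g. `pytest.plan(2)`) would then verify that both the `assert "body" in args` contract check and the `pytest.raises` check actually ran.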
Curious if more work has been done around this. Particularly interested in things like assertion planning or meta assertions via the JS community.
@myoung8 Can you maybe elaborate why this'd be useful for you?
I would have welcomed this in order to know that the tests I wrote are actually executed. As someone already mentioned, it is not always realistic to first check that all assertions will fail. Sometimes I call assertions in loops over different data, and it would just be nice to see an increasing counter when I add new "sub"-tests to my tests. Also, I'm used to this metric from Catch2, and I don't want to write a plugin for pytest; I'm using pytest to avoid writing my own test framework in the first place.
With the addition of the pytest_assertion_pass hook, this can be experimented with.
/cc @kevinkjt2000 @RonnyPfannschmidt
I experimented a bit with this, trying to follow the ava pattern of handling plan. I'm just familiarizing myself with pytest plugins, so there are probably improvements that can be made to how these actually get reported. Possibly we can iterate in this thread and then release it as a formalized installable plugin. Here's my conftest.py so far.
import warnings

import pytest

assertions = pytest.StashKey[int]()


def pytest_configure(config):
    config.addinivalue_line(
        "markers", "plan(count): mark test to expect an exact number of asserts"
    )
    if not config.getini("enable_assertion_pass_hook"):
        pytest_option_required = (
            "Add 'enable_assertion_pass_hook=true' to pytest.ini to use `pytest-plan`."
        )
        warnings.warn(pytest_option_required)


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    result = outcome.get_result()
    marker = item.get_closest_marker("plan")
    if marker:
        if result.when == "call" and result.passed:
            expected_asserts = marker.args[0]
            # Use .get() so a test with zero passing asserts doesn't KeyError.
            asserts = item.stash.get(assertions, 0)
            if asserts != expected_asserts:
                result.outcome = "failed"
                filename, lineno, _ = result.location
                result.longrepr = (
                    f"{filename}:{lineno}: planned {expected_asserts} assertions, ran {asserts}"
                )


def pytest_assertion_pass(item, lineno, orig, expl):
    item.stash[assertions] = item.stash.get(assertions, 0) + 1
And sample usage, test_script.py:
import pytest


@pytest.mark.plan(3)
def test_fail():
    assert True
    assert True
# fails with planned assertion mismatch (output could be improved)


@pytest.mark.plan(102)
def test_success():
    for _ in range(100):
        assert True
    assert True
    assert True
# passes


@pytest.mark.plan(2)
def test_fail_good_plan():
    assert True
    assert False
# fails with regular testcase assertion


@pytest.mark.plan(2)
def test_fail_bad_plan():
    assert True
    assert True
    assert False
# fails with regular testcase assertion
Edited to use Node.stash based on feedback.
There should be an autouse fixture for this instead of putting things on item
Closing this issue because the pytest_assertion_pass hook makes it feasible to implement this in a third-party plugin, and I think there's enough disagreement about whether showing (or checking) the number of asserts is desirable that it should not go into pytest itself.
Originally reported by: Michael Young (BitBucket: myoung8, GitHub: myoung8)
In addition to the number of test cases run, it would be great to see the number of assertions.