pytest-dev / pytest

The pytest framework makes it easy to write small tests, yet scales to support complex functional testing
https://pytest.org
MIT License

Option to display markers in `--collect-only` output (in order to `grep` for all skipped tests) #1509

Open nitrocode opened 8 years ago

nitrocode commented 8 years ago

We're skipping many of our tests and I think this is due to some missing functionality.

In order to see all the skipped tests, I have to run all the tests in verbose mode, store the output, and `grep -E 'xfail|xpass'` in order to see all of the specific tests that are skipped.

If there isn't already a better way to do this: I was thinking, what if the `--collect-only` output could also show markers like skip, or even custom markers, for each test? That way I could easily collect the tests and grep for skipped ones without having to run all of them again.

The-Compiler commented 8 years ago

I'm pretty sure you could implement some pytest hook to check this (perhaps pytest_collection_modifyitems) and print the skipped items.

However, note that a test can call pytest.skip() inside the test as well, and you can't know about this while collecting.
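A minimal sketch of that suggestion, as a `conftest.py` hook (the set of mark names to report is an assumption; adjust it for custom markers):

```python
# conftest.py -- sketch: after collection, print every test that
# carries a skip-related marker. As noted above, pytest.skip() called
# *inside* a test body cannot be detected at collection time.

SKIP_MARKS = {"skip", "skipif", "xfail"}


def marker_names(item):
    """Set of marker names on a collected item; works with any object
    exposing iter_markers(), as pytest items do."""
    return {mark.name for mark in item.iter_markers()}


def pytest_collection_modifyitems(config, items):
    # Runs after collection, before tests execute.
    for item in items:
        found = marker_names(item) & SKIP_MARKS
        if found:
            print(f"{item.nodeid}  [{', '.join(sorted(found))}]")
```

Running `pytest --collect-only` with this in place would then list the skip-marked node ids alongside the normal output.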

Zac-HD commented 6 years ago

You can select tests by markers with e.g. `pytest -m skip`.

RonnyPfannschmidt commented 6 years ago

Marker-based deselection doesn't match logic-based deselection, thus reopening.

Zac-HD commented 6 years ago

The question is about seeing which tests have `mark.skip` applied, though? Both of the following work for me:

```
pytest --quiet --collect-only -m skip
pytest --quiet --collect-only -m "not xfail and not skip"
```

What more needs to work in order to close this issue? If it's a new or enhanced feature that should probably have a new issue to describe the current and desired functionality.

RonnyPfannschmidt commented 6 years ago

But this won't know if a skipif actually triggers.
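The point being that a skipif condition is arbitrary Python evaluated at run time, so collection alone can't settle it. A rough illustration (`skipif_triggers` is a hypothetical helper, not pytest's actual implementation; pytest's real evaluation of string conditions also injects the test module's namespace):

```python
import sys


def skipif_triggers(condition, namespace):
    """Roughly how a skipif condition gets decided: string conditions
    are evaluated as Python in a namespace, anything else is simply
    truth-tested. Hypothetical helper, not pytest internals."""
    if isinstance(condition, str):
        return bool(eval(condition, dict(namespace)))
    return bool(condition)


# The answer depends on the environment at run time, not on anything
# visible during collection:
on_windows = skipif_triggers("sys.platform == 'win32'", {"sys": sys})
```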

nicoddemus commented 6 years ago

> If there isn't a better way to already do this. I was thinking what if in the --collect-only output we could also see markers like skip or even custom markers for each test.

So it seems this issue relates to two separate behaviors: showing marks during `--collect-only` (any mark, not only skip or skipif), and listing only the tests which would actually be skipped.

@nitrocode could you chime in on which behavior you are interested in?

nitrocode commented 6 years ago

@nicoddemus Hi everyone. `--collect-only` kind of seems like a `--dry-run` option, so the implication is that when you run it, it scans through all the tests and shows which ones will run. It would be nice if it also showed, by default, the tests that have been explicitly skipped.

If, in addition to that, you could also pass in a specific mark, that would be a bonus.

cj-timothy-sampson commented 1 year ago

Came here to file this feature request and spotted this ticket, so commenting instead. I hacked this up with the following five-minute change:

```diff
--- a/src/_pytest/terminal.py
+++ b/src/_pytest/terminal.py
@@ -783,7 +783,8 @@ class TerminalReporter:
                     self._tw.line("%s: %d" % (name, count))
             else:
                 for item in items:
-                    self._tw.line(item.nodeid)
+                    marks = {m.name for m in item.iter_markers() if m.name != "parametrize"}
+                    self._tw.line(f"{item.nodeid} marks: {','.join(marks)}")
             return
         stack: List[Node] = []
         indent = ""
```
Would a polished-up version of that be something that might be considered for inclusion? (Note again that the above is basically the naivest possible solution; the question is whether a change to the `--collect-only` output would be acceptable in general.) Or is this more the kind of thing that should be handled by a plugin?

We use markers quite heavily, and it would be good to be able to run pytest once in collect-only mode and then grep the output, as opposed to running multiple times with different `-m whatever` and comparing the various outputs.
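For reference, the line-formatting step of the patch above can be isolated as a small pure helper, which would make a polished version easy to test in isolation (a sketch; the function name is made up):

```python
def format_item_line(nodeid, marker_names):
    """Render one --collect-only line with its marks, mirroring the
    patch above: the auto-applied "parametrize" mark is hidden and the
    remaining names are comma-joined (sorted here for stable output,
    unlike the set iteration in the raw patch)."""
    marks = sorted(set(marker_names) - {"parametrize"})
    return f"{nodeid} marks: {','.join(marks)}"


print(format_item_line("tests/test_a.py::test_b[1]", ["parametrize", "slow", "skip"]))
# prints: tests/test_a.py::test_b[1] marks: skip,slow
```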

RonnyPfannschmidt commented 1 year ago

This looks like something that could work.

We've got to figure out how to show a few things:

- all markers vs. an item's own markers
- whether to enable this only at higher verbosity
- if/when to show or filter markers

Luckily those problems can be solved in progression.

RonnyPfannschmidt commented 1 year ago

I propose as a starting point something that's opt-in via verbosity and fits your use case; then we can iterate.

stenvala commented 10 months ago

Any progress on this? It could also be quite a nice feature for test-discovery purposes. For example, we have a general framework for running various integration tests in AWS Lambda that are actually composed of multiple tests. The tests are provided as a zip package, and we use test discovery to give the user a UI view of the available test (sets).

Some tests trigger asynchronous workloads, and we just need to wait for them to be ready. We don't want to wait inside Lambda, so we kind of pause the test and continue with the next one later; we use a check pattern and delay with an SQS queue to implement this (I know there could be more elegant ways, but this is one way). It would be nice to know what kind of waits a test requires. We can currently mark tests like `@pytest.mark.sleep600` for a 600 s wait, but we would like to have different wait patterns, and finding these with `pytest --collect-only -m sleep600 -q` (and repeating that for every possible wait, how many retries are supported, etc.) is not so nice. It would be nice if `pytest --collect-only` had some parameter to display all markers of the tests.
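One way to avoid encoding the wait in the marker name (`sleep600`) is to pass it as a marker argument, which a collection hook can then read uniformly instead of grepping per-name. A sketch, assuming a hypothetical custom `sleep` marker (it would need registering in pytest.ini to avoid warnings):

```python
# Tests would be marked like @pytest.mark.sleep(600) instead of
# @pytest.mark.sleep600; "sleep" is a hypothetical custom marker.


def wait_seconds(marks):
    """Return the wait declared by a sleep(<seconds>) marker, or None.

    marks: iterable of objects with .name and .args, like the Mark
    objects yielded by item.iter_markers().
    """
    for mark in marks:
        if mark.name == "sleep" and mark.args:
            return mark.args[0]
    return None
```

A hook could then collect `{item.nodeid: wait_seconds(item.iter_markers())}` in one pass, covering every wait pattern at once.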

carlcsaposs-canonical commented 6 months ago

+1, it'd be helpful for us to be able to check if a test was skipped during pytest_collection_modifyitems.

We use pytest_collection_modifyitems to determine how many machines to provision for CI, and it'd be great to avoid provisioning machines for tests that were skipped before they started (i.e. with `pytest.mark.skip` or `pytest.mark.skipif`).

Might be missing something, but I don't see a way to do this without iterating through the markers ourselves and re-implementing parts of pytest to figure out whether a test will be skipped

More info about our use case: each Python module gets its own machine (and we added a `pytest.mark.group()` marker to allow multiple machines within a Python module). `pytest_collection_modifyitems`:

- is called with a `--collect-groups` option ([enables --collect-only mode](https://github.com/canonical/data-platform-workflows/blob/f86cfdfbc92c929928c0722e7542867db0b092cd/python/pytest_plugins/pytest_operator_groups/pytest_operator_groups/_plugin.py#L28-L29))
- iterates through test functions and outputs a JSON-encoded list of test groups (each group gets its own machine): https://github.com/canonical/data-platform-workflows/blob/f86cfdfbc92c929928c0722e7542867db0b092cd/python/pytest_plugins/pytest_operator_groups/pytest_operator_groups/_plugin.py#L163-L165
- is used in GitHub Actions to provision 1 machine for each group: https://github.com/canonical/data-platform-workflows/blob/f86cfdfbc92c929928c0722e7542867db0b092cd/.github/workflows/integration_test_charm.yaml#L140-L143

If all tests in a group are skipped, we'd like to be able to detect that in `pytest_collection_modifyitems` so we don't have to provision a machine that no tests will run on. (In other words, we'd like to be able to check whether a single test was skipped in `pytest_collection_modifyitems`.)
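A sketch of the workaround described above: re-implementing just enough of the skip logic to decide, inside `pytest_collection_modifyitems`, whether an item is statically guaranteed to be skipped. This assumes skipif conditions are plain booleans; string conditions would need evaluating the way pytest does internally, and `pytest.skip()` calls inside test bodies remain invisible at this stage.

```python
def statically_skipped(marks):
    """True if the marks alone guarantee a skip: an unconditional skip
    mark, or a skipif whose (boolean) condition already holds.

    marks: iterable of objects with .name and .args, like the Mark
    objects yielded by item.iter_markers(). This is a sketch, not
    pytest's full skipping logic.
    """
    for mark in marks:
        if mark.name == "skip":
            return True
        if mark.name == "skipif" and any(bool(c) for c in mark.args):
            return True
    return False
```

A group would then need a machine only if `any(not statically_skipped(item.iter_markers()) for item in group_items)`.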