pytest-dev / pytest-bdd

BDD library for the pytest runner
https://pytest-bdd.readthedocs.io/en/latest/
MIT License

How to check if a feature or scenario is not implemented as a test? #581

Closed · nullhack closed 2 years ago

nullhack commented 2 years ago

Hi, first of all, thank you for the package; I like being able to integrate BDD into pytest.

The issue I am trying to solve is:

I want to force pytest to fail if one or more features (ideally individual scenarios) are not implemented in any test module.

I tried to find this information in the documentation, in the issues, and on Google, but either I am searching for the wrong keywords, missing something, or there is no way to achieve this without writing my own implementation.

What I could find so far are the scenarios() helper and the --generate-missing command-line option:

My issue with scenarios() is that (from the tests I've performed) I need to put the test implementations for all of a feature's scenarios in the same module. The behavior I am trying to achieve is one test function that just checks whether every scenario is implemented in at least one test under the test folders and, if not, raises a NotImplementedError. --generate-missing is the closest, but it would need some extra code to check its output and fail if something is not implemented.
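Something along these lines might cover that last gap (a rough sketch, not an official pytest-bdd API; the "features" and "tests" paths are examples, and the "is not bound to any test" phrase is what --generate-missing printed in the versions I looked at, so it may change):

# check_missing.py - fail the build when any scenario is unbound.
# Hypothetical helper script, to be run instead of a plain pytest call.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "--generate-missing",
     "--feature", "features", "tests"],
    capture_output=True,
    text=True,
)

# With --generate-missing, pytest reports scenarios that no test binds
# instead of running the suite; treat any such report as a failure.
if "is not bound to any test" in result.stdout:
    print(result.stdout)
    raise SystemExit("Some scenarios are not implemented as tests")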

What is the recommended way to achieve this using currently implemented pytest-bdd functions? Is there a recommended workflow to ensure pytest-bdd will not silently ignore features/scenarios that are not implemented?

Thank you

nullhack commented 2 years ago

For anyone that is thinking about using pytest-bdd but is leaning towards behave because of this use case, I managed to find a good enough solution (at least for my use) and I am posting here for reference:

The solution I found is to use a conftest file that generates .py files mirroring the features folder structure, filling each file using the scenarios approach. Each file binds the scenarios of exactly one feature and is only created if it does not already exist, so we keep the same structure as the features folder and force one feature implementation per file, without having to create each file manually after every new feature. This script is not perfect; it will fail silently if, for example, there is another test file with the same name, but you can adapt it for the cases that matter to you.

To use it, just create a conftest.py file in any sub-folder inside your tests folder as follows:

# conftest.py
from pathlib import Path

from pytest_bdd import feature
from pytest_bdd.scenario import get_features_base_dir
from pytest_bdd.utils import get_caller_module_path

def pytest_configure(config):
    conftest_dir = Path(__file__).parent
    caller_module_path = get_caller_module_path()
    features_base_dir = get_features_base_dir(caller_module_path)

    # Collect every feature under the features base dir.
    features = feature.get_features([features_base_dir])

    for feat in features:
        # Mirror the features folder structure under a "steps" folder
        # next to this conftest.
        feature_dir = Path(feat.filename).parent
        file_dir = (
            conftest_dir / "steps" / feature_dir.relative_to(features_base_dir)
        )
        file_name = Path(feat.filename).stem + "_test.py"
        file_path = file_dir / file_name

        # The generated module binds all scenarios of this one feature.
        feature_rel_path = Path(feat.filename).relative_to(features_base_dir)
        txt = (
            "from pytest_bdd import scenarios"
            "\n\n"
            f'scenarios("{feature_rel_path}")'
        )

        file_dir.mkdir(parents=True, exist_ok=True)

        # Only create the module if it does not exist yet, so manual
        # implementations are never overwritten.
        if not file_path.exists():
            file_path.write_text(txt)
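For example, given a feature file at the root of the features base dir named login.feature (a hypothetical name), the conftest above would create steps/login_test.py containing:

from pytest_bdd import scenarios

scenarios("login.feature")

Because existing files are never overwritten, you can replace any generated module with a hand-written implementation and the script will leave it alone on the next run.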

EDIT: I worked on a Python project template that includes this. If you're interested, check it out; it auto-generates documentation like this: https://nullhack.github.io/python-project-template/

nullhack commented 2 years ago

Closing this issue as I found a good enough workaround with conftest.