smarie / python-pytest-steps

A tiny package to ease the creation of test steps with shared intermediate results/state.
https://smarie.github.io/python-pytest-steps/
BSD 3-Clause "New" or "Revised" License

marking pytest.fixture as a step in the final report #17

Closed by rplevka 5 years ago

rplevka commented 5 years ago

Hey, first of all, I'm a big fan of your plugin. I'm trying to use it on top of a parametrized test, but all parameter combinations share a common setup, which I naturally tried to move into a pytest fixture so that it is performed just once. However, this way I can't mark those setup actions as a step, so the final report doesn't show that they happened.

What I'd like to achieve is to end up with the following report (given test_e2e with params ['param1', 'param2'], steps ['step1', 'step2'], and a fixture fixture1):

collected 5 items                                                                            

test.py::test_e2e[fixture1]     PASSED                                           [ 20%]
test.py::test_e2e[step1-param1] PASSED                                           [ 40%]
test.py::test_e2e[step1-param2] PASSED                                           [ 60%]
test.py::test_e2e[step2-param1] PASSED                                           [ 80%]
test.py::test_e2e[step2-param2] PASSED                                           [100%]

=============================== 5 passed in 0.02 seconds ===============================

Do you think this is achievable in any way? (I don't insist on using fixtures, I just don't want to generate that setup step for every parameter.)

rplevka commented 5 years ago

Or, to rephrase: can I parametrize the individual steps?

smarie commented 5 years ago

Hi @rplevka, thanks for the feedback!

I am not sure you are using fixtures for what they are meant for: a fixture is a fixed object that is created before any test runs, and the tests run against it. You can see it as the "starter kit" of your tests. So you are right to put everything that should be done as "setup" there: fixture instantiation is not part of the test.
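
For illustration only, here is a minimal sketch of a module-scoped setup fixture that runs once for all the tests using it (the names system_under_test and test_something are hypothetical):

import pytest

@pytest.fixture(scope='module')
def system_under_test():
    # hypothetical one-time setup, performed before the first test that uses this fixture
    sut = {'ready': True}
    yield sut
    # teardown, performed once after the last test in the module
    sut['ready'] = False

def test_something(system_under_test):
    # the test runs against the already-initialised fixture
    assert system_under_test['ready']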

Now, if we forget about fixtures, you can simply do:

import pytest
from pytest_steps import test_steps

@test_steps('step1', 'step2')
@pytest.mark.parametrize('param', [1, 2], ids=str)
def test_e2e(param):
    # this is step 1
    yield
    # this is step 2
    yield

This will generate the two steps for each parameter value, i.e. 4 tests.

You can also reverse the order of @test_steps and @pytest.mark.parametrize if you wish to get the same ordering as in your example:

@pytest.mark.parametrize('param', [1, 2], ids=str)
@test_steps('step1', 'step2')
def test_e2e(param):
    # this is step 1
    yield
    # this is step 2
    yield

I hope that helps?

rplevka commented 5 years ago

@smarie thanks for your response. I know I tried to 'misuse' the fixtures there. I did so because I wanted to make use of the fixture nesting ability, where each fixture might carry different parameters. The goal is to prevent a "full matrix execution": the common paths from the top of the tree should not be duplicated for each parameter, and the paths should only split at the desired level. This is why I'm asking whether I can parametrize the individual steps. I'm not sure I made myself clear, so I'll paste a diagram of the desired "execution tree": [diagram: parametrized_steps]

instead of: [diagram: param_tests]

Perhaps the way to go is to make the common steps standalone tests (I just need to make pytest execute them in the same order, or to set some 'dependency' relationship between them so they are not distributed to different workers if xdist is used, etc.).

The motivation behind this is that I write system tests (black box) and every path is really expensive, so I'd like to reuse the common steps and mark the tests as dependent on each other.

smarie commented 5 years ago

I see your point. This scenario is clearly not what pytest-steps was designed for. I think the best workaround we can reach as of today (xdist might not work, however) would be to split the scenario into several test functions that share their intermediate results through a module-scoped fixture, and to skip the downstream steps whenever their prerequisites have not run successfully (see the full example below).

Not ideal, but it should work. If you think there is a way to make this easier from a user API perspective, we can discuss how to propose it in pytest-steps; but my fear is that it would be quite difficult to find an easy way for users to declare all of that with decorators.

rplevka commented 5 years ago

@smarie yeah, I can see more clearly now that this is really out of the scope of this plugin. Thanks for the suggestions though, they are really inspirational.

smarie commented 5 years ago

It seems to work:

import pytest
from pytest_steps import test_steps

@pytest.fixture(scope='module')
def results_dct():
    # module-scoped dict used to share intermediate results across the test functions
    return dict()

@test_steps('step1', 'step2')
def test_1_2(results_dct):
    # step 1
    results_dct['step1'] = 1
    yield
    # step 2
    results_dct['step2'] = 'hello'
    yield

@test_steps('step3', 'step4')
@pytest.mark.parametrize('p', ['a', 'b'], ids="p={}".format)
def test_3_4(p, results_dct):
    if 'step2' not in results_dct:
        pytest.skip("Can not start step 3: step 2 has not run successfuly")
    # step 3
    results_dct.setdefault('step3', dict())[p] = 'bla'
    if p == 'b':
        # let's make this step fail so that we see the rest of the tree fail
        assert False
    yield
    # step 4
    results_dct.setdefault('step4', dict())[p] = 'blabla'
    yield

@test_steps('step5', 'step6')
@pytest.mark.parametrize('q', ['a', 'b'], ids="q={}".format)
@pytest.mark.parametrize('p', ['a', 'b'], ids="p={}".format)
def test_5_6(p, q, results_dct):
    if 'step4' not in results_dct:
        pytest.skip("Cannot start step 5: step 4 has not run successfully")
    elif p not in results_dct['step4']:
        pytest.skip("Cannot start step 5: step 4 has not run successfully for p=%s" % p)
    # step 5
    yield
    # step 6
    yield

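In this example, step3 deliberately fails for p=b, so results_dct['step4'] never gets an entry for 'b'; the test_5_6 runs that need p=b are therefore skipped, while the p=a branch runs to completion:
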
[screenshot of the resulting test report]

rplevka commented 5 years ago

@smarie, now that is really cool. Thanks for trying that out. I think I'll try to use pytest-dependency to make it more readable, but I wouldn't be surprised if it worked the same way under the hood.
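
For reference, a rough, untested sketch of what I have in mind with pytest-dependency (the test names and the dependency chain are illustrative; it reuses a module-scoped results_dct dict like in your example):

import pytest

@pytest.fixture(scope='module')
def results_dct():
    # shared state across the chained tests, as in the example above
    return dict()

@pytest.mark.dependency()
def test_1_2(results_dct):
    # steps 1 and 2
    results_dct['step2'] = 'hello'

@pytest.mark.dependency(depends=["test_1_2"])
@pytest.mark.parametrize('p', ['a', 'b'])
def test_3_4(p, results_dct):
    # steps 3 and 4; skipped automatically by pytest-dependency if test_1_2 did not pass
    results_dct.setdefault('step4', dict())[p] = 'blabla'

@pytest.mark.dependency(depends=["test_3_4[a]", "test_3_4[b]"])
def test_5_6(results_dct):
    # steps 5 and 6; skipped unless both parametrized runs of test_3_4 passed
    assert 'step4' in results_dct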

I guess I can close the issue now. Thanks again.

smarie commented 5 years ago

Thanks for the feedback! If you find a way to make it work with pytest-dependency, would you be so kind as to post an equivalent example here? I plan to release a "pytest-patterns" project shortly, where I will collect example design patterns for typical pytest tasks. I think this would fit perfectly!

rplevka commented 5 years ago

Sure thing.