pytest-dev / pytest-bdd

BDD library for the pytest runner
https://pytest-bdd.readthedocs.io/en/latest/
MIT License

Logging of step progress/failures? #117

Closed: lrowe closed this issue 9 years ago

lrowe commented 9 years ago

I'm probably missing something obvious, but is there a plain text reporting option that logs progress step by step to stdout?

bubenkoff commented 9 years ago

If you mean scenario by scenario, then just add -v. We don't do anything special for stdout reporting; pytest handles it: https://pytest.org/latest/usage.html

bubenkoff commented 9 years ago

There's also a nice plugin for console sugar: https://github.com/Frozenball/pytest-sugar

lrowe commented 9 years ago

I'm really looking for feedback per step, as browser steps can take some time. This kind of works, but it gets caught up in the stdout capturing:

# pytest-bdd hook implementations (e.g. in a conftest.py) that write progress
# straight to the terminal reporter
def pytest_bdd_before_scenario(request, feature, scenario):
    # announce the scenario as soon as it starts
    terminal = request.config.pluginmanager.getplugin("terminalreporter")
    report = request.node.__scenario_report__
    line = 'Scenario: {scenario.name}'.format(scenario=report.scenario)
    terminal.write_line(line)

def pytest_bdd_step_error(request, feature, scenario, step, step_func, step_func_args, exception):
    # report the failing step immediately
    terminal = request.config.pluginmanager.getplugin("terminalreporter")
    report = request.node.__scenario_report__.current_step_report
    line = 'Step: {step.name} FAILED'.format(step=report.step)
    terminal.write_line(line)

def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    # report each passing step immediately
    terminal = request.config.pluginmanager.getplugin("terminalreporter")
    report = request.node.__scenario_report__.current_step_report
    line = 'Step: {step.name} PASSED'.format(step=report.step)
    terminal.write_line(line)

bubenkoff commented 9 years ago

import pytest

def write_line(request, line):
    """Write line instantly, bypassing pytest's output capturing."""
    terminal = request.config.pluginmanager.getplugin('terminalreporter')
    capman = request.config.pluginmanager.getplugin("capturemanager")
    capman.suspendcapture()
    try:
        terminal.write_line(line)
    finally:
        capman.resumecapture()

@pytest.mark.trylast
def pytest_bdd_before_scenario(request, feature, scenario):
    write_line(request, u'Scenario: {scenario.name}'.format(scenario=scenario))

@pytest.mark.trylast
def pytest_bdd_step_error(request, feature, scenario, step, step_func, step_func_args, exception):
    write_line(request, u'Step: {step.name} FAILED'.format(step=step))

@pytest.mark.trylast
def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    write_line(request, u'Step: {step.name} PASSED'.format(step=step))

lrowe commented 9 years ago

To avoid swallowing stdout/stderr I needed to add report sections:

import pytest

def write_line(request, when, line):
    """Write line instantly."""
    terminal = request.config.pluginmanager.getplugin('terminalreporter')
    capman = request.config.pluginmanager.getplugin("capturemanager")
    # suspending capture returns whatever was captured so far; attach it to the
    # report so it is not swallowed when writing to the terminal directly
    out, err = capman.suspendcapture()
    try:
        request.node.add_report_section(when, "out", out)
        request.node.add_report_section(when, "err", err)
        terminal.write_line(line)
    finally:
        capman.resumecapture()

@pytest.mark.trylast
def pytest_bdd_before_scenario(request, feature, scenario):
    write_line(request, 'setup', u'Scenario: {scenario.name}'.format(scenario=scenario))

@pytest.mark.trylast
def pytest_bdd_step_error(request, feature, scenario, step, step_func, step_func_args, exception):
    write_line(request, 'call', u'Step: {step.name} FAILED'.format(step=step))

@pytest.mark.trylast
def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    write_line(request, 'call', u'Step: {step.name} PASSED'.format(step=step))

More flexible reporting would require collecting individual steps. This is what I do in my current wrapping of behave, but I definitely prefer pytest-bdd's use of fixtures over context objects. https://github.com/ENCODE-DCC/encoded/blob/v26.0/src/encoded/tests/bdd.py
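For what it's worth, here is a rough sketch of gathering per-step detail within a scenario (short of collecting steps as separate test items), using the same pytest-bdd hooks as above; the _bdd_step_start and _bdd_steps attribute names are made up for illustration:

import time

import pytest

@pytest.mark.trylast
def pytest_bdd_before_step(request, feature, scenario, step, step_func):
    # remember when the step started
    request.node._bdd_step_start = time.time()

@pytest.mark.trylast
def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    # record the step name and how long it took
    elapsed = time.time() - getattr(request.node, '_bdd_step_start', time.time())
    if not hasattr(request.node, '_bdd_steps'):
        request.node._bdd_steps = []
    request.node._bdd_steps.append(
        u'Step: {step.name} ({elapsed:.2f}s)'.format(step=step, elapsed=elapsed))

@pytest.mark.trylast
def pytest_bdd_after_scenario(request, feature, scenario):
    # attach the collected step lines to the test report
    lines = getattr(request.node, '_bdd_steps', [])
    if lines:
        request.node.add_report_section('call', 'steps', u'\n'.join(lines))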

bubenkoff commented 9 years ago

That's cool! Still, I'm not sure why you would need instant reporting for steps; what's the use case?

lrowe commented 9 years ago

For scenarios that take a few minutes to run it is nice to see progressive feedback. It is also useful to know exactly where a run got stuck when Travis CI kills a job.

bubenkoff commented 9 years ago

Could you just avoid those kinds of scenarios in the first place? A single test should not take minutes.

bubenkoff commented 9 years ago

I'm not sure what we should do with this issue. Document how progress can be reported? Add a flag fixture and a corresponding command line option to enable reporting?
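A minimal sketch of the command line option idea, assuming the write_line helper from the comments above lives in a conftest.py; the --bdd-report option name is made up for illustration:

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--bdd-report", action="store_true", default=False,
        help="write a line to the terminal for every BDD scenario and step")

def write_line(request, line):
    """Write line instantly if step reporting was requested."""
    if not request.config.getoption("--bdd-report"):
        return
    terminal = request.config.pluginmanager.getplugin('terminalreporter')
    capman = request.config.pluginmanager.getplugin("capturemanager")
    capman.suspendcapture()
    try:
        terminal.write_line(line)
    finally:
        capman.resumecapture()

@pytest.mark.trylast
def pytest_bdd_after_step(request, feature, scenario, step, step_func, step_func_args):
    write_line(request, u'Step: {step.name} PASSED'.format(step=step))

The same getoption check would gate the before_scenario and step_error hooks shown earlier.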

lrowe commented 9 years ago

I think it's probably not worth addressing this issue directly. Perhaps revisit it in the future if feature files are ever collected directly. Removing the need for the scenario decorator / scenarios helper seems worthwhile, but doing so in a backwards-compatible way seems tricky.
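For context, the binding the last sentence refers to currently looks roughly like this; the feature file name and scenario title are placeholders:

from pytest_bdd import scenario, scenarios

# explicit binding: one test function per scenario, via the decorator
@scenario('publish_article.feature', 'Publishing the article')
def test_publish():
    pass

# or bind every scenario found under a directory with the helper
scenarios('features')

Collecting feature files directly would remove the need for this boilerplate.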