allure-framework / allure-python

Allure integrations for Python test frameworks
https://allurereport.org/
Apache License 2.0

Feature request: Generate "unknown" test results during pytest collection (allure-pytest) #821

Open carlcsaposs-canonical opened 1 month ago

carlcsaposs-canonical commented 1 month ago


I'm submitting a feature request

What is the current behavior?

allure-pytest generates a test result for each test only after that test finishes.

What is the expected behavior?

allure-pytest generates an "unknown" test result for each test at collection time and replaces that result when the test completes.
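To make the request concrete, here is a minimal sketch of the idea as a standalone conftest.py hook. It writes placeholder result files directly in the Allure 2 result-file JSON format rather than going through allure-pytest internals; the `write_unknown_result` helper is hypothetical (not part of allure-pytest), and the `config.getoption("--alluredir")` lookup is an assumption about how the option is registered.

```python
import json
import time
import uuid
from pathlib import Path


def write_unknown_result(nodeid: str, results_dir) -> Path:
    """Write a minimal Allure result file marking one test as "unknown".

    The fields (uuid, name, fullName, status, start, stop) follow the
    Allure 2 result-file format; "unknown" is one of Allure's statuses.
    This helper is a sketch, not part of allure-pytest.
    """
    results_dir = Path(results_dir)
    results_dir.mkdir(parents=True, exist_ok=True)
    result_uuid = str(uuid.uuid4())
    now = int(time.time() * 1000)  # Allure timestamps are epoch milliseconds
    result = {
        "uuid": result_uuid,
        "name": nodeid,
        "fullName": nodeid,
        "status": "unknown",
        "start": now,
        "stop": now,
    }
    path = results_dir / f"{result_uuid}-result.json"
    path.write_text(json.dumps(result))
    return path


# In a conftest.py, a standard pytest hook could emit one placeholder per
# collected item. (The "--alluredir" lookup is an assumption; adjust to the
# option's actual dest name if needed.)
def pytest_collection_modifyitems(session, config, items):
    results_dir = config.getoption("--alluredir", default=None)
    if results_dir:
        for item in items:
            write_unknown_result(item.nodeid, results_dir)
```

During a normal run, allure-pytest would then overwrite or supersede the placeholder when the test finishes; tests that never ran (timeout, crashed runner) keep their "unknown" entry in the report.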

What is the motivation / use case for changing the behavior?

Currently, on our CI, if the test step times out (because of GitHub Actions' `timeout-minutes`), the tests that did not complete are missing from the Allure report.

We also distribute our tests across multiple GitHub Actions runners (to run in parallel), and if the setup of a runner fails, all of its tests are missing from the Allure report.

Other information

We would be happy to open a PR to implement this, but we're not sure how to refactor/structure the code (currently, most of the logic that generates test results is coupled to the pytest hooks).

There are two use cases here:

  1. For normal test execution, generate "unknown" test results at collection time and replace them at execution time.
  2. For runs with `--collect-only`, generate "unknown" test results.
    • This is our use case: we run pytest collection on one runner and use the results to provision multiple runners. If any of the test-execution runners fails to set up, we would still like to show "unknown" results for all tests that were scheduled on that runner.
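The second use case could be sketched with a separate hook that fires once collection is complete, guarded so it only emits placeholders for `pytest --collect-only` runs (pytest stores that flag as `config.option.collectonly`). As above, the `--alluredir` lookup is an assumption about how the option is registered, and the JSON fields follow the Allure 2 result-file format.

```python
import json
import time
import uuid
from pathlib import Path


def pytest_collection_finish(session):
    """After collection, emit an "unknown" Allure result per collected test.

    Only runs under `pytest --collect-only`; a normal run is left to
    allure-pytest itself. Sketch only, not allure-pytest's actual behavior.
    """
    config = session.config
    results_dir = config.getoption("--alluredir", default=None)  # assumption
    if not results_dir or not config.option.collectonly:
        return
    out = Path(results_dir)
    out.mkdir(parents=True, exist_ok=True)
    for item in session.items:
        result_uuid = str(uuid.uuid4())
        now = int(time.time() * 1000)  # epoch milliseconds, per Allure format
        result = {
            "uuid": result_uuid,
            "name": item.nodeid,
            "fullName": item.nodeid,
            "status": "unknown",
            "start": now,
            "stop": now,
        }
        (out / f"{result_uuid}-result.json").write_text(json.dumps(result))
```

The collection runner would then upload these placeholder files as the baseline report, and each execution runner's real results would replace the placeholders for the tests it actually ran.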

We've actually implemented this already in a pytest plugin that imports from allure-pytest, but we would love to upstream it (if that's something you'd want) so we can de-duplicate the code and so our plugin doesn't break when allure-pytest internals change.