[//]: # (
. Note: for support questions, please use Stackoverflow or Gitter.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
**I'm submitting a ...**

- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support requests here; see the note at the top of this template.
**What is the current behavior?**

allure-pytest generates a test result for each test only after it finishes.

**If the current behavior is a bug, please provide the steps to reproduce and, if possible, a minimal demo of the problem**
**What is the expected behavior?**

allure-pytest generates an "unknown" test result for each test at collection time and replaces that result when the test completes.
**What is the motivation / use case for changing the behavior?**

Currently, on our CI, if the test step times out (because of the GitHub Actions `timeout-minutes` setting), tests that did not complete are missing from the Allure Report. We also distribute our tests across multiple GitHub Actions runners (to run in parallel), and if the setup of a runner fails, all of its tests are missing from the Allure Report.
**Please tell us about your environment:**

- Allure version: 2.29.0-1
- Test framework: pytest@7.4.0
- Allure adaptor: allure-pytest@2.13.5
**Other information**

We would be happy to open a PR to implement this, but we're not sure how to refactor/structure the code (currently, most of the logic that generates test results is coupled to the pytest hooks).

There are two use cases here:

1. For normal test execution, generate "unknown" test results at collection time and replace them at execution time.
2. For runs with `--collect-only`, generate only "unknown" test results. This is our use case, since we run pytest collection on one runner and use the results to provision multiple runners. If any of the test-execution runners fails to set up, we would still like to show "unknown" results for all tests that were scheduled on that runner.
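As a rough illustration of what the collection-time half could look like, here is a minimal sketch. This is not allure-pytest's actual API: the function name, the direct JSON writing, and the `fullName` mapping are our assumptions; allure-pytest itself builds results through its internal reporter, which is exactly the part we're unsure how to refactor.

```python
import hashlib
import json
import uuid
from pathlib import Path


def write_unknown_results(nodeids, results_dir):
    """Write a placeholder "unknown" Allure result file per collected test.

    nodeids: pytest node IDs, e.g. "tests/test_app.py::test_login".
    results_dir: the directory passed to --alluredir.
    """
    out = Path(results_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for nodeid in nodeids:
        result_uuid = str(uuid.uuid4())
        result = {
            "uuid": result_uuid,
            # A historyId derived from the node ID lets the report correlate
            # this placeholder with the real result written later for the
            # same test (assumption: allure-pytest may derive it differently).
            "historyId": hashlib.md5(nodeid.encode()).hexdigest(),
            "name": nodeid.rsplit("::", 1)[-1],
            "fullName": nodeid,  # assumption: a simplified mapping
            "status": "unknown",
        }
        path = out / f"{result_uuid}-result.json"
        path.write_text(json.dumps(result))
        written.append(path)
    return written
```

In the normal-execution case, something like `pytest_collection_modifyitems` could call this with the collected node IDs, and the execution-time half would supersede each placeholder once the real result is written; in the `--collect-only` case, the placeholders would simply be left as the final results.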
We've actually implemented this already in a pytest plugin that imports from allure-pytest (it is enabled when its `--allure-collection-dir` option is passed): https://github.com/canonical/data-platform-workflows/blob/v13.3.4/python/pytest_plugins/allure_pytest_collection_report/allure_pytest_collection_report/_plugin.py

We would love to upstream this (if that's something you'd want) so we can de-duplicate this code and so that our plugin doesn't break if allure-pytest internals are modified.