Closed: naveedriay closed this issue 6 years ago
This looks identical to the setup I am using. Can you create a minimal reproducer and upload it as a github repo?
Hi mpkorstanje
My command to run the suite was
mvn clean install "-Dcucumber.options=--tags @sanity"
and that ran the whole feature in every runner, despite 5 runner files being created (one per scenario).
But when I changed the command by removing the cucumber.options property:
mvn clean install
it created the 5 runner files and each scenario ran only once.
Now, tag-based execution is a necessity in my project, as different Jenkins jobs run different features depending on the type of testing required: regression, sanity, smoke, pre-deploy, etc.
The drive to use <parallelScheme>SCENARIO</parallelScheme>
comes from here: https://stackoverflow.com/questions/50115819/cucumber-json-report-getting-overwritten-by-rerun-scenario-report
So either I sacrifice the Cucumber report for rerun tests in a feature to keep tag-based runs, or I sacrifice tag-based runs for a full, correct report. Hmmmm, in a dilemma.
my command to run the suite was
mvn clean install "-Dcucumber.options=--tags @sanity"
Ah, that helps! I am assuming @sanity is applied to the whole feature.
The JUnit runner creates RuntimeOptions using the RuntimeOptionsFactory. The RuntimeOptionsFactory passes the feature with a line number to RuntimeOptions as a list of command-line options. These are then parsed, which creates a line filter. After that, RuntimeOptions parses the cucumber.options system property. This wipes out the previously set line filters.
This is annoying, and in this case counterintuitive, but unfortunately also intentional. It works as intended when using a single runner.
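A minimal sketch of that ordering, to illustrate why the line filter disappears. Note this is not Cucumber's actual code: the FilterOptions class and its method names are invented for demonstration only.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of filter handling: a line filter is registered first,
// then parsing a cucumber.options-style override replaces all filters.
class FilterOptions {
    final List<String> filters = new ArrayList<>();

    // The generated runner registers a "feature:line" filter.
    void addLineFilter(String featureWithLine) {
        filters.add(featureWithLine);
    }

    // Parsing an override with a tag expression clears existing filters,
    // so the per-scenario line filter is lost.
    void parseCucumberOptions(String tagExpression) {
        filters.clear();
        filters.add("tag:" + tagExpression);
    }
}

public class FilterOverrideDemo {
    public static void main(String[] args) {
        FilterOptions options = new FilterOptions();
        options.addLineFilter("features/Login.feature:12");
        options.parseCucumberOptions("@sanity");
        System.out.println(options.filters); // prints [tag:@sanity]
    }
}
```

After the override only the tag filter remains, which is why each generated runner executes every scenario in the feature that matches the tag.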
Now, tag-based execution is a necessity in my project, as different Jenkins jobs run different features depending on the type of testing required.
I'd suggest you look into configuring different Maven profiles to control the configuration of the tags section of cucumber-jvm-parallel-plugin.
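A minimal sketch of that approach, assuming your plugin version supports a <tags> section (check the plugin README); the profile ids and tag names here are illustrative:

```xml
<!-- Select the tag set per Jenkins job, e.g.: mvn clean install -Psanity -->
<profiles>
  <profile>
    <id>sanity</id>
    <properties>
      <cucumber.tags>@sanity</cucumber.tags>
    </properties>
  </profile>
  <profile>
    <id>regression</id>
    <properties>
      <cucumber.tags>@regression</cucumber.tags>
    </properties>
  </profile>
</profiles>
```

The plugin configuration can then reference ${cucumber.tags} in its tags section, so the generated runners carry the tag filter themselves and -Dcucumber.options is no longer needed, which avoids wiping the line filters.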
Also, I tried to use <parallelScheme>SCENARIO</parallelScheme>
with <rerunFailingTestsCount>2</rerunFailingTestsCount>
and as a result the cucumberReport.json files are empty except for the rerun tests (it does not depend on the feature).
Command to run: mvn clean verify
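For reference, the combination described above corresponds to a configuration roughly like this (a sketch only; versions and plugin wiring are illustrative):

```xml
<plugin>
  <groupId>com.github.temyers</groupId>
  <artifactId>cucumber-jvm-parallel-plugin</artifactId>
  <configuration>
    <!-- one generated runner per scenario -->
    <parallelScheme>SCENARIO</parallelScheme>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- rerun each failing test up to 2 times -->
    <rerunFailingTestsCount>2</rerunFailingTestsCount>
  </configuration>
</plugin>
```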
@DzmitryNev that doesn't sound related to this issue.
Anyway, this project integrates Cucumber, JUnit, Surefire, and Maven. From your problem description alone it would be impossible to help you. People might be able to help you if you were to create a Minimal, Complete, and Verifiable example, preferably in the form of a GitHub repository.
@mpkorstanje Please see the MCVE: https://github.com/DzmitryNev/cucumber-empty-report . Please provide your feedback or point me in the right direction.
Also, I saw https://stackoverflow.com/questions/50115819/cucumber-json-report-getting-overwritten-by-rerun-scenario-report and applied <parallelScheme>SCENARIO</parallelScheme>
(per point 2; the MCVE contains it), but it does not work: it overwrites more than just the report of the failed scenario.
We are using
Thanks!
The bad news is that the results you're observing are entirely as expected. This is a natural consequence of chaining different tools (Surefire --> JUnit --> Cucumber) that are otherwise unaware of each other. From the perspective of Cucumber, the rerun appears to be an entirely new execution, so it will happily overwrite the old reports. Only at the start of the chain is it possible to create accurate reports. As such, your options from least to most effort, and worst to best quality, are:
- Use the reports generated by Surefire.
- Write your own plugin that appends results rather than overwriting them.
- Get involved with the Cucumber project and help resolve this fundamentally.
@DeChrish you can't. Examples are syntactic sugar to compact multiple scenarios.
@mpkorstanje
1. Use the reports generated by Surefire.
2. Write your own plugin that appends results rather than overwriting them.
3. Get involved with the Cucumber project and help resolve this fundamentally.
When using the Jenkins Cucumber Reports Plugin, or other Cucumber report plugins that expect the standard Cucumber JSON report, using the Surefire report or creating a custom plugin is not an option: https://github.com/jenkinsci/cucumber-reports-plugin.
I investigated my issue and came to the conclusion that the cause of my bug is the re-initialization of the Runners for all tests (failed and successful, e.g. cucumber.api.junit.Cucumber), including the tests that did not fail (this happens on every rerun). After that, a filter (in our case MatchDescriptions, added in https://github.com/apache/maven-surefire/pull/150/files ) is applied and excludes the successful tests from the execution flow. Then nothing fills in the reports of the excluded tests, so the report is empty after rerunning. I think this is a general issue with "surefire-junit47". (On the screenshot we can see the rerun of failed tests: Parallel01IT failed, Parallel01IT succeeded.)
I think there are two ways to solve this problem: 1) fix "surefire-junit47" to exclude from the execution flow the re-initialized Runner classes that succeeded in the last run; 2) change cucumber.runtime.formatter.JSONFormatter so that the constructor consumes the report path and the report file is only created when the content is generated. (I think the second is an incorrect solution.)
Please share your opinion on whether this is possible. Thanks in advance.
That was the point. Don't use the cucumber tool chain for reporting. Use the surefire one. There are Jenkins plugins for this too. Not as pretty but pretty is expensive.
Neither of your solutions is feasible.
Surefire has to use JUnits filter mechanism. For this it has to re-instantiate the runners. Anything else would be making assumptions about the state of the runners. Not all runners may behave the same on a second run.
Cucumber doesn't know whether it is a rerun or not. Sometimes the empty file is correct, sometimes it isn't.
@mpkorstanje
In the second point I mean a lazy JSON writer. It is possible to add two JSON formatters, for eager and lazy writing. It works with maven-failsafe-plugin and <rerunFailingTestsCount>2</rerunFailingTestsCount>. It is just a concept of a solution.
The changes are here: https://github.com/DzmitryNev/cucumber-jvm-parallel-plugin/pull/1
That will work in this specific case while running scenarios in parallel. But it won't work running features in parallel. I can't recommend it as a solution. The only proper way is to go with the surefire output.
Hi All,
Is it possible to retain the JSON file from the first run and avoid overwriting it in reruns? Please suggest.
@mpkorstanje Is it possible to retain the JSON file from the first run and avoid overwriting it in reruns? Please suggest.
@mpkorstanje I think this option can be useful:
- Write your own plugin that appends results rather than overwriting them.
See the following comment.
You'll have to do it yourself. The downside of open source.
In my project, I am using the latest version (5.0.0) of cucumber-jvm-parallel-plugin and want to use the
<parallelScheme>SCENARIO</parallelScheme>
feature. THE PROBLEM IS that, for a feature file with 5 scenarios in it, this mechanism generates 5 runners, but each of those 5 runners runs all 5 scenarios within that feature file, not just 1 scenario; i.e. each scenario gets executed 5 times (once per runner).
My POM.XML looks like this.
Also, I have used the following dependencies from
Although the runner generated as part of the above test run actually shows the correct scenario line number in its features option, it still executes the whole feature file, not just that scenario.
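For context, a runner generated per scenario looks roughly like this; a sketch only, with illustrative class, path, and plugin names, and imports per the cucumber-jvm 2.x/3.x API (it is not runnable without cucumber-junit and the feature file on the classpath):

```java
// Illustrative sketch of a plugin-generated runner for one scenario.
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
    // feature path with a line number: intended to restrict the run
    // to the single scenario starting at that line
    features = {"classpath:features/Login.feature:12"},
    plugin = {"json:target/cucumber-parallel/1.json"}
)
public class Parallel01IT {
}
```

When -Dcucumber.options=--tags @sanity is also supplied, the :12 line filter above is discarded during options parsing, which is consistent with the whole feature running in every generated runner.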