sschulz92 closed this 1 month ago
I'm not deeply familiar with the implementation but I think what you've done should work.
Are you saying that it functionally skips the scenario as you'd expect, but it doesn't show up as skipped in the overall report statistics? Or that it shows up as skipped in the scenario counts, but the individual step which issued the programmatic skip isn't marked as such?
How to test this kind of processing? I was unable to find a great way to test that this exception is raised and handled correctly.
Other than unit testing the expected results when the exception is raised, the "proper" way would be for us to add such a scenario to one of the Gauge functional tests (a Java Gauge project is used to test Gauge and the various plugins together) and enable it for certain languages using tags:
e.g. similar to https://github.com/getgauge/gauge-tests/blob/master/specs/advanced_topics/continue_on_failure/continue_on_failure.spec
I'm also not deeply familiar with these, but I believe the Java code at https://github.com/getgauge/gauge-tests/blob/master/src/test/java/com/thoughtworks/gauge/test/common/PythonProject.java would need to be enhanced to tell the tests how to generate a step that throws your exception, etc. Sadly this wasn't done for dotnet/csharp, so there's no pattern to follow :-/
For the basic unit tests, we should be able to add a test step implementation that throws your exception after https://github.com/getgauge/gauge-python/blob/1f70365d4ca505b42ee8ff0c631b13d8eeccd1de/tests/test_processor.py#L223-L238
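For instance, a unit test along these lines could work. This is only a rough sketch: the import path of the exception and the shape of the existing processor tests are assumptions on my part, and the real test should follow the patterns already used in tests/test_processor.py.

```python
# Rough sketch only, not the actual gauge-python test code. The import path
# of SkipScenarioException is assumed from this PR; the existing tests in
# tests/test_processor.py go further and assert on the protobuf execution
# result produced by the processor.
from unittest import TestCase, main

from getgauge.python import step  # gauge-python step decorator

try:
    from getgauge.python import SkipScenarioException  # assumed import path
except ImportError:  # keep the sketch runnable if the path differs
    class SkipScenarioException(Exception):
        """Local stand-in for the exception introduced by this PR."""


@step("Step that skips its scenario")
def step_that_skips():
    raise SkipScenarioException("skipped programmatically")


class SkipScenarioStepTest(TestCase):
    def test_step_raises_skip_exception(self):
        # A fuller test would build an ExecuteStepRequest, run it through the
        # processor, and assert that the returned execution result is marked
        # as skipped; here we only check that the exception propagates from
        # the step implementation.
        with self.assertRaises(SkipScenarioException):
            step_that_skips()


if __name__ == "__main__":
    main()
```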
@sschulz92 Thank you for contributing to gauge-python. Your pull request has been labeled as a release candidate 🎉🎉.
Merging this PR will trigger a release.
Instructions to bump the version can be found in CONTRIBUTING.md.
If the CONTRIBUTING.md file does not exist or does not include instructions about bumping the version, please look at previous commits in the git history to see what changes need to be made.
Hello @PiotrNestor - would you care to weigh in on whether you feel this is the right way to approach extending your feature to Python?
If you can comment on whether the step that threw the exception is expected to be displayed/highlighted in the report (possibly not with the current implementation, which works at the scenario level?), that'd also be useful.
I can see the statistics are absolutely correct, so the scenarios are counted as "skipped", though the step with the SkipScenarioException is shown as passed. The following steps are, correctly, shown as not executed.
Ahh right. I think that might be expected, as this feature rode on top of Gauge's earlier capability to skip a scenario declaratively, and I don't see any changes made to the html-report plugin to specifically accommodate this. There is an existing skipped flag in the StepExecutionResult (higher level); however, the newer SkipScenario flag in the ExecutionResult that comes from plugins does not seem to be used anywhere in the html-report.
If you'd like to suggest a change for this in the report, I think that's reasonable. We'd just need to change https://github.com/getgauge/html-report/blob/19ee5a6ba70c454958bcdffa9b62c9675d20dabf/generator/transform.go#L619-L630 and consider whether we need to distinguish visually between a step that skips programmatically and a step that is skipped for other reasons.
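To make that concrete, here is a rough Python sketch of where such a flag would be set on the runner side; the class and field names (SkipScenarioException, skipScenario) are taken from this thread rather than from the actual gauge-python or protobuf code:

```python
# Illustrative sketch only, not the real gauge-python executor or protobuf
# classes; names are assumptions based on the discussion above.

class SkipScenarioException(Exception):
    """Stand-in for the exception introduced by this PR."""


class FakeExecutionResult:
    """Stand-in for the execution result the runner reports back to Gauge."""
    def __init__(self):
        self.failed = False
        self.skipScenario = False
        self.errorMessage = ""


def execute_step(impl, result):
    """Run a step implementation and record the outcome on the result."""
    try:
        impl()
    except SkipScenarioException:
        # The scenario is flagged for skipping, which is why the statistics
        # come out right; the html-report would separately need to read this
        # flag (see transform.go above) to render the raising step itself as
        # skipped rather than passed.
        result.skipScenario = True
    except Exception as err:
        result.failed = True
        result.errorMessage = str(err)


def skipping_step():
    raise SkipScenarioException("not applicable in this environment")


result = FakeExecutionResult()
execute_step(skipping_step, result)
assert result.skipScenario and not result.failed
```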
Given this works, I'm going to err on the side of merging this and proceed. It's easy to follow up on the gauge-wide functional tests or the html-report after merging anyway, so let's do this 👍
With this PR I would like to introduce a feature for skipping scenarios at runtime, related to this issue: https://github.com/getgauge/gauge-python/issues/390
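For illustration, a step implementation could then skip the rest of its scenario like this; this is just a minimal sketch, and the exact import path of the exception is assumed here:

```python
# Minimal usage sketch; the import path of SkipScenarioException is an
# assumption and may differ from what finally ships in the package.
from getgauge.python import step

try:
    from getgauge.python import SkipScenarioException  # assumed path
except ImportError:  # keep the sketch runnable if the path differs
    class SkipScenarioException(Exception):
        """Local stand-in for the exception this PR introduces."""


@step("Check that feature <name> is available")
def check_feature(name):
    if name == "beta-only":
        # Raising the exception at runtime skips the remaining steps of the
        # scenario, and the scenario is counted as skipped in the statistics.
        raise SkipScenarioException(f"feature {name} is not enabled here")
```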
I have two questions though:
1. How to test this kind of processing? I was unable to find a great way to test that this exception is raised and handled correctly.
2. How can I set the step result to skipped=true? At the moment the feature works fine, but the step with the SkipScenarioException is shown as "passed" in the HTML report.
PS: Now with a signed commit ... 😏