HarishTreuFoundry opened 1 month ago
Maybe you can use special tags here, e.g. @fixme or @skip. If one of these tags is present, `npx bddgen` will not validate the step implementations for that particular feature/scenario.
Agree with @thulasipavankumar, it's better to mark these incomplete scenarios/features with a @fixme tag.
Otherwise, we will miss cases where the step text of a working scenario is changed and no longer matches any implementation.
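For example, marking an unfinished scenario with @fixme in the feature file could look like this (a sketch; the feature and step text are invented for illustration, and the tag-aware skipping of validation is the behavior proposed in this thread, not necessarily the current one):

```gherkin
Feature: Greeting

  @fixme
  Scenario: Say hello in French
    # Not implemented yet - the tag marks it to be skipped
    When the greeter says bonjour
    Then I should have heard "bonjour"
```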
@vitalets thanks for giving this attention. The counter side is that we would need to manage this at the test-scenario level, not at the config level. Adding and removing tags becomes too much overhead if we add or modify a lot of tests. There is also a human-factor risk: if we forget to remove the tag, the test will not execute, which may lead to defect leakage.
If the implementation is not there, the test can fail, which is a better experience in my opinion.
How do we handle these constantly failing tests in the pipelines? Moreover, if retries are configured to > 1, these tests will be retried, which is a waste of resources.
I think filtering on the config level is also a good approach. There is no built-in option for that in Cucumber (which is used to process files in playwright-bdd), but there is a workaround. For example, if all not-ready features live in the `features/not-ready` dir, they can be filtered out manually:

```ts
import fg from 'fast-glob';
import { defineBddConfig } from 'playwright-bdd';

const testDir = defineBddConfig({
  // the negative pattern needs '/**' to exclude the files inside the dir
  paths: fg.sync(['features/**/*.feature', '!features/not-ready/**']),
  ...
});
```
I too thought about moving such files to a different folder. In a real project, though, we add new steps to an existing scenario, or a new scenario to an existing feature, so handling this at the feature-file level is not possible.
Then we can introduce a new option for that: `skipScenariosWithMissingSteps`. When it's `true`, scenarios with missing steps will be automatically marked as skipped. Would that be helpful?
IMO, the test can fail as well if it's missing the implementation, to give the strongest signal to implement it or cover it manually. Having an option to skip validation of missing steps while generating files would be very helpful and save the time spent managing lots of tags.
If I may add my two cents: we should apply the same functionality as CucumberJS. They fail the test, report the step as undefined, and skip the rest of the scenario's steps. People come here with a certain understanding of how tests written in Gherkin are executed. There is not that much value in deviating from that.
Output of CucumberJS when encountering an undefined step:

```
Failures:

1) Scenario: Say hello # features/greeting.feature:3
   ✔ When the greeter says hello # features/step_definitions/steps.ts:10
   ? And I try to use a non-existent step
       Undefined. Implement with the following snippet:

         When('I try to use a non-existent step', function () {
           // Write code here that turns the phrase above into concrete actions
           return 'pending';
         });

   - Then I should have heard "hello" # features/step_definitions/steps.ts:14

1 scenario (1 undefined)
3 steps (1 undefined, 1 skipped, 1 passed)
```
Sorry for the late reply! I agree we should try to follow Cucumber conventions where possible. I just want to highlight that for Playwright, tests that are knowingly failing are anti-performant: the worker will be re-created for every such test (with all worker-scoped fixtures!), and retries will kick in as well.
Anyway, a separate option for that is a good solution. I see it this way: a new config option `missingSteps` with the following values:

- `break` - breaks scenario generation if there are missing steps (default, current behavior)
- `skip` - scenarios with missing steps are generated and marked as skipped
- `fail` - scenarios with missing steps are generated and fail during the test run
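For illustration, a config using the proposed option might look like this (hypothetical sketch: the option name and values are taken from the proposal above and may differ in the released API):

```typescript
import { defineBddConfig } from 'playwright-bdd';

const testDir = defineBddConfig({
  paths: ['features/**/*.feature'],
  // proposed values: 'break' (default) | 'skip' | 'fail'
  missingSteps: 'skip',
});
```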
The problem: our product owners write the Gherkin before our testers automate it. We don't want to wait for the implementation before merging.

A solution: can the validation be controlled/switched off?