Didn't we agree that we will use .md files? The tests will need to be human-readable, so Markdown seems perfect. I described it here.
I forgot about it. Markdown is fine, we just need to be able to parse it and render a UI out of it. Let's agree on the particular structure. Can you provide a suggestion for the test above?
I would probably use something as basic as the following:
```markdown
Visit [https://supervalu.ie/](https://supervalu.ie/)

- [ ] After landing on the homepage a **Cookie Consent Dialog** must pop up.
- [ ] **Reject All** button must be displayed.
- [ ] **Accept All** button must be displayed.
- [ ] **Cookie Settings** link must be displayed and enabled.
- [ ] **Cookie Policy** link must be displayed and enabled.
```
Which would render in a Test Description area in our UI as the link followed by the checklist items as tickable checkboxes.
Once the tester ticks all the checkboxes, the test is considered successful and the tester is taken to the next test in the sequence.

The tester should also be able to fail the test using a separate 'Fail' button. In that case the test result should include how many checkboxes were ticked (= checks passed), e.g. 4/5, and some description of the failure.
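For illustration, a minimal sketch of how the UI could parse those checklist lines (the `ChecklistItem` shape and the regex are my assumptions, not an agreed format):

```ts
// Hypothetical sketch: parse "- [ ] **Label** must ..." lines into checklist items.
interface ChecklistItem {
  label: string;    // raw Markdown of the requirement
  checked: boolean;
}

function parseChecklist(markdown: string): ChecklistItem[] {
  const items: ChecklistItem[] = [];
  for (const line of markdown.split("\n")) {
    // Matches "- [ ] text" or "- [x] text"
    const match = line.match(/^- \[( |x)\] (.+)$/);
    if (match) {
      items.push({ label: match[2], checked: match[1] === "x" });
    }
  }
  return items;
}
```

The pass condition above is then just `items.every(i => i.checked)`, and the 4/5-style score is `items.filter(i => i.checked).length` out of `items.length`.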
If I'm the one supposed to use & write these, I'd prefer to have it similar to the other tools we were using before (Azure, TCL), but simplified as much as we can & actually working.
Usually it goes like this:

```
Test name: blah
Test suite: 01_blah

10 | Click on button | Login displayed | - [ ]
..
..

Result: failed
Issue:
```
We can extend this (screenshots, priority, other metadata like build number/branch/version, user-related data, test case state, created/updated and many more), but having this at the beginning is imo sufficient.
Could be MD based, but I'd prefer not to have an MD file as the entry point. These should be generated via a web UI to ensure the formatting is preserved & handled programmatically. Basically the MD files should not be touched, & I can see a relational data structure there, so... Let's consider storing test cases in a DB with export possibilities, please.
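To make that relational idea concrete, a rough sketch of the shape it implies, mirroring the fields in the example above (all names here are hypothetical, not a schema proposal):

```ts
// Hypothetical relational shape for storing test cases in a DB.
interface TestSuite {
  id: number;
  name: string;            // e.g. "01_blah"
}

interface TestCase {
  id: number;
  suiteId: number;         // FK -> TestSuite
  name: string;
}

interface TestStep {
  id: number;
  testCaseId: number;      // FK -> TestCase
  position: number;        // step order, e.g. 10, 20, ...
  action: string;          // "Click on button"
  expectedResult: string;  // "Login displayed"
}

interface TestRun {
  id: number;
  testCaseId: number;      // FK -> TestCase
  result: "passed" | "failed";
  issue?: string;          // failure description
  stepsPassed: number;     // e.g. 4 out of 5
}
```

An MD (or YAML) export would then just be a rendering of these rows, never the source of truth.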
Let's try YAML, I will show an example ASAP.
Hi @MartinMiksovskyLaposa , @hugo187 ,
I suggest this YAML:
```yaml
name: 01 cookie consent
steps:
  - name: Use anonymous/no cache browser mode
    instruction: Visit https://supervalu.ie/
    check:
      - item: User lands on Supervalu homepage
      - item: Cookie consent dialog pops up
  - name: Check the Cookie Consent dialog
    instruction: Check the Cookie Consent dialog
    check:
      - item: Reject All button is displayed, enabled
      - item: Accept All button is displayed, enabled
      - item: Cookie Settings link is displayed, enabled
      - item: Cookie Policy link is displayed, enabled
```
I think it's good. I am suggesting a few changes:

- a `description` field (for additional details and remarks)
- `action` instead of `instruction`
- `input` for any user input (such as entered URLs, inputted field values, tapped buttons)
- `result` instead of `check`, with no `item:` in the `result` list
- assertive wording (`must` instead of `should`)

The example below also shows these changes applied:
```yaml
name: 01 Login Functionality Test
description: Verify that a user can log in with valid credentials.
steps:
  - action: Navigate to the login page.
    input: https://supervalu.ie
    result: Login page must load successfully.
  - action: Enter valid username and password.
    input:
      username: testuser
      password: securepassword
    result: Username and password fields must be filled correctly.
  - action: Submit the details
    input: Click the 'Login' button
    result: User must be logged in and redirected to the dashboard.
  - action: Verify user is on the dashboard page.
    result: >
      User must see the dashboard page, including a welcome message
      and navigation menu options.
```

```yaml
name: 02 Logout Functionality Test
description: Verify that a user can log out successfully.
steps:
  - action: Navigate to the dashboard page.
    result: User must be on the dashboard.
  - action: Click on the logout button.
    result: User must be logged out and redirected to the login page.
  - action: Verify logout success.
    result:
      - User must see the login page.
      - User must not have access to the dashboard.
```
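For the portal side, here is a rough TypeScript mirror of this proposal (the union types are my reading of the examples above, not something we've agreed):

```ts
// Hypothetical TypeScript mirror of the proposed YAML schema.
interface TestDocument {
  name: string;
  description?: string;
  steps: Step[];
}

interface Step {
  action: string;
  // `input` can be a single string or a map of named values,
  // as in the username/password example above.
  input?: string | Record<string, string>;
  // `result` can be a single sentence or a list of expectations.
  result: string | string[];
}
```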
Hi @hugo187 , looks good.
Can you please update the two example files with your suggestions under this branch https://github.com/laposa/musgrave-supervalu/tree/80-choose-format-for-description-of-manual-tests
Looks good to me. I'll prepare some kind of local generator for this if we're skipping the UI part, just to avoid typos, mistakes & incorrect tabbing.
A good UI for writing YAML files is VS Code; it will show you most of the errors.
Can we create a schema and a command-line validator (via this, for example)?
The schema is really simple and easy to remember: name, steps -> action/input -> result
The schema check could run on pull requests and in the test portal. The test portal would process only files with a valid schema and show an error for any invalid one.
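A minimal sketch of what that command-line check could look like, assuming js-yaml and Ajv (the tool linked above may differ):

```ts
// validate.ts — hypothetical CLI schema check for the test YAML files.
import { readFileSync } from "fs";
import { load } from "js-yaml";
import Ajv from "ajv";

// JSON Schema for: name, steps -> action/input -> result
const schema = {
  type: "object",
  required: ["name", "steps"],
  properties: {
    name: { type: "string" },
    description: { type: "string" },
    steps: {
      type: "array",
      items: {
        type: "object",
        required: ["action", "result"],
        properties: {
          action: { type: "string" },
          input: {}, // string or map, so left unconstrained here
          result: {
            anyOf: [
              { type: "string" },
              { type: "array", items: { type: "string" } },
            ],
          },
        },
        additionalProperties: false,
      },
    },
  },
  additionalProperties: false,
};

const file = process.argv[2];
const data = load(readFileSync(file, "utf8"));
const validate = new Ajv().compile(schema);

if (!validate(data)) {
  console.error(validate.errors);
  process.exit(1); // non-zero exit fails the pull request check
}
console.log(`${file} is valid`);
```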
Should I take care of the validator?
Yes please. I created a new issue for it: https://github.com/laposa/musgrave-supervalu/issues/1293
We currently have manual tests described as plain comments, for example https://github.com/laposa/musgrave-supervalu/blob/master/tests/web/tests/01_cookie_consent/001_user_has_cookie_consent_displayed_on_first_visit.cy.ts

We need to convert them to a structured format so the test portal can consume them. It could be something like JSON or custom JS functions, similar to Mocha tests.
001_user_has_cookie_consent_displayed_on_first_visit.cy.ts:
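Purely as a hypothetical sketch of the "custom JS functions" option (the `manualTest`/`step`/`result` helpers below are invented for illustration, not an existing API, and this is not the actual file content):

```ts
// Hypothetical manualTest/step/result helpers, invented for illustration.
// They just collect a structured description that the portal could consume.
type ManualStep = { action: string; results: string[] };
type ManualTest = { name: string; steps: ManualStep[] };

const tests: ManualTest[] = [];
let currentStep: ManualStep | null = null;

function manualTest(name: string, body: () => void): void {
  tests.push({ name, steps: [] });
  body();
}

function step(action: string, body: () => void): void {
  currentStep = { action, results: [] };
  tests[tests.length - 1].steps.push(currentStep);
  body();
}

function result(expectation: string): void {
  currentStep?.results.push(expectation);
}

manualTest("001 user has cookie consent displayed on first visit", () => {
  step("Visit https://supervalu.ie/ in an anonymous browser", () => {
    result("Cookie consent dialog pops up");
  });
  step("Check the Cookie Consent dialog", () => {
    result("Reject All button must be displayed and enabled");
    result("Accept All button must be displayed and enabled");
  });
});

// Structured output the test portal could consume:
console.log(JSON.stringify(tests, null, 2));
```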