As a content author, I would like to define a test specification for my interviews, so that I can feel confident they behave as I intend without extensive manual testing.
(As a developer, I would like to define assertions for our partner's real-world forms, so that as we iterate quickly on feature development, we can feel confident we are not breaking behavior in our primary deliverable.)
Open questions:
What kind of guarantees would a content author be looking for?
What is an appropriate level of granularity for user-generated test specifications?
What would an assertion look like? (See the sketch after these questions for one possibility.)
What kind of fixtures are required to set up a test case? Is mock user-entered data enough?
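To ground these questions, here is one hypothetical sketch of what a test spec, its fixture, and its assertions might look like. It assumes a JSON-serializable shape that a "no code" interface could generate behind the scenes; all names (`FormTestSpec`, `fixture`, `assertions`, the assertion kinds) are placeholders for discussion, not a committed design.

```typescript
// Hypothetical shape of a content-author-defined test spec.
// Every name here is an assumption for discussion, not an agreed schema.
interface FormTestSpec {
  title: string;                      // human-readable scenario name
  fixture: Record<string, unknown>;   // mock user-entered answers, keyed by question id
  assertions: Assertion[];            // expected outcomes after the fixture is applied
}

// A single assertion about the interview's resulting state.
type Assertion =
  | { kind: "question-shown"; questionId: string }
  | { kind: "question-hidden"; questionId: string }
  | { kind: "output-equals"; field: string; expected: unknown };

// Example scenario: "a minor applicant should be routed to the guardian section".
const minorApplicant: FormTestSpec = {
  title: "Minor applicant sees guardian questions",
  fixture: { "applicant-age": 16 },
  assertions: [
    { kind: "question-shown", questionId: "guardian-name" },
    { kind: "output-equals", field: "requires-guardian", expected: true },
  ],
};
```

Keeping the spec declarative like this would also make it straightforward to render pass/fail status back into the form builder UI.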
Acceptance criteria:
[ ] A "no code" interface to define test cases exists
[ ] Test scenarios may be run in-browser, with user-friendly pass/fail notifications
[ ] The main form builder interface communicates the pass/fail status of the form
[ ] Form test specs should not depend on form backend features (they should work with multiple implementations; see the sketch below)
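Building on the `FormTestSpec` sketch above, one way to make the backend-independence criterion concrete is to have specs interact only with a small adapter interface rather than any concrete form engine. The `FormEngine` interface and `runSpec` helper below are assumptions for discussion, not an existing API.

```typescript
// Hypothetical adapter boundary (assumed names): test specs talk only to this
// interface, so any form backend that can implement it can run the same specs.
interface FormEngine {
  start(): void;                                    // begin a fresh interview session
  answer(questionId: string, value: unknown): void; // apply one mock user answer
  isShown(questionId: string): boolean;             // is this question currently visible?
  output(field: string): unknown;                   // read a computed output field
}

// Minimal in-browser runner sketch: apply the fixture, then reduce the assertions
// to a single pass/fail result suitable for a user-friendly notification.
function runSpec(engine: FormEngine, spec: FormTestSpec): { title: string; passed: boolean } {
  engine.start();
  for (const [questionId, value] of Object.entries(spec.fixture)) {
    engine.answer(questionId, value);
  }
  const passed = spec.assertions.every((a) => {
    switch (a.kind) {
      case "question-shown":
        return engine.isShown(a.questionId);
      case "question-hidden":
        return !engine.isShown(a.questionId);
      case "output-equals":
        return engine.output(a.field) === a.expected;
      default:
        return false; // unknown assertion kinds fail rather than pass silently
    }
  });
  return { title: spec.title, passed };
}
```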
Overview
As a ___, I would like ___, so that I can ___.
Context
Optional: Any reference material or thoughts we may need for later reference, or assumptions of prior or future work that's out of scope for this story.
[ ]
Acceptance Criteria
Required outcomes of the story
[ ]
Research Questions
Optional: Any initial questions for research
Tasks
Research, design, and engineering work needed to complete the story.
[ ]
Definition of done
The "definition of done" ensures our quality standards are met with each bit of user-facing behavior we add. Everything that can be done incrementally should be done incrementally, while the context and details are fresh. If it’s inefficient or “hard” to do so, the team should figure out why and add OPEX/DEVEX backlog items to make it easier and more efficient.
[ ] Behavior
[ ] Acceptance criteria met
[ ] Implementation matches design decisions
[ ] Documentation
[ ] ADRs (/documents/adr folder)
[ ] Relevant README.md(s)
[ ] Code quality
[ ] Code refactored for clarity, with no design or technical debt introduced
[ ] Separation of concerns adhered to; code is not tightly coupled, especially to 3rd-party dependencies; dependency rule followed
[ ] Code is reviewed by team member
[ ] Code quality checks passed
[ ] Security and privacy
[ ] Automated security and privacy gates passed
[ ] Testing tasks completed
[ ] Automated tests pass
[ ] Unit test coverage of our code >= 90%
[ ] Build and deploy
[ ] Build process updated
[ ] API(s) are versioned
[ ] Feature toggles created and/or deleted, and documented
[ ] Source code is merged to the main branch
Decisions
Optional: Any decisions we've made while working on this story