@humancompanion-usds this is a ticket I opened per our discussion at our 5/12 1:1. Please feel free to comment with your thoughts and recommendations. Thanks!

User Story
"As a (persona), I (want to), (so that)."

Assignee: Peer Reviewer:

Description
The Governance team does not currently have a system or process for hearing directly from VFS teams about their experience with a Design Intent, Midpoint Review, or Staging Review. The Service Design team does put out a quarterly satisfaction survey for VFS teams, and Governance has seen from past quarters that many VFS team members use it to comment on their experience with the Collaboration Cycle. However, the quarterly survey does not allow Governance to easily track feedback over time. Governance would also like to know 1) which touchpoint the feedback is relevant to, 2) whether a practice area was impacted, 3) whether there is a relation to the number of issues documented, etc. We would like to capture metrics to better understand how the Collaboration Cycle is working and to make needed improvements.

Ideas:
Zoom poll (to narrow down which touchpoints may require more research)

Impacted Artifacts

Tasks
[ ] Placeholder task

Peer Review
To be completed by peer reviewer
[ ] User story and acceptance criteria are met

Acceptance Criteria
[ ] Placeholder AC

How to prepare this issue
Refinement
[ ] Ticket has user story, description, tasks, and acceptance criteria

Planning
If this ticket is picked up from the Backlog mid-sprint, connect with Shira to ensure the items below are completed correctly.