As a developer, I need to know which data and metadata will be accessible in this release so I can write tests to ensure I am searching for and retrieving the correct data.
As a researcher with a keyboard, I need to be able to search metadata of the same schema version so I don't miss any data from this release.
As a wrangler, I need to be able to search for data sets that I have pushed into the system so I can view the outputs of secondary analysis.
Questions to resolve
Gaps that we need alignment on before we can write up more search stories:
By the end of Q2, will we provide some ad-hoc set of normalized metadata/data for data consumers? There are three options (possibly more):
(1) No. If a consumer searches the DCP to retrieve data, they get the latest bundle-specific metadata version, and it is up to the user to resolve version differences across data sets.
(2) Yes. The DCP selects a subset of data sets that we want at the same metadata schema version and standardizes versions by automated migration.
(3) Yes. The DCP selects that subset, freezes the metadata version, and re-ingests the data.
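To illustrate the burden option (1) places on the consumer, here is a minimal sketch of resolving versions client-side. All names (`schema_version`, `bundle_id`, the sample records) are hypothetical, not an actual DCP API:

```python
# Hypothetical sketch: under option (1), a consumer must group search
# results by metadata schema version before combining data sets,
# since the DCP returns bundle-specific versions as-is.
from collections import defaultdict

def group_by_schema_version(bundles):
    """Group bundle metadata records by their schema version."""
    groups = defaultdict(list)
    for bundle in bundles:
        groups[bundle["schema_version"]].append(bundle["bundle_id"])
    return dict(groups)

# Example search results with mixed schema versions (fabricated for illustration).
results = [
    {"bundle_id": "b1", "schema_version": "4.6.1"},
    {"bundle_id": "b2", "schema_version": "5.0.0"},
    {"bundle_id": "b3", "schema_version": "4.6.1"},
]
print(group_by_schema_version(results))
# → {'4.6.1': ['b1', 'b3'], '5.0.0': ['b2']}
```

Under options (2) or (3), this grouping step disappears for the consumer because every returned record shares one schema version.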
What does it mean to "size" this work? We want to communicate what we think we can deliver, but we are not sure how to compare our estimates against other teams'.