lookit / lookit-api

Codebase for Lookit v2 and Experimenter v2. Includes an API. Docs: http://lookit.readthedocs.io/
https://lookit.mit.edu/
MIT License

SCOPING: Diff tool for study submission review #139

Open kimberscott opened 6 years ago

kimberscott commented 6 years ago

Pain point: Before studies are actually deployed on Lookit, they have to be approved by an admin, which gives us an opportunity to check for compliance with terms of use, help researchers ensure instructions are clear, etc. Studies have to be re-approved after changes are made. But there's currently no way for an admin to tell what has changed. Being able to see the changes would vastly expedite review in cases where the researcher just fixed a typo or changed the age range, and would let us focus energy on cases where new code has been introduced.

Acceptance criteria: When reviewing a submitted study, a Lookit admin can see what has changed since the last approved version (if there is one) and see a history of actions taken on the study (e.g. edits/state changes). Either when saving changes to a study or when submitting, a researcher can provide a note about the purpose of the changes (like a commit message).

Implementation notes/Suggestions: Changes may have been made to any of the fields on the study model - e.g. purpose, description, title, eligibility criteria, JSON doc, commit SHAs. For all except the JSON doc, simply displaying the previous and current versions of any changed fields would be fine. For the JSON doc, some sort of actual diff output would be helpful if possible, since changes will often be just to a few lines.
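
A minimal sketch of one way to produce that diff, assuming the study's JSON doc is available as a dict (the function and argument names here are illustrative, not existing lookit-api code). Pretty-printing with sorted keys first makes the comparison line-oriented and stable, so a small edit shows up as a few changed lines:

```python
import difflib
import json


def study_json_diff(old_structure: dict, new_structure: dict) -> str:
    """Return a unified diff between two versions of a study's JSON doc."""
    old_lines = json.dumps(old_structure, indent=2, sort_keys=True).splitlines()
    new_lines = json.dumps(new_structure, indent=2, sort_keys=True).splitlines()
    return "\n".join(
        difflib.unified_diff(
            old_lines,
            new_lines,
            fromfile="last approved",
            tofile="submitted",
            lineterm="",
        )
    )
```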

kimberscott commented 5 years ago

This would also potentially be a good time to start storing experiment "snapshots," as we do for demographic data, with each snapshot including all study information (description, age range, JSON, commit SHAs, etc.) and a timestamp. Responses could then reference a snapshot ID, giving a clear record of what was being run when for studies where changes were made during data collection. Realistically, that will be most studies, even if the changes are just to clarify language or adjust the age range as data collection proceeds.
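
A rough sketch of what such a snapshot model could look like in Django; the model and field names here are illustrative assumptions, not the actual lookit-api schema:

```python
from django.db import models


class StudySnapshot(models.Model):
    """Immutable copy of a study's reviewable state at a point in time.

    NOTE: illustrative only; names do not reflect actual lookit-api models.
    """

    study = models.ForeignKey(
        "studies.Study", on_delete=models.CASCADE, related_name="snapshots"
    )
    # Full copy of the reviewable fields: description, age range,
    # structure JSON, commit SHAs, etc.
    contents = models.JSONField()
    created_at = models.DateTimeField(auto_now_add=True)


# A response could then record which snapshot was live when it was collected,
# e.g.: response.snapshot = study.snapshots.latest("created_at")
```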

kimberscott commented 4 years ago

Dealing with study re-approval may be a challenge as we scale up with limited staff, so another thing to consider is how to limit the number of requests and/or reduce the impact on researchers who are waiting for re-approval before they can test again.

Technical approach: reduce the number of types of changes that require re-approval (one way to gate this is sketched below). Some constraints -

Behavioral approaches: establish clear expectations about the timeline for approval (e.g. we work through the list at specific times during the week), allow changes during a clearly-marked "piloting" phase and discourage frequent changes after that, ...?
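
For the technical approach, one possible gate: compare the submitted study against its last approved version and only require re-approval when a field outside an exempt set has changed. The field names and the exempt set below are assumptions for illustration, not current lookit-api policy:

```python
# Fields whose changes we might treat as not needing a fresh review;
# this exempt set is an assumption, not current lookit-api policy.
REAPPROVAL_EXEMPT_FIELDS = {"min_age_days", "max_age_days", "duration"}


def requires_reapproval(old_fields: dict, new_fields: dict) -> bool:
    """True if any non-exempt field differs between the two versions."""
    changed = {
        name
        for name in old_fields.keys() | new_fields.keys()
        if old_fields.get(name) != new_fields.get(name)
    }
    return bool(changed - REAPPROVAL_EXEMPT_FIELDS)
```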

kimberscott commented 3 years ago

In addition to tracking data provenance, versioning study protocols in general would be helpful for allowing researchers to revert to a previously-approved version (recent request)!
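
If snapshots like the ones sketched above existed, reverting could amount to copying a previously-approved snapshot's contents back onto the study. A hypothetical helper, assuming `snapshot.contents` maps field names to values as in the StudySnapshot sketch:

```python
def revert_to_snapshot(study, snapshot):
    """Copy a previously-approved snapshot's fields back onto the study.

    Hypothetical helper; assumes snapshot.contents maps field names to values.
    """
    for field_name, value in snapshot.contents.items():
        setattr(study, field_name, value)
    study.save()
```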