jessb0t opened this issue 1 year ago
After the meeting, ja/dh discussed that there should be no duplicate column names across the REDCap files. To be safe, we are building a check that throws an error if it sees a duplicate.
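A minimal sketch of what such a check could look like (pandas-based; the file names and error message are illustrative, not the actual hallMonitor implementation):

```python
# Sketch: error out if any column name appears in more than one REDCap export.
# The pandas approach and file paths are assumptions for illustration only.
from collections import Counter
import pandas as pd

def check_duplicate_columns(redcap_files):
    """Raise ValueError if a column name appears in more than one file."""
    seen = Counter()
    for path in redcap_files:
        cols = pd.read_csv(path, nrows=0).columns  # read the header row only
        seen.update(set(cols))                     # ignore repeats within a file
    dupes = sorted(col for col, n in seen.items() if n > 1)
    if dupes:
        raise ValueError(f"Duplicate column names across REDCap files: {dupes}")

# e.g. check_duplicate_columns(["consent_export.csv", "survey_export.csv"])
```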
Note from JA on Slack on 6/5: We need to manipulate variable names when we set up hallMonitor for the first time. We will need to map consent_es_complete, which is what is in REDCap, to consentescomplete. In this way, David's script can use the same logic to collapse across English and Spanish versions of REDCap consent data (based on whether the last two letters before "complete" are "es") in the same way he is doing for the survey data.
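A sketch of that rename and the language check, assuming the pattern above (illustrative, not David's actual script):

```python
import re

def map_redcap_name(column):
    """Strip underscores from '<survey>_es_complete' REDCap columns,
    e.g. 'consent_es_complete' -> 'consentescomplete'. Hypothetical helper."""
    m = re.fullmatch(r"(\w+)_es_complete", column)
    return m.group(1).replace("_", "") + "escomplete" if m else column

def is_spanish(column):
    # Collapse across languages: Spanish iff the two letters
    # immediately before "complete" are "es".
    return column.endswith("escomplete")

assert map_redcap_name("consent_es_complete") == "consentescomplete"
assert is_spanish("consentescomplete") and not is_spanish("consentcomplete")
```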
For the "arrow-alert-vN" psychopy tasks, include the version number vN in the central tracker datadict when matching up with psychopy task file names and updating the central tracker.
For surveys that are given to both parent and child, specify these surveys during the setup.sh call so that hallMonitor knows to map the child's survey to "[survey-name]_s1_r1_e1" and the parent's survey to something like "[survey-name]_parental_self_report_s1_r1_e1" when updating the central tracker.
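A minimal sketch of that parent/child mapping; the "_s1_r1_e1" and "_parental_self_report" pieces come from this thread, while the function name is hypothetical (the analogous vN handling for the arrow-alert tasks isn't sketched because its exact tracker format isn't spelled out here):

```python
def tracker_variable(survey_name, respondent):
    """Map a survey to its central-tracker column name (hypothetical helper)."""
    if respondent == "parent":
        return f"{survey_name}_parental_self_report_s1_r1_e1"
    return f"{survey_name}_s1_r1_e1"

assert tracker_variable("survey-name", "child") == "survey-name_s1_r1_e1"
assert (tracker_variable("survey-name", "parent")
        == "survey-name_parental_self_report_s1_r1_e1")
```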
When a survey has a version number (like masies_b_s1_r1_e1), can we assume it will always be a single letter? Or will it ever be a letter + number, or something else?
@davhunt It can also be a set of letters (a single letter until we hit "z", then "aa" and so on; never numbers or anything else).
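Given that answer, a version suffix is one or more lowercase letters ordered like spreadsheet columns (a...z, aa, ab, ...). A sketch of how it could be parsed and ordered (the pattern and helpers are illustrative, not hallMonitor's actual code):

```python
import re

# Hypothetical pattern: "<survey>_<version>_s<N>_r<N>_e<N>", version = letters only.
VERSIONED = re.compile(r"^(?P<survey>.+)_(?P<version>[a-z]+)_s\d+_r\d+_e\d+$")

def version_index(v):
    """'a'=1 ... 'z'=26, 'aa'=27, ... (bijective base-26; never numbers)."""
    n = 0
    for ch in v:
        n = n * 26 + (ord(ch) - ord("a") + 1)
    return n

m = VERSIONED.match("masies_b_s1_r1_e1")
assert m.group("survey") == "masies" and m.group("version") == "b"
assert version_index("z") == 26 and version_index("aa") == 27
```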
For backward- (and forward-) compatibility, monitoring should be able to run on projects that:
These should either be specified by a flag during setup, or the monitoring scripts should detect whether these are true and behave accordingly (if, for example, they only see IDs starting with 301, 308, and 309).
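A sketch of the detection idea (the 301/308/309 prefixes come from the comment above; everything else is illustrative):

```python
def detect_id_prefixes(ids):
    """Collect the leading three digits of every participant ID seen."""
    return {str(i)[:3] for i in ids}

# If a project only contains these prefixes, the scripts could switch to the
# corresponding behavior automatically instead of requiring a setup flag.
prefixes = detect_id_prefixes([3010001, 3080012, 3090005])
assert prefixes <= {"301", "308", "309"}
```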
:point_up: the comment immediately above was discussed at the meeting of 6/9:
Notes from DH/LL script testing on 6/19:
Notes from DevOps meeting on 6/20:
Next steps: test the scripts rigorously (try to break them), then test with real data. After that, work with Emily Martin on setting up data monitoring for oops-faces.
Notes from Slack conversation on 6/14:
Up-to-date checklist of THRIVE data monitoring projects:
Notes from meeting 7/10:
Notes from meeting 7/17:
To-do list from meeting 7/19:
Updates from meeting 7/27:
Other to-do items:
Summary from meeting 8/3:
David's to-do list:
My to-do list:
Other points from discussion:
Notes from 8/15 meeting:
Upgrade of the data monitoring process to handle: