Maybe briefly discuss the file positions/names of reliability.py, postprocessing.py, and run_metrics.py (one hypothetical layout to react to is sketched below).
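Purely as a strawman for that discussion (nothing here is decided; the directory names are placeholders):

```python
# Hypothetical layout for discussion only -- none of these locations/names
# are decided:
#
#   run_metrics.py          # entry point at the repo root
#   metrics/
#       reliability.py      # reliability computations
#       postprocessing.py   # ERP extraction / post-processing utilities
#
# under which the runner's imports would read:
#   from metrics import reliability, postprocessing
```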
Discuss the idea of an upcoming PEPPER retreat/hackathon to get us to pre-release, and a second one to get us to the v.01 release. George is paying for pizza and Mountain Dew!
Review/discuss the updated scope of PEPPER: not a "pipeline" but a "framework for community-driven, reproducible EEG analysis across age/context/system."
- Modular, reproducible "living pipeline" at the center.
- EEG4p to facilitate rapid benchmarking and identification of parameter sets for different ages/contexts/systems.
- Intentionally designed infrastructure for collaborative development and use. This includes standards for ensuring reproducibility (container-based tests), checks on code development (required PRs, automatic tests run at commit; see the sketch after this list), and consideration given to governance, credit/authorship, and licensing.
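As one hedged illustration of what an automatic, container-based test run at commit could check (all paths, file names, and the tolerance below are hypothetical):

```python
# Sketch of a reproducibility check CI could run on every commit: re-run the
# pipeline on a tiny fixture file inside the pinned container and compare the
# result to a stored known-good output. All names/paths are placeholders.
import numpy as np

REFERENCE = "tests/fixtures/sub-01_reference.npy"  # known-good output, committed
FRESH = "tests/fixtures/sub-01_fresh.npy"          # produced by this CI run

def test_pipeline_output_is_reproducible():
    reference = np.load(REFERENCE)
    fresh = np.load(FRESH)
    # The same container + pinned dependencies should reproduce to tight tolerance.
    np.testing.assert_allclose(fresh, reference, atol=1e-10)
```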
Discussion of timeline for pre/full release(s) and what to include.
Timeline for release .01 and minimal requirements:
- All of the pre-release, plus a demonstration that the validation metrics produce at least comparable results to MATLAB-MADE (one possible comparison is sketched after this list).
- Ideally, an alpha version of EEG4p is also included; this should be a minimal additional requirement, given that a primary requirement of the .01 release is demonstrating validation results via metrics. That is, since we have to calculate these anyway, we might as well put in slightly more work to set them up in a sustainable/scalable fashion. The focus should be on implementing the structure well for a limited set of metrics, as opposed to quick-and-dirty computation of many metrics.
- The white paper must be revised/updated and, crucially, must now include the results of the validation metrics and a description of EEG4p.
- Revise/update the documentation on GitHub, and build an initial standalone website for EEG4p that is linked to from the NDCLab wiki (BUT, PEPPER is its own thing and independent from the NDCLab).
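For the MATLAB-MADE comparison, one minimal check (a sketch; the file names, array layout, and what counts as "comparable" are all still to be decided) would correlate per-subject ERPs from the two pipelines:

```python
# Sketch: quantify agreement between PEPPER and MATLAB-MADE ERP outputs.
# Assumes each pipeline's per-subject ERPs were exported as a
# (n_subjects, n_times) array over the same subjects/time window --
# the file names below are placeholders.
import numpy as np

pepper = np.load("erp_pepper.npy")
made = np.load("erp_made.npy")

for sub in range(pepper.shape[0]):
    r = np.corrcoef(pepper[sub], made[sub])[0, 1]
    max_diff = np.abs(pepper[sub] - made[sub]).max()
    print(f"sub {sub:03d}: r = {r:.3f}, max |diff| = {max_diff:.2f} uV")
```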
Organization for EEG4p:
- File positions/names of reliability.py, postprocessing.py, and run_metrics.py.
- Discuss whether it is viable to have standard input/output for "measures" and "metrics/stats" (one possible convention is sketched after this list).
- Where to place benchmarking datasets.
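On the standard input/output question, one minimal convention to react to (purely a sketch; the signature and returned keys are proposals, not an implemented API) is for every measure to take single-trial data and return a flat dict of named scalars that run_metrics.py can aggregate uniformly:

```python
# Sketch of a uniform "measure" interface -- signature and key names are
# proposals for discussion, not part of any existing PEPPER API.
from typing import Dict
import numpy as np

def erp_mean_amplitude(data: np.ndarray, times: np.ndarray,
                       tmin: float, tmax: float) -> Dict[str, float]:
    """data: (n_trials, n_times) for one channel; returns named scalars."""
    window = (times >= tmin) & (times <= tmax)
    per_trial = data[:, window].mean(axis=1)
    return {"mean_amplitude": float(per_trial.mean()),
            "n_trials": int(per_trial.size)}

# run_metrics.py could then drive any registered measure the same way:
#   results = {name: fn(data, times, **params) for name, fn in MEASURES.items()}
```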
Sonya comments on what is/is not intuitive about the use of GitHub, etc.
Discussion of results of the 100-sub run of the pipeline (errors, compute time, etc.).
Discussion of ERPs computed on the 100-sub run.
Discussion of reliability metrics computed on the 100-sub run; SME too?? (If so, a sketch of the analytic SME is below.)
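If SME is included, the analytic version for mean amplitude is cheap to compute from the same single-trial scores: the standard deviation of the single-trial mean amplitudes divided by the square root of the number of trials. A sketch (the input array name is hypothetical):

```python
# Sketch: analytic standardized measurement error (SME) for mean amplitude,
# i.e. SD of single-trial mean amplitudes / sqrt(n_trials).
import numpy as np

def analytic_sme(single_trial_amps: np.ndarray) -> float:
    """single_trial_amps: (n_trials,) per-trial mean amplitudes for one
    subject/channel/condition."""
    n = single_trial_amps.size
    return float(single_trial_amps.std(ddof=1) / np.sqrt(n))
```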
Next steps and reliable timeline.
Recruitment needs:
- coding
- EEG knowledge
- documentation overhaul
- usability/design
- writing/marketing
- website design
Updates
Blocks/Challenges
- 368 set_ref
- 370 ica_raw
Next Actions