DominiqueMakowski opened this issue 1 year ago
Excellent! Thank you for getting the ball rolling! This will surely help CNeuroMod accelerate the processing, validation, and sharing of the physiological data.
We were thinking of adding the physiology to our "alpha" release sometime mid-August, with BIDSified raw data. That would facilitate data access.
I'm making myself a note to look into the most efficient way for you, @DominiqueMakowski, to get access: either through a DTA or via the three subjects' fully open data on CONP. CONP would be more efficient, at the expense of missing three subjects (out of six), since not everyone has agreed yet.
PPG is certainly of stable quality, but we could perhaps add our ECG as a bonus if everything goes according to plan?
Super down to look into other clustering or dimensionality-reduction approaches as well (other than UMAP, PCA, and t-SNE?). Also, do I recall correctly that you were running the statistical analysis in R? If so, I'll need a run-through because I'm not familiar with it at all.
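For concreteness, here is a rough sketch of what such a comparison could look like in Python. The HRV feature table, its file name, and the number of clusters are placeholders (nothing we already have), and UMAP would additionally need the umap-learn package:

# Hypothetical sketch: embedding HRV indices with PCA / t-SNE and clustering the result.
# The input file "hrv_indices.csv" and the choice of 3 clusters are placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

hrv_features = pd.read_csv("hrv_indices.csv")  # hypothetical output of the extraction pipeline
X = StandardScaler().fit_transform(hrv_features.select_dtypes("number").dropna(axis=1))

embeddings = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "t-SNE": TSNE(n_components=2, perplexity=30).fit_transform(X),
}

# Cluster each embedding and compare the resulting group sizes
for name, embedding in embeddings.items():
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(embedding)
    print(name, pd.Series(labels).value_counts().to_dict())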
In terms of a meeting, I have a feeling we could push that until I get data access settled, so probably next month.
Let's add Marie-Ève Picard to the loop, and perhaps Basile Pinsard? (He's the data manager.)
Peace my friends.
Thank you for including me in this discussion, very excited!
I think having a framework to load a meta-database would be a great resource for future analyses, including quality control. I wonder if we could adapt existing code used to evaluate ECG with open data, e.g.: https://github.com/DeepPSP/torch_ecg/tree/0caadc007041e9b4b84f536b4b73000105c0806c/torch_ecg/databases
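One way to adapt that idea without committing to a particular package yet could be a small common reader interface that each database wrapper implements. This is just a hypothetical sketch, not the actual torch_ecg or NeuroKit2 API; all names are illustrative:

# Hypothetical sketch of a common reader interface that each database wrapper
# could implement, so downstream code can loop over databases uniformly.
# None of these names exist in torch_ecg or NeuroKit2; they are illustrative.
from abc import ABC, abstractmethod

class DatabaseReader(ABC):
    """Minimal interface a meta-database loader could standardize on."""

    @abstractmethod
    def download(self, target_dir):
        """Fetch the raw files locally."""

    @abstractmethod
    def records(self):
        """Return the list of record identifiers."""

    @abstractmethod
    def load_signal(self, record):
        """Return (signal, sampling_rate, metadata) for one record."""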
Hi @sangfrois and @danibene, excited to be working together to revive and advance this project!
We could schedule a meeting to refresh ourselves and the new potential collaborators about what we've done so far, and lay out the work ahead
Second this!
I also added the reviewers' comments from the previous submission so that we can address them in this revision.
> I think having a framework to load a meta-database would be a great resource for future analyses
We currently have the data/ folder in NK with the code to download most of the databases used so far.
However, it could be cool to assemble that into something like a single interface that can download databases locally, tidy them up so that we can easily loop through them, load the signals, get some meta-info (sampling rate, database name, reference, recording condition (e.g., resting state, task, ambulatory, sleep, ...), participants (healthy young, ...)), extract the stuff we want (in our first case, HRV indices), and eventually delete the database after usage.
import pandas as pd
import neurokit2 as nk  # note: the package name is lowercase

def extract_stuff(db_path=""):
    meta_info = pd.read_csv(db_path + "/path")
    ...
    return indices

# ---- workflow example
for db in ["fantasia", "mit", "ludb", "neuromod"]:
    nk.database_download(db)  # proposed helper, does not exist in NeuroKit2 yet
    extract_stuff(db)
    nk.database_remove(db)    # proposed helper, does not exist in NeuroKit2 yet
Note: at least for OpenNeuro data, it's unfortunately not possible for now to stream participant by participant and do the processing in memory, hence the above proposal of having separate functions to download and delete (critical for someone like me who likes to live with only a few MB left on my hard drive).
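To make the extract_stuff part a bit more concrete, here is a minimal sketch of what its body could look like for ECG-based HRV indices. nk.ecg_peaks() and nk.hrv() are existing NeuroKit2 functions; the participants file and its column names are assumptions:

# Minimal sketch of what extract_stuff could do for ECG-based HRV indices.
# The participants file and its columns ("file", "sampling_rate", "database")
# are hypothetical; nk.ecg_peaks() and nk.hrv() are existing NeuroKit2 functions.
import pandas as pd
import neurokit2 as nk

def extract_hrv(db_path=""):
    meta_info = pd.read_csv(db_path + "/participants.csv")  # hypothetical file name
    indices = []
    for _, row in meta_info.iterrows():
        ecg = pd.read_csv(db_path + "/" + row["file"])["ECG"]  # hypothetical layout
        peaks, info = nk.ecg_peaks(ecg, sampling_rate=row["sampling_rate"])
        hrv = nk.hrv(peaks, sampling_rate=row["sampling_rate"])
        hrv["database"] = row["database"]
        indices.append(hrv)
    return pd.concat(indices, ignore_index=True)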
So after OHBM and meeting some people from the community, we should really push this forward.
I am tagging @danibene who expressed interest in helping out here and @sangfrois might find useful stuff as well.
Roadmap for paper
Side projects / side papers (?)