From reviewing the code in brainLM_toolkit.py and brainLM_Toolkit.ipynb, it looks like you first save .dat files with the shape (num_TRs, num_parcels), and then apply different normalization schemes when converting them to Arrow format.
I'm a bit confused by the terms "all_voxel" and "per_voxel" used in the code: at that stage you're working with parcellated .dat files, so there shouldn't be any voxels involved (since a voxel is a volumetric pixel).
Is this just a naming convention, or am I missing something?
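For concreteness, here is a minimal NumPy sketch of how I'm currently reading those two schemes, assuming "all_voxel" means one global mean/std computed over the whole array and "per_voxel" means per-parcel statistics; the variable names and shapes below are mine, not the toolkit's:

```python
import numpy as np

# Toy array mirroring the parcellated .dat layout: (num_TRs, num_parcels).
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 424))  # shapes are illustrative only

# My reading of "all_voxel": one global mean/std over all TRs and parcels.
all_voxel = (data - data.mean()) / data.std()

# My reading of "per_voxel": each parcel's time series normalized by its
# own mean/std, computed over the time axis.
per_voxel = (data - data.mean(axis=0)) / data.std(axis=0)
```

If that reading is correct, "voxel" at this stage would effectively refer to a parcel's time series rather than an actual voxel.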
Thanks!