KamitaniLab / bdpy

Python package for brain decoding analysis (BrainDecoderToolbox2 data format, machine learning analysis, functional MRI)
MIT License

nifti to bData #10

Open dingmiaomiao opened 3 years ago

dingmiaomiao commented 3 years ago

bdata is a very convenient format, so I want to convert my nifti dataset to bData.

  1. I read "example_fmriprep.ipynb" in the "examples" folder to learn how to do the job. Is the input dataset the one at https://openneuro.org/datasets/ds001246/versions/1.2.1?
  2. How should I change the paths in the code, e.g. dat_vol_standard = create_bdata_fmriprep('data/fmriprep-1.2/SB181112/bids/', data_mode='volume_standard', label_mapper={'stimulus_name' : 'data/stim_test.tsv'})? Or where should I put the dataset downloaded from https://openneuro.org/datasets/ds001246/versions/1.2.1?
  3. Can any nifti data (preprocessed by SPM) be converted to bData using bdpy? Are there any other steps or files needed?
dingmiaomiao commented 3 years ago

My dataset was downloaded from OpenNeuro and the file naming follows the BIDS format.

ShuntaroAoki commented 3 years ago

Sorry for the late response.

example_fmriprep.ipynb demonstrates how to make BData from the outputs of fmriprep. The data repository at OpenNeuro doesn't include the fmriprep results, but you can just run fmriprep on a dataset formatted as BIDS.

The first argument of create_bdata_fmriprep is the path to a BIDS directory that contains fmriprep outputs. You can specify your own BIDS dataset. Note that this function was initially designed for our in-house use and has not yet been rigorously tested for general use, so it may raise errors on public datasets.
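For concreteness, a minimal sketch of how that call might look for your own dataset (every path below is a hypothetical placeholder, the import line is an assumption based on the example notebook, and data_mode and label_mapper are simply taken from the call you quoted):

# Import path assumed from the example notebook; adjust if it differs in your bdpy version
from bdpy.mri.fmriprep import create_bdata_fmriprep

# Point the first argument at your own BIDS root that contains fmriprep outputs
dat_vol_standard = create_bdata_fmriprep(
    'data/my_bids_dataset/',                               # hypothetical BIDS directory
    data_mode='volume_standard',                           # same mode as in the quoted example
    label_mapper={'stimulus_name': 'data/my_labels.tsv'},  # hypothetical stimulus-label table
)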

Here is example code for converting a nifti image into BData. Currently you need to convert nifti data manually: load the nifti data as an array and add it to the BData object. We are planning to add nifti importing/exporting functionality to bdpy.

import bdpy
from bdpy.mri import load_mri

fmri_data, xyz, ijk = load_mri('fmri_data.nii.gz')

bdata = bdpy.BData()

# Add fMRI data as 'VoxelData'
bdata.add(fmri_data, 'VoxelData')

# Add voxel coordinates/indexes
bdata.add_metadata('voxel_x', xyz[0, :], 'Voxel x coordinate', where='VoxelData')
bdata.add_metadata('voxel_y', xyz[1, :], 'Voxel y coordinate', where='VoxelData')
bdata.add_metadata('voxel_z', xyz[2, :], 'Voxel z coordinate', where='VoxelData')
bdata.add_metadata('voxel_i', ijk[0, :], 'Voxel i index', where='VoxelData')
bdata.add_metadata('voxel_j', ijk[1, :], 'Voxel j index', where='VoxelData')
bdata.add_metadata('voxel_k', ijk[2, :], 'Voxel k index', where='VoxelData')
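
As a hedged follow-up to the snippet above, you would typically also add one label value per sample and save the result to a file; the 'Label' column name, the label values, and the file name below are all illustrative assumptions:

import numpy as np

# Hypothetical per-sample labels (one value per row/volume of fmri_data); replace with your own
labels = np.arange(fmri_data.shape[0]).reshape(-1, 1)

# Add the labels as another column group, then save the BData to an HDF5 file
bdata.add(labels, 'Label')
bdata.save('my_fmri_bdata.h5')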
dingmiaomiao commented 3 years ago

Thanks, I will try it.

dingmiaomiao commented 3 years ago

Thanks for your answer. I have converted my dataset into BData, but I have some questions:

  1. BData is defined as "Each row corresponds to a single 'sample', and each column represents either a single feature (voxel), target, or experiment design information". But how do I get the voxel responses for each stimulus? I didn't see an HRF used in the code.
  2. Were the ROIs in https://openneuro.org/datasets/ds001246/versions/1.2.1 drawn by experts? I want to define my own ROIs (like V1/V2...) in my dataset (in individual space or standard space). What should I do?
ShuntaroAoki commented 3 years ago

But how do I get the voxel responses for each stimulus? I didn't see an HRF used in the code.

We didn't fit an HRF. Instead, we just used BOLD signals that were temporally shifted, normalized, and averaged across volumes within each trial (please see the original papers for the details of the preprocessing). Fitting an HRF to obtain trial responses may improve the results, but we haven't tested that yet.

Were the ROIs in https://openneuro.org/datasets/ds001246/versions/1.2.1 drawn by experts? I want to define my own ROIs (like V1/V2...) in my dataset (in individual space or standard space). What should I do?

Yes, the visual area ROIs were manually defined by experts based on retinotopic mapping and a functional localizer. We haven't released those data (we may add them to the repository in the future), so right now it's not possible to define the ROIs functionally. Alternatively, you can use brain atlases or parcellations (e.g., the Glasser et al. 2016 HCP parcellation) to define anatomical ROIs.
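
A minimal sketch of how such an ROI could be attached to the BData built above, assuming you already have a binary mask vector aligned with the 'VoxelData' columns (the 'ROI_V1' key, the mask variable, and the placeholder values are hypothetical; add_metadata is the call used earlier in this thread, and select follows bdpy's metadata selector syntax as I understand it):

import numpy as np

# Hypothetical binary flag: 1 for voxels inside V1, 0 elsewhere,
# in the same column order as the 'VoxelData' columns
v1_mask = np.zeros(fmri_data.shape[1])
v1_mask[:100] = 1  # placeholder; in practice derive this from an atlas or parcellation

# Attach the flag as voxel-wise metadata, then pull out only the ROI voxels
bdata.add_metadata('ROI_V1', v1_mask, 'V1 ROI flag (1 = inside ROI)', where='VoxelData')
v1_voxels = bdata.select('ROI_V1 = 1')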

dingmiaomiao commented 3 years ago

Thanks! I am not sure if I have misunderstood you. Do you mean: if one image is displayed for 3 seconds (0-3 s), should we use the fMRI voxel values averaged within the trial (0-3 s), or the values averaged after the trial (for example 5-8 s, because the BOLD response lags neural activity)? If it is the latter, how long a delay is appropriate? Thank you for your patience; you have helped me solve my problem.

ShuntaroAoki commented 3 years ago

The latter. In the Horikawa & Kamitani (2017) experiment, we averaged fMRI signals from 3 s after stimulus onset (so, 3-12 s after each stimulus/trial onset; the stimulus duration was 9 s) to compensate for the hemodynamic delay, as you pointed out. We shifted by 3 s simply because the TR of the fMRI scanning was 3 s.
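
A minimal numpy sketch of that shift-and-average procedure under those numbers (TR = 3 s, 3 s shift, 9 s of averaged signal per trial); this is not the lab's actual preprocessing code, and trial_onsets, the z-scoring, and all names are illustrative assumptions:

import numpy as np

TR = 3.0                    # repetition time in seconds (assumed)
SHIFT_VOLS = int(3.0 / TR)  # 3 s shift = 1 volume at TR = 3 s
TRIAL_VOLS = int(9.0 / TR)  # 9 s of averaged signal = 3 volumes at TR = 3 s

def trial_responses(fmri_data, trial_onsets):
    """Average shifted, z-scored volumes within each trial.

    fmri_data    : 2D array, volumes x voxels (e.g., as returned by load_mri)
    trial_onsets : volume indices at which each trial starts (assumed known)
    """
    # z-score each voxel over the run
    normalized = (fmri_data - fmri_data.mean(axis=0)) / fmri_data.std(axis=0)
    responses = []
    for onset in trial_onsets:
        start = onset + SHIFT_VOLS
        responses.append(normalized[start:start + TRIAL_VOLS].mean(axis=0))
    return np.vstack(responses)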

jinhanzhang commented 4 months ago

I ran the example on the GOD BIDS dataset from OpenNeuro but got this error:

AttributeError                            Traceback (most recent call last)
Cell In[8], line 1
----> 1 fmri_data, xyz, ijk = load_mri('/vast/xxxxxx/GOD/derivatives/preproc-spm/output/sub-01/ses-perceptionTraining01/func/sub-01_ses-perceptionTraining01_task-perception_run-01_bold_preproc.nii.gz')

File /ext3/miniconda3/lib/python3.12/site-packages/bdpy/mri/load_mri.py:18, in load_mri(fpath)
      9 '''Load a MRI image.
     10
     11 - Returns data as 2D array (sample x voxel)
    (...)
     14 - Data, xyz, and ijk are flattened by Fortran-like index order
     15 '''
     16 img = nipy.load_image(fpath)
---> 18 data = img.get_data()
     19 if data.ndim == 4:
     20     data = data.reshape(-1, data.shape[-1], order='F').T

AttributeError: 'Image' object has no attribute 'get_data'

It seems that get_data() has been deprecated and should be changed to get_fdata(); I'm not sure if that is the issue. Please let me know how to fix this, or whether there is any update to the get_data call in the package.
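
In case it helps while waiting for a fix, a possible workaround sketch that bypasses load_mri and uses nibabel directly (this is my own assumption of an equivalent, not bdpy's API; it only mirrors the data/xyz/ijk arrays that load_mri returns):

import nibabel as nib
import numpy as np

img = nib.load('fmri_data.nii.gz')  # replace with your preprocessed nifti path
data = img.get_fdata()              # get_fdata() replaces the removed get_data()
if data.ndim == 4:
    # flatten to volumes x voxels in Fortran-like order, as load_mri does
    data = data.reshape(-1, data.shape[-1], order='F').T

# voxel ijk indices and xyz coordinates, also in Fortran-like order
i, j, k = np.unravel_index(np.arange(np.prod(img.shape[:3])), img.shape[:3], order='F')
ijk = np.vstack([i, j, k])
xyz = nib.affines.apply_affine(img.affine, ijk.T).T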