Closed: sameera2004 closed this issue 4 years ago
Can you provide the exact command line you are using and sample path/file names for your T2? Note that the environment and appropriate packages are normally set in the container, so you might have other issues running from the command line like this.
On Jul 12, 2020, at 2:29 PM, Sameera Abeykoon notifications@github.com wrote:
Hi,
I am trying to run BIDS_Apps/HCPPipelines to process fMRI data. As a test, I am running one subject who has two sessions and 4 resting-state runs. We collected T2w and T1w for both sessions, but I am getting the following error:
File "/mnt/tools/bids_HCP/HCPPipelines/run.py", line 290 f"No T2w files found for sub-{subject_label}. Consider --procesing_mode [legacy | auto ]."
I then tried --processing_mode[auto] and still get the same error. However, we want to process this data using --processing_mode[hcp].
Here's the command I used to run the HCPPipelines BIDS App:
python /mnt/tools/bids_HCP/HCPPipelines/run.py [--processing_mode {hcp}] [--stages {PreFreeSurfer,FreeSurfer,PostFreeSurfer,fMRIVolume,fMRISurface}] [--coreg {FS}] --license_key $PWD/license.txt [-v] /mnt/fMRIprep/scR21_for_bids_app /mnt/fMRIprep/scR21_HCP_bids_outputs --license_key $PWD/license.txt participant
This input dataset is BIDS compliant.
Can anyone help me to fix this issue?
Thank you. Best regards, Sameera
Hi @rhancockn,
Thank you so much for helping me fix this issue. Below you can see how the T2 files are arranged inside the folder:
/mnt/fMRIprep/scR21_for_bids_app
├── dataset_description.json
├── participants.tsv
├── README
└── sub-50010
    ├── ses-1
    │   ├── anat
    │   │   ├── sub-50010_ses-1_run-1_T1w.json
    │   │   ├── sub-50010_ses-1_run-1_T1w.nii.gz
    │   │   ├── sub-50010_ses-1_run-1_T2w.json
    │   │   └── sub-50010_ses-1_run-1_T2w.nii.gz
    │   ├── fmap
    │   │   ├── sub-50010_ses-1_dir-AP_run-1_epi.json
    │   │   ├── sub-50010_ses-1_dir-AP_run-1_epi.nii.gz
    │   │   ├── sub-50010_ses-1_dir-PA_run-1_epi.json
    │   │   └── sub-50010_ses-1_dir-PA_run-1_epi.nii.gz
    │   └── func
    │       ├── sub-50010_ses-1_task-RSFC_run-1_bold.json
    │       ├── sub-50010_ses-1_task-RSFC_run-1_bold.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-1_sbref.json
    │       ├── sub-50010_ses-1_task-RSFC_run-1_sbref.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-2_bold.json
    │       ├── sub-50010_ses-1_task-RSFC_run-2_bold.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-2_sbref.json
    │       ├── sub-50010_ses-1_task-RSFC_run-2_sbref.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-3_bold.json
    │       ├── sub-50010_ses-1_task-RSFC_run-3_bold.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-3_sbref.json
    │       ├── sub-50010_ses-1_task-RSFC_run-3_sbref.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-4_bold.json
    │       ├── sub-50010_ses-1_task-RSFC_run-4_bold.nii.gz
    │       ├── sub-50010_ses-1_task-RSFC_run-4_sbref.json
    │       └── sub-50010_ses-1_task-RSFC_run-4_sbref.nii.gz
    └── ses-2
        ├── anat
        │   ├── sub-50010_ses-2_run-1_T1w.json
        │   ├── sub-50010_ses-2_run-1_T1w.nii.gz
        │   ├── sub-50010_ses-2_run-1_T2w.json
        │   └── sub-50010_ses-2_run-1_T2w.nii.gz
        ├── fmap
        │   ├── sub-50010_ses-2_dir-AP_run-1_epi.json
        │   ├── sub-50010_ses-2_dir-AP_run-1_epi.nii.gz
        │   ├── sub-50010_ses-2_dir-PA_run-1_epi.json
        │   └── sub-50010_ses-2_dir-PA_run-1_epi.nii.gz
        └── func
            ├── sub-50010_ses-2_task-RSFC_run-1_bold.json
            ├── sub-50010_ses-2_task-RSFC_run-1_bold.nii
            ├── sub-50010_ses-2_task-RSFC_run-1_bold.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-1_sbref.json
            ├── sub-50010_ses-2_task-RSFC_run-1_sbref.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-2_bold.json
            ├── sub-50010_ses-2_task-RSFC_run-2_bold.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-2_sbref.json
            ├── sub-50010_ses-2_task-RSFC_run-2_sbref.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-3_bold.json
            ├── sub-50010_ses-2_task-RSFC_run-3_bold.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-3_sbref.json
            ├── sub-50010_ses-2_task-RSFC_run-3_sbref.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-4_bold.json
            ├── sub-50010_ses-2_task-RSFC_run-4_bold.nii.gz
            ├── sub-50010_ses-2_task-RSFC_run-4_sbref.json
            └── sub-50010_ses-2_task-RSFC_run-4_sbref.nii.gz
Here's the command I used to process this data.
python /mnt/tools/bids_HCP/HCPPipelines/run.py [--processing_mode {hcp}] [--stages {PreFreeSurfer,FreeSurfer,PostFreeSurfer,fMRIVolume,fMRISurface}] [--coreg {FS}] --license_key $PWD/license.txt [-v] /mnt/fMRIprep/scR21_for_bids_app /mnt/fMRIprep/scR21_HCP_bids_outputs --license_key $PWD/license.txt participant
I am using Python 3.5 to run this script.
Sameera
Hi @rhancockn,
I have two other questions for you.
(1) I pulled this Docker container (https://hub.docker.com/r/bids/hcppipelines/). Is it the newest Docker container (BIDS App wrapper for HCP Pipelines v4.1.3)? I then created a Singularity image from it, because Docker is not free on RedHat and I want to run it on our HPC cluster too.
I used the following command to create the Singularity image:
singularity build /mnt/tools/bids_HCP/bids-HCP4.1.3.simg docker://bids/hcppipelines
(2) Which FreeSurfer version is used with this BIDS App wrapper for HCP Pipelines v4.1.3?
Thank you. Best regards, Sameera
Hi @sameera2004, The wrapper script is not compatible with Python 3.5 or earlier (one reason it is a good idea to use the container with all the correct dependencies).
You can pull the current v4.1.3 Docker container from rhancock/hcpbids, which contains FreeSurfer 6.0.1 and the other requirements.
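For example, you can pull the container and, if you prefer Singularity, convert it the same way you built your earlier image (the .simg name here is just a placeholder):

docker pull rhancock/hcpbids

singularity build bids-hcpbids.simg docker://rhancock/hcpbids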
The command line you are using should not contain [] or {}: in the documentation these denote optional arguments and possible values, respectively, and should not actually appear in the command line.
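For reference, your direct invocation with the brackets stripped would look like the sketch below (this assumes --stages accepts space-separated values, as the argparse-style help implies); running inside the container, as shown next, is still the recommended route:

python /mnt/tools/bids_HCP/HCPPipelines/run.py \
    /mnt/fMRIprep/scR21_for_bids_app /mnt/fMRIprep/scR21_HCP_bids_outputs participant \
    --processing_mode hcp \
    --stages PreFreeSurfer FreeSurfer PostFreeSurfer fMRIVolume fMRISurface \
    --coreg FS \
    --license_key $PWD/license.txt \
    -v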
You can process your dataset with the command:
docker run --rm -v /mnt/fMRIprep/scR21_for_bids_app:/bids \
-v /mnt/fMRIprep/scR21_HCP_bids_outputs:/out \
rhancock/hcpbids \
/bids /out participant \
--coreg FS \
--processing_mode hcp \
--license_key "XXXXX"
or the equivalent singularity command. Note that --coreg MSMSulc is the default and the preferred option, so you should use that unless you have a specific reason to want FreeSurfer registration only.
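A sketch of the equivalent Singularity call, assuming an image rebuilt from rhancock/hcpbids at the path you used earlier (the -B bind mounts mirror the -v flags above):

singularity run --cleanenv \
    -B /mnt/fMRIprep/scR21_for_bids_app:/bids \
    -B /mnt/fMRIprep/scR21_HCP_bids_outputs:/out \
    /mnt/tools/bids_HCP/bids-HCP4.1.3.simg \
    /bids /out participant \
    --coreg FS \
    --processing_mode hcp \
    --license_key "XXXXX"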
Hi @rhancockn,
I will try running it this way and will let you know the outcome.
I have another question for you.
We have a resting-state data set that we want to process using the HCP pipeline. This data set contains 20 subjects, 2 sessions per subject, and 4 runs per session. We collected T1w and field maps (both B0 and SpinEcho) for all subjects in all sessions. We also collected T2w data for all subjects in the first session. However, we didn't collect T2w for some subjects in the second session.
Do you think I can use this BIDS-Apps/HCPPipeline to process this data, or is T2w data needed for both sessions to use this BIDS-Apps/HCPPipeline? Can I copy the first-session T2w data into the second session wherever T2w is missing and then run this BIDS-Apps/HCPPipeline?
Thank you. Best regards, Sameera
With --processing_mode auto, the BIDS app will try to run the HCP-style pipeline whenever possible and fall back to legacy-style processing if required files are missing (there will be a message in the logs if this is the case). So processing the entire dataset with --processing_mode auto would result in legacy-style processing for those subjects without T2w data and HCP-style processing for the rest. This is not necessarily the ideal approach: since the T2w data is used for bias correction of both structural and functional data, it might be preferable to specify --processing_mode legacy for the structural steps to avoid introducing biases into your data. Copying the first-session data is probably inadvisable since the bias field will be slightly different across sessions. The question of which processing approach is optimal for your dataset is not really related to the use of the BIDS app, though, and can be better answered on the hcp-users list.
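If you do go that route, a sketch of restricting a legacy run to the structural stages might look like the following (same placeholder paths and license key as above; again assuming --stages takes space-separated values):

docker run --rm -v /mnt/fMRIprep/scR21_for_bids_app:/bids \
    -v /mnt/fMRIprep/scR21_HCP_bids_outputs:/out \
    rhancock/hcpbids \
    /bids /out participant \
    --stages PreFreeSurfer FreeSurfer PostFreeSurfer \
    --processing_mode legacy \
    --license_key "XXXXX"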