Closed: smeisler closed this issue 1 year ago
This is weird. The fact that subjects_dir is <undefined> makes no sense.
Just to confirm, you're using a fresh scratch directory and not pre-run FreeSurfer?
Yes, no previous outputs and a fresh working directory. What's also weird is that sMRIPrep ran fine on these subjects before (0.8.1, so FS v6). To be clear, those outputs are not available for the run I am doing currently, and I am having fMRIPrep run FS 7.2 from scratch.
Are you able to replicate this by running sMRIPrep 0.9.0?
I am having the same issue using Docker and cannot get past it (I only have one subject's worth of data at the moment).
My command
docker run --mount type=bind,source=${workdir},target=/workdir nipreps/fmriprep \
--participant-label sub-001 \
--fs-license-file /workdir/code/license.txt \
--work-dir /workdir/work \
--stop-on-first-crash \
--fs-no-reconall \
-vvv \
--omp-nthreads 16 \
/workdir/bids /workdir/bids/derivatives/fmriprep participant
Crash file from the log output
Node: fmriprep_22_0_wf.single_subject_001_wf.anat_preproc_wf.surface_recon_wf.autorecon1
Working directory: /workdir/work/fmriprep_22_0_wf/single_subject_001_wf/anat_preproc_wf/surface_recon_wf/autorecon1
Node inputs:
FLAIR_file = <undefined>
T1_files = <undefined>
T2_file = <undefined>
args = <undefined>
big_ventricles = <undefined>
brainstem = <undefined>
directive = autorecon1
environ = {}
expert = <undefined>
flags = <undefined>
hemi = <undefined>
hippocampal_subfields_T1 = <undefined>
hippocampal_subfields_T2 = <undefined>
hires = <undefined>
mprage = <undefined>
mri_aparc2aseg = <undefined>
mri_ca_label = <undefined>
mri_ca_normalize = <undefined>
mri_ca_register = <undefined>
mri_edit_wm_with_aseg = <undefined>
mri_em_register = <undefined>
mri_fill = <undefined>
mri_mask = <undefined>
mri_normalize = <undefined>
mri_pretess = <undefined>
mri_remove_neck = <undefined>
mri_segment = <undefined>
mri_segstats = <undefined>
mri_tessellate = <undefined>
mri_watershed = <undefined>
mris_anatomical_stats = <undefined>
mris_ca_label = <undefined>
mris_fix_topology = <undefined>
mris_inflate = <undefined>
mris_make_surfaces = <undefined>
mris_register = <undefined>
mris_smooth = <undefined>
mris_sphere = <undefined>
mris_surf2vol = <undefined>
mrisp_paint = <undefined>
openmp = 8
parallel = <undefined>
steps = <undefined>
subject_id = recon_all
subjects_dir = <undefined>
talairach = <undefined>
use_FLAIR = <undefined>
use_T2 = <undefined>
xopts = <undefined>
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 524, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 642, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
RuntimeError: subprocess exited with code 1.
Exact same issue for me. Looking for a solution! :/
Is there any information in the log file produced by FreeSurfer? (found in <subjects-dir>/<subject>/scripts/recon-all.log)
Weirdly, it seems to be a log file from another calculation/person/run... I'm not sure I understand, so here is mine:

Tue Apr 28 01:24:00 EDT 2009
/autofs/space/minerva_001/users/nicks/dev/distribution/average/fsaverage
/space/minerva/1/users/nicks/freesurfer/bin/recon-all -s fsaverage -cortparc
subjid fsaverage
setenv SUBJECTS_DIR /autofs/space/minerva_001/users/nicks/dev/distribution/average
FREESURFER_HOME /space/minerva/1/users/nicks/freesurfer
Actual FREESURFER_HOME /autofs/space/minerva_001/users/nicks/freesurfer
build-stamp.txt: freesurfer-x86_64-redhat-linux-gnu-dev4-20090216
Linux minerva 2.6.9-78.0.13.ELsmp #1 SMP Wed Jan 14 15:55:36 EST 2009 x86_64 x86_64 x86_64 GNU/Linux
cputime unlimited, filesize unlimited, datasize unlimited, stacksize 10240 kbytes, coredumpsize unlimited, memoryuse unlimited, vmemoryuse unlimited, descriptors 1024, memorylocked 32 kbytes, maxproc 65536
             total       used       free     shared    buffers     cached
Mem:       7403128    6807536     595592          0     133040    6185568
Swap:     16386292        224   16386068
######################################## program versions used [...]
It continues for a lot longer so I attached the complete recon-all.log file.
At least on my end (using fMRIPrep), a <FSsubjects-dir>/<subject> directory was not even created. The fsaverage directory is there.
Yeah, me too in fact; this file is inside the fsaverage/scripts folder. And just to be clear, I said it looks like a run from "someone else" because the Linux setup, SUBJECTS_DIR, FREESURFER_HOME, available memory, etc. are not mine... which confuses me a lot.
Could this be a --cleanenv issue, with an external FREESURFER_HOME overriding our settings?
--cleanenv is not a recognized argument for smriprep-docker
--cleanenv is a Singularity option (not an smriprep/fmriprep argument) which makes sure none of your local environment variables are brought into the container.
Yes, sorry for the mistake. I am using Docker so I don't have this option. :)
At least on my end (using fMRIPrep), a <FSsubjects-dir>/<subject> directory was not even created. The fsaverage directory is there.
Same on my end too; the directory was not created.
Ok, looking at the log a little closer, autorecon1 instantly fails:
[Node] Executing "autorecon1" <smriprep.interfaces.freesurfer.ReconAll>
220528-11:48:40,992 nipype.workflow INFO:
[Node] Finished "autorecon1", elapsed time 0.77419s.
1) Shell into the container (either with the --shell option on the Docker wrapper, or singularity shell), being sure to include all the original mounts.
2) Run the command located in the file (hopefully it exists):
<workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt
Hi all, just checking in, has anybody been able to follow @mgxd's instructions and get some more information here?
I have not (my scratch directory was in temporary storage that got wiped) but I will work on replicating this.
I haven't tested it yet. How would I implement this for smriprep-docker?
If you keep all the options you previously used and add --shell, you should be dropped into an interactive session in the terminal. If the working directory is the same, you can run the following command, making sure to replace <subjectid> with your subject:
cat /scratch/smriprep_wf/single_subject_<subjectid>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt | bash
Ok, thank you @mgxd! :) I'm currently on a work trip, so I'll test it out ASAP next week when I come back.
Thanks for looking into this everyone. I found that I no longer get this error when I use the fmriprep-docker wrapper instead of calling docker directly as in my command above. I am not sure why this worked. I wasn't sure how to shell in when I wasn't using the wrapper (--shell wasn't a recognized argument).
I wasn't sure how to shell in when I wasn't using the wrapper (--shell wasn't a recognized argument).
You can add --entrypoint=bash to the docker arguments and remove all fMRIPrep arguments.
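Putting that advice together with the docker invocation from earlier in the thread, a shell session without the wrapper might look like this (a sketch only; ${workdir} and the image name are taken from the command posted above):

```shell
# Sketch: interactive bash shell inside the image without the wrapper,
# reusing the bind mount from the docker command earlier in this thread.
docker run -it --rm \
    --mount type=bind,source=${workdir},target=/workdir \
    --entrypoint=bash nipreps/fmriprep
```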
Hello, I ran the command in <workdir>/fmriprep_22_0_wf/single_subject_<subject>_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt:
INFO: hi-res volumes are conformed to the min voxel size
fsr-checkxopts: Command not found.
Linux openmind7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
recon-all -s sub-inside7003 exited with ERRORS at Fri Jun 10 19:53:28 EDT 2022
For more details, see the log file
To report a problem, see http://surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
Tab autocomplete in the Singularity shell revealed that the only command on PATH beginning with "fsr" is fsr-getxopts.
I mounted my local version of fsr-checkxopts (from FS6) into the container and it appears to be progressing.
Good find. That should be easy to fix in the next release.
Hi! Sorry for being late, but based on what I read here, the issue should be fixed now? However, being a newbie with sMRIPrep, what should I do to use the "debugged" version? Re-pull the Docker image? Thanks in advance! :)
@mafortin I've just tagged sMRIPrep 0.9.1 and fMRIPrep 22.0.0rc1. When those images hit DockerHub, you should be good to go.
FYI: smriprep-docker 0.9.1 ran for one of the two subjects but crashed with the same error code as before for the second subject.
This may be because, in a similar fashion, getfullpath is not in the Docker container but may be called by FreeSurfer.
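A quick way to check for this class of problem from inside the container is to test whether the helper scripts recon-all shells out to are actually on PATH. A minimal sketch (the two helper names are the ones reported missing in this thread; any other helpers would need to be added to the list):

```shell
# Print any recon-all helper scripts that are absent from PATH.
# fsr-checkxopts and getfullpath are the two found missing in this thread.
for helper in fsr-checkxopts getfullpath; do
    command -v "$helper" >/dev/null 2>&1 || echo "missing: $helper"
done
```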
Thank you @smeisler for tracking that down!
If there is anybody still on this thread that has either not tested with the latest release (0.9.2) or is still experiencing this error, please let us know ASAP.
I haven't tested yet.
If there is anybody still on this thread that has either not tested with the latest release (0.9.2) or is still experiencing this error, please let us know ASAP.
@effigies I'm now experiencing this error with fMRIPrep 22.0.0rc4 using Singularity on a remote HPC cluster. Interestingly, I do not get an error if I run it locally rather than as a job on the cluster. I wonder if it's related to the fact that the job nodes on this cluster do not have internet access, but I do not understand how that would relate to the autorecon1 error reported in this thread.
I run the following command:
workdir="/project/schapiro_group/marlie/petals/"
echo workdir: ${workdir}
cd $workdir
singularity run --cleanenv --home $HOME --bind ${workdir}:/workdir /project/schapiro_group/marlie/petals/images/fmriprep_22.0.0rc4.sif \
--participant-label sub-002 \
--fs-license-file /workdir/code/license.txt \
--work-dir /workdir/work \
--stop-on-first-crash \
--use-aroma \
/workdir/bids /workdir/bids/derivatives/fmriprep participant
I am getting the same error reported in this thread:
Node: fmriprep_22_0_wf.single_subject_002_wf.anat_preproc_wf.surface_recon_wf.autorecon1
Working directory: /workdir/work/fmriprep_22_0_wf/single_subject_002_wf/anat_preproc_wf/surface_recon_wf/autorecon1
Node inputs:
FLAIR_file = <undefined>
T1_files = ['/workdir/bids/sub-002/anat/sub-002_T1w.nii.gz']
T2_file = <undefined>
args = <undefined>
big_ventricles = <undefined>
brainstem = <undefined>
directive = autorecon1
environ = {}
expert = <undefined>
flags = ['-noskullstrip', '-noT2pial', '-noFLAIRpial', '-cw256']
hemi = <undefined>
hippocampal_subfields_T1 = <undefined>
hippocampal_subfields_T2 = <undefined>
hires = True
mprage = <undefined>
mri_aparc2aseg = <undefined>
mri_ca_label = <undefined>
mri_ca_normalize = <undefined>
mri_ca_register = <undefined>
mri_edit_wm_with_aseg = <undefined>
mri_em_register = <undefined>
mri_fill = <undefined>
mri_mask = <undefined>
mri_normalize = <undefined>
mri_pretess = <undefined>
mri_remove_neck = <undefined>
mri_segment = <undefined>
mri_segstats = <undefined>
mri_tessellate = <undefined>
mri_watershed = <undefined>
mris_anatomical_stats = <undefined>
mris_ca_label = <undefined>
mris_fix_topology = <undefined>
mris_inflate = -n 50
mris_make_surfaces = <undefined>
mris_register = <undefined>
mris_smooth = <undefined>
mris_sphere = <undefined>
mris_surf2vol = <undefined>
mrisp_paint = <undefined>
openmp = 8
parallel = <undefined>
steps = <undefined>
subject_id = sub-002
subjects_dir = /workdir/bids/derivatives/fmriprep/sourcedata/freesurfer
talairach = <undefined>
use_FLAIR = <undefined>
use_T2 = <undefined>
xopts = <undefined>
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
result["result"] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 524, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 642, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
RuntimeError: subprocess exited with code 1.
Thanks I really appreciate it.
@marlie-tandoc Can you try this (screenshotted from earlier comment)?
Thanks for the fast response! I'm quite new to all of this.
Because I'm running fMRIPrep with Singularity on a remote HPC as a batch (non-interactive) job, I'm having trouble figuring out how/if I can shell into the container.
If I run my script as an interactive job (or just locally) I could shell in. But weirdly, I cannot replicate the autorecon1 error when I run fMRIPrep as an interactive job or locally, so I'm not sure how much help that would be...
Similar to those above, if I use --fs-no-reconall everything works fine.
1)
singularity shell --cleanenv --home $HOME --bind ${workdir}:/workdir /project/schapiro_group/marlie/petals/images/fmriprep_22.0.0rc4.sif
2) Copy the text located in /workdir/work/fmriprep_22_0_wf/single_subject_002_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt
3) Run that command in the terminal window that is shelled into the container.
To update: I was still unable to shell into the container through the method above because I was using a non-interactive job script on a remote HPC. The issue ended up resolving itself after I created scratch directories on the HPC.
Hi - I have been scouring the internet to try to understand why my fMRIPrep run is failing at autorecon1 when running through Singularity on a supercomputer.
Autorecon 1 appears to be failing and not providing a helpful error message.
singularity run --cleanenv /work/06953/jes6785/Containers/fmriprep_latest.sif /scratch/06953/jes6785/Pre_Process_Test/ /scratch/06953/jes6785/fmri_prep/ participant --participant-label b005 --fs-license-file /scratch/06953/jes6785/NECTARY_DATA/code/license.txt --skip_bids_validation -w /scratch/06953/jes6785/working_dir/
@seaguldee Did you run the troubleshooting steps outlined two comments above?
EDIT: I see that the command.txt says "echo reconall-all: nothing to do"
@seaguldee This seems to be a separate issue since it is not failing during autorecon1, but with _parcstats1. Can you run the similar command.txt from _parcstats1?
@seaguldee Are you using 22.0.1? This seems like a problem in 22.0.0, where using a previously computed FreeSurfer directory failed because of an outdated fsaverage.
Hi there, I appear to be having the same issue here but with fmriprep 22.0.1. Please let me know if I'm posting this in the wrong place so I can create another thread instead of hijacking this one.
Autorecon 1 appears to be failing and not providing a helpful error message.
#!/bin/bash -e
export SINGULARITY_BIND="/media/hcs-sci-psy-narun/:/hcs/,/home/gibbr625/:/local/"
image_path="/media/hcs-sci-psy-narun/Nesi/Bryn/NKI-singularity/singularity_images/fmriprep-22.0.1.simg"
bids_path="/hcs/Nesi/Bryn/data/NKI-RS-tsv-fix"
out_path="/hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3"
work_dir_path="/local/fmriprep_working_dir"
participant_id="xxxxx"
license_path="/hcs/Nesi/Bryn/NKI-singularity/fs_licence/license.txt"
sudo rm -rf "/home/gibbr625/fmriprep_working_dir/"
mkdir "/home/gibbr625/fmriprep_working_dir/" # make sure the working dir is clean
singularity run --cleanenv ${image_path} \
${bids_path} \
${out_path} \
participant --participant_label ${participant_id} \
--write-graph \
--notrack \
--fs-license-file ${license_path} \
--work-dir ${work_dir_path} \
--ignore=slicetiming
fMRIPrep version: 22.0.1
How are you running fMRIPrep? Singularity
Is your data BIDS valid? Yes
Are you reusing any previously computed results? No
Node Name: fmriprep_22_0_wf.single_subject_xxxxx_wf.anat_preproc_wf.surface_recon_wf.autorecon1
File: /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sub-xxxxx/log/20220921-171857_3e8b8d04-d87a-46b7-9380-a67563f2e149/crash-20220921-172924-gibbr625-autorecon1-65ea6bcd-a3ca-4c8a-87b3-a57b3e66f1cd.txt
Working Directory: /local/fmriprep_working_dir/fmriprep_22_0_wf/single_subject_xxxxx_wf/anat_preproc_wf/surface_recon_wf/autorecon1
Inputs:
FLAIR_file:
T1_files:
T2_file:
args:
big_ventricles:
brainstem:
directive: autorecon1
environ: {}
expert:
flags:
hemi:
hippocampal_subfields_T1:
hippocampal_subfields_T2:
hires:
mprage:
mri_aparc2aseg:
mri_ca_label:
mri_ca_normalize:
mri_ca_register:
mri_edit_wm_with_aseg:
mri_em_register:
mri_fill:
mri_mask:
mri_normalize:
mri_pretess:
mri_remove_neck:
mri_segment:
mri_segstats:
mri_tessellate:
mri_watershed:
mris_anatomical_stats:
mris_ca_label:
mris_fix_topology:
mris_inflate:
mris_make_surfaces:
mris_register:
mris_smooth:
mris_sphere:
mris_surf2vol:
mrisp_paint:
openmp: 8
parallel:
steps:
subject_id: recon_all
subjects_dir:
talairach:
use_FLAIR:
use_T2:
xopts:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 524, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 642, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node autorecon1.
RuntimeError: subprocess exited with code 1.
I followed the further debugging steps recommended by @smeisler.
To do this, I ran the same bash script I had been using, but changed singularity run to singularity shell with the same arguments. In the active shell session I ran:
cat /local/fmriprep_working_dir/fmriprep_22_0_wf/single_subject_xxxxx_wf/anat_preproc_wf/surface_recon_wf/autorecon1/command.txt
Which returned
recon-all -autorecon1 -T2 /hcs/Nesi/Bryn/data/NKI-RS-tsv-fix/sub-xxxxx/ses-BAS2/anat/sub-xxxxx_ses-BAS2_T2w.nii.gz -noskullstrip -noT2pial -noFLAIRpial -openmp 8 -subjid sub-xxxxx -sd /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer
Upon running this command, I received an error about FreeSurfer missing the licence file. I assumed this was because I was running in a shell session, and exported the location of my FreeSurfer licence:
export FS_LICENSE=/hcs/Nesi/Bryn/NKI-singularity/fs_licence/license.txt
I then ran the recon-all command again and received the following error:
Current Stamp: freesurfer-linux-ubuntu18_x86_64-7.2.0-20210721-aa8f76b
INFO: SUBJECTS_DIR is /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer
Actual FREESURFER_HOME /opt/freesurfer
-rwxr-xr-x 1 root root 156507 Sep 21 18:15 /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/scripts/recon-all.log
Linux NP-A397a 5.4.0-122-generic #138~18.04.1-Ubuntu SMP Fri Jun 24 14:14:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
#--------------------------------------------
#@# T2/FLAIR Input Thu Sep 22 12:36:32 NZST 2022
/hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/subxxxxx
mri_convert --no_scale 1 /hcs/Nesi/Bryn/data/NKI-RS-tsv-fix/sub-xxxxx/ses-BAS2/anat/sub-xxxxx_ses-BAS2_T2w.nii.gz /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/mri/orig/T2raw.mgz
mri_convert --no_scale 1 /hcs/Nesi/Bryn/data/NKI-RS-tsv-fix/sub-xxxxx/ses-BAS2/anat/sub-xxxxx_ses-BAS2_T2w.nii.gz /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/mri/orig/T2raw.mgz
reading from /hcs/Nesi/Bryn/data/NKI-RS-tsv-fix/sub-xxxxx/ses-BAS2/anat/sub-xxxxx_ses-BAS2_T2w.nii.gz...
TR=0.00, TE=0.00, TI=0.00, flip angle=0.00
i_ras = (-1, -0, 0)
j_ras = (-0, 1, 0)
k_ras = (-0, -0, 1)
writing to /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/mri/orig/T2raw.mgz...
#--------------------------------------------
#@# MotionCor Thu Sep 22 12:36:34 NZST 2022
ERROR: no run data found in /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/mri. Make sure to
have a volume called 001.mgz in /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/mri/orig.
If you have a second run of data call it 002.mgz, etc.
See also: http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Conversion
Linux NP-A397a 5.4.0-122-generic #138~18.04.1-Ubuntu SMP Fri Jun 24 14:14:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
recon-all -s sub-xxxxx exited with ERRORS at Thu Sep 22 12:36:34 NZST 2022
For more details, see the log file /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/scripts/recon-all.log
To report a problem, see http://surfer.nmr.mgh.harvard.edu/fswiki/BugReporting
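For context, the MotionCor failure above comes down to recon-all's input convention: it expects at least one input volume named 001.mgz under <subjects_dir>/<subject>/mri/orig. A minimal sketch of that check (the path is a placeholder and the mkdir is demo setup only):

```shell
# Demo: recon-all aborts at MotionCor unless <subject>/mri/orig/001.mgz exists.
subj_dir=demo_subjects/sub-demo        # placeholder SUBJECTS_DIR/subject
mkdir -p "$subj_dir/mri/orig"          # demo setup only
if ls "$subj_dir/mri/orig" | grep -q '^001\.mgz$'; then
    echo "input volume present"
else
    echo "no 001.mgz: recon-all will abort at MotionCor"
fi
```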
@BrynGibson Do you have full FS outputs in /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/? The error indicates there are not. Are you using previously made FS outputs (and if so, which version)? Have you tried rerunning without previous outputs and a clean working directory?
I'll also note that the error you are getting is separate from the one originally reported (which was due to some FS functions not being ported into the container).
@BrynGibson Do you have full FS outputs in /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3/sourcedata/freesurfer/sub-xxxxx/? The error indicates there are not. Are you using previously made FS outputs (and if so, which version)? Have you tried rerunning without previous outputs and a clean working directory?
I just ran again, making sure to use a clean working directory and a clean output directory. I ran into the same error again, but this time with both Node Name: _autorecon_surfs0 and Node Name: _autorecon_surfs1.
The file tree of /hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v4/sourcedata/freesurfer/sub-xxxxx/ can be seen here:
.
├── label
│ ├── lh.cortex+hipamyg.label
│ ├── lh.cortex.label
│ ├── rh.cortex+hipamyg.label
│ └── rh.cortex.label
├── mri
│ ├── antsdn.brain.mgz
│ ├── aseg.auto.mgz
│ ├── aseg.auto_noCCseg.label_intensities.txt
│ ├── aseg.auto_noCCseg.mgz
│ ├── aseg.presurf.mgz
│ ├── brain.finalsurfs.mgz
│ ├── brainmask.auto.mgz
│ ├── brainmask.mgz
│ ├── brain.mgz
│ ├── ctrl_pts.mgz
│ ├── filled.auto.mgz
│ ├── filled.mgz
│ ├── lh.surface.defects.mgz
│ ├── mri_nu_correct.mni.log
│ ├── mri_nu_correct.mni.log.bak
│ ├── norm.mgz
│ ├── nu.mgz
│ ├── orig
│ │ ├── 001.mgz
│ │ └── T2raw.mgz
│ ├── orig.mgz
│ ├── orig_nu.mgz
│ ├── rawavg.mgz
│ ├── rh.surface.defects.mgz
│ ├── segment.dat
│ ├── T1.mgz
│ ├── talairach.label_intensities.txt
│ ├── talairach.log
│ ├── transforms
│ │ ├── bak
│ │ ├── cc_up.lta
│ │ ├── talairach.auto.xfm
│ │ ├── talairach.auto.xfm.lta
│ │ ├── talairach_avi.log
│ │ ├── talairach_avi_QA.log
│ │ ├── talairach.lta
│ │ ├── talairach.m3z
│ │ ├── talairach.xfm
│ │ ├── talairach.xfm.lta
│ │ └── talsrcimg_to_711-2C_as_mni_average_305_t4_vox2vox.txt
│ ├── wm.asegedit.mgz
│ ├── wm.mgz
│ └── wm.seg.mgz
├── scripts
│ ├── build-stamp.txt
│ ├── defect2seg.log
│ ├── lastcall.build-stamp.txt
│ ├── patchdir.txt
│ ├── ponscc.cut.log
│ ├── recon-all.cmd
│ ├── recon-all.done
│ ├── recon-all.env
│ ├── recon-all.env.bak
│ ├── recon-all.error
│ ├── recon-all-lh.cmd
│ ├── recon-all-lh.log
│ ├── recon-all.local-copy
│ ├── recon-all.log
│ ├── recon-all-rh.cmd
│ ├── recon-all-rh.log
│ ├── recon-all-status-lh.log
│ ├── recon-all-status.log
│ ├── recon-all-status-rh.log
│ ├── recon-config.yaml
│ └── unknown-args.txt
├── stats
├── surf
│ ├── autodet.gw.stats.lh.dat
│ ├── autodet.gw.stats.rh.dat
│ ├── lh.defect_borders
│ ├── lh.defect_chull
│ ├── lh.defect_labels
│ ├── lh.defects.pointset
│ ├── lh.inflated
│ ├── lh.inflated.nofix
│ ├── lh.orig
│ ├── lh.orig.nofix
│ ├── lh.orig.premesh
│ ├── lh.qsphere.nofix
│ ├── lh.smoothwm
│ ├── lh.smoothwm.nofix
│ ├── lh.sulc
│ ├── lh.white.preaparc
│ ├── lh.white.preaparc.H
│ ├── lh.white.preaparc.K
│ ├── rh.defect_borders
│ ├── rh.defect_chull
│ ├── rh.defect_labels
│ ├── rh.defects.pointset
│ ├── rh.inflated
│ ├── rh.inflated.nofix
│ ├── rh.orig
│ ├── rh.orig.nofix
│ ├── rh.orig.premesh
│ ├── rh.qsphere.nofix
│ ├── rh.smoothwm
│ ├── rh.smoothwm.nofix
│ ├── rh.sulc
│ ├── rh.white.preaparc
│ ├── rh.white.preaparc.H
│ └── rh.white.preaparc.K
├── tmp
├── touch
│ ├── asegmerge.touch
│ ├── ca_label.touch
│ ├── ca_normalize.touch
│ ├── ca_register.touch
│ ├── conform.touch
│ ├── em_register.touch
│ ├── fill.touch
│ ├── inorm1.touch
│ ├── inorm2.touch
│ ├── lh.autodet.gw.stats.touch
│ ├── lh.cortex+hipamyg.touch
│ ├── lh.cortex.touch
│ ├── lh.inflate1.touch
│ ├── lh.inflate2.touch
│ ├── lh.qsphere.touch
│ ├── lh.smoothwm1.touch
│ ├── lh.smoothwm2.touch
│ ├── lh.tessellate.touch
│ ├── lh.topofix.touch
│ ├── lh.white.preaparc.touch
│ ├── nu.touch
│ ├── rh.autodet.gw.stats.touch
│ ├── rh.cortex+hipamyg.touch
│ ├── rh.cortex.touch
│ ├── rh.inflate1.touch
│ ├── rh.inflate2.touch
│ ├── rh.qsphere.touch
│ ├── rh.smoothwm1.touch
│ ├── rh.smoothwm2.touch
│ ├── rh.tessellate.touch
│ ├── rh.topofix.touch
│ ├── rh.white.preaparc.touch
│ ├── talairach.touch
│ └── wmsegment.touch
└── trash
@BrynGibson To be clear, when you say "clean output directory", are you referring to the FreeSurfer directory (in sourcedata) or the fMRIPrep output directory (I don't know your full command, but presumably in the BIDS derivatives folder)? That is, did you have fMRIPrep remake the FreeSurfer outputs?
@smeisler By "clean output directory", I mean I provide an empty directory as the output argument when singularity run is called. The following command is what was used; the out_path argument is an empty directory.
export SINGULARITY_BIND="/media/hcs-sci-psy-narun/:/hcs/,/home/gibbr625/:/local/"
image_path="/media/hcs-sci-psy-narun/Nesi/Bryn/NKI-singularity/singularity_images/fmriprep-22.0.1.simg"
bids_path="/hcs/Nesi/Bryn/data/NKI-RS-tsv-fix"
out_path="/hcs/Nesi/Bryn/NKI-singularity/fmriprep_out_v3"
work_dir_path="/local/fmriprep_working_dir"
participant_id="xxxxx"
license_path="/hcs/Nesi/Bryn/NKI-singularity/fs_licence/license.txt"
sudo rm -rf "/home/gibbr625/fmriprep_working_dir/"
mkdir "/home/gibbr625/fmriprep_working_dir/" # make sure the working dir is clean
singularity run --cleanenv ${image_path} \
${bids_path} \
${out_path} \
participant --participant_label ${participant_id} \
--write-graph \
--notrack \
--fs-license-file ${license_path} \
--work-dir ${work_dir_path} \
--ignore=slicetiming
@BrynGibson Can you see if it works when you have a clean working directory AND have fMRIPrep rerun FreeSurfer for you? You can simply rename the subject's FreeSurfer folder in sourcedata so fMRIPrep won't find it by default (e.g., OLD_sub-xxxx).
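For concreteness, the rename suggested above could look like this (a sketch with placeholder paths; the mkdir is demo setup only, since in practice the subject folder already exists):

```shell
fs_dir=sourcedata/freesurfer
mkdir -p "$fs_dir/sub-xxxx"            # demo setup only
# Rename so fMRIPrep cannot find (and reuse) the old outputs:
mv "$fs_dir/sub-xxxx" "$fs_dir/OLD_sub-xxxx"
ls "$fs_dir"
```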
Just following up: I updated to 22.0.2 and ran again. This time I received an informative error message about failing to create symlinks in the output directory. I changed the output directory to a local drive instead of a network drive (which didn't support symlinks) and fMRIPrep ran successfully.
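That symlink failure mode is easy to probe ahead of time: try creating a symlink on the candidate output filesystem before launching a long run. A sketch (network shares, as in this case, are the usual culprits):

```shell
# Probe whether the current filesystem supports symlinks; the fMRIPrep
# output step failed here because the network drive did not.
probe=$(mktemp -d ./symlink_probe.XXXXXX)
touch "$probe/target"
if ln -s target "$probe/link" 2>/dev/null; then
    echo "symlinks supported"
else
    echo "no symlink support: use a local drive for the output directory"
fi
rm -rf "$probe"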
Thanks! Hopefully this is resolved now, but in any case it's getting long enough that we should address further problems in new issues.
What happened?
Autorecon 1 appears to be failing and not providing a helpful error message.
What command did you use?
What version of fMRIPrep are you running?
22.0.0rc0
How are you running fMRIPrep?
Singularity
Is your data BIDS valid?
Yes
Are you reusing any previously computed results?
No
Please copy and paste any relevant log output.
Additional information / screenshots
Each subject is its own job (32 GB RAM, 8 CPUs).
Not all subjects have this error, but all subjects came from the same dataset. Looking now, I cannot think of anything that distinguishes the crashing vs. passing subjects.
CentOS 7.6
From the crash file mentioned in the log output: