shnizzedy opened 1 year ago
Where we're hitting the error, it's in the `abcd` method of applying the functional registration template. Looking at the code, I think the `single_step_resampling_from_stc` and `dcan_nhp` methods will also hit that error, but I think using this attached config could work ― the `default` method cuts the 4D files into 10-second 4D files to transform, while the other methods cut the 4D files into 3D files, so there'll be 1/10 as many files to recombine with the `default` method, and then everything else will be the same as in `abcd-options`:
```yaml
FROM: abcd-options
pipeline_setup:
  pipeline_name: cpac_abcd-options_default-func-transform
registration_workflows:
  functional_registration:
    func_registration_to_template:
      run: On
      apply_transform:
        using: default
```
Describe the bug
I believe the `$ARG_MAX` limit varies by system, but with long timeseries, where we split into individual timepoints in https://github.com/FCP-INDI/C-PAC/blob/de6407454d35e0cbf0434f7bc34fe44b924e2819/CPAC/registration/registration.py#L3844-L3847 and merge them back into a 4D image in https://github.com/FCP-INDI/C-PAC/blob/de6407454d35e0cbf0434f7bc34fe44b924e2819/CPAC/registration/registration.py#L3935-L3939, we can hit `OSError: [Errno 7] Argument list too long: '/bin/sh'`.
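For context, here's a minimal sketch of how the limit can be queried and checked ahead of time. The `command_fits` helper and the path pattern are hypothetical illustrations, not C-PAC code:

```python
import os

# The kernel's limit on the combined size of argv + environment for a new
# process; exceeding it is what raises
# "OSError: [Errno 7] Argument list too long".
arg_max = os.sysconf("SC_ARG_MAX")

def command_fits(paths, base_cmd="fslmerge -t merged.nii.gz"):
    """Rough check: would base_cmd plus all the file paths fit under ARG_MAX?

    This under-counts slightly (the environment also counts against the
    limit), so a near-miss should be treated as a failure too.
    """
    total = len(base_cmd) + sum(len(p) + 1 for p in paths)  # +1 per separator
    return total < arg_max

# e.g. ~2000 timepoints with long absolute working-directory paths
paths = [
    f"/long/working/dir/applywarp_func_to_standard_/mapflow/vol{i:04d}.nii.gz"
    for i in range(2000)
]
print(command_fits(paths), arg_max)
```

Whether ~2000 timepoints actually overflows depends on how long the working-directory prefix is on a given system, which is why different users hit this on different platforms.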
Forcing relative paths via https://github.com/FCP-INDI/C-PAC/blob/2a9750c1a0075e5ba3f50ef3f54686d9357591b5/CPAC/utils/interfaces/fsl.py#L18-L25 was enough to resolve the issue with the data I tested on brainlife.io, but users with ~2000-TR data are hitting the issue on bridges2.
To reproduce
Run a config with the `abcd` functional registration transform on data with ~2000 timepoints
Preconfig
Custom pipeline configuration
Expected behavior
No crash
Acceptance criteria
Merging succeeds regardless of the system's `$ARG_MAX`
C-PAC version
v1.8.5
Container platform
Singularity
Additional context
A few ideas for how to tackle this, labeled with capital letters and capital Roman numerals:
A. Check against `$ARG_MAX`

If the command length is < `$ARG_MAX`, proceed as we have been. Otherwise, do one of these other things.

B. Do the other thing regardless of `$ARG_MAX`

Just have one way of merging that is robust to OS command-length limitations and always use that method.
I. Further customize `FSLMerge`

Ia. with `copyfile=False`, as suggested here

Ib. `cd` into the `mapflow` directory

The relative paths all go up a level and down into a nibling `mapflow` directory, like `../applywarp_func_to_standard_222_/mapflow/`. If we `cd` into that directory, we'd save ~33 characters per timepoint (I think Ia would save the same)

II. Use a Python merge utility

IIa. `nibabel.funcs.concat_images`, as suggested here and here

IIb. `nilearn.image.concat_imgs`, as suggested here
I'm sure there are other ways around this, too.