Converting ADNI dataset

Hello, I'm not sure if anyone else has run into this issue. I've set up the heuristics file without any problems and it runs fine for the most part. That is, until I deal with a subject with a large number of DICOM files, e.g. 9000+.
It gave me the error "Argument list too long". After a bit of googling, I think it may be due to a kernel limit on the amount of space allowed for arguments to a command, so I tried setting a higher limit: ulimit -s 65536.
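In case it's relevant, I believe the limits involved can be inspected with something like the following (the comments reflect my understanding and may not be exact):

getconf ARG_MAX                   # total space the kernel allows for argv + environment, in bytes
ulimit -s                         # stack limit in KiB; on Linux the argument space is derived from this (roughly a quarter of it)
xargs --show-limits </dev/null    # GNU xargs prints the effective command-line length limits it sees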
While I no longer get the error, no actual .nii files are produced, even though the logging on the command line leads me to believe the conversion was done properly:
INFO: Doing conversion using dcm2niix
INFO: Populating template files under /home/user/ADNI/Nifti/
INFO: PROCESSING DONE: {'subject': '0178', 'outdir': '/home/user/ADNI/Nifti/', 'session': 'itbs'}
It does create the typical top-level BIDS files:
dataset_description.json
participants.json
participants.tsv
scans.json
README
CHANGES
Can someone please suggest how to get around this? It may be that the original source dataset is a bit messed up, because I'm not sure 9000+ files per subject is very common, but there are a number of patients that run into this limitation. Thank you!
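For what it's worth, would something like the following make sense, i.e. pointing heudiconv at the subject's DICOM directory rather than letting the shell expand thousands of individual filenames onto the command line? (The source path here is just a placeholder for wherever the subject's DICOMs actually live, and I may have the flag combination slightly wrong.)

# pass the directory itself with --files so heudiconv walks it internally
heudiconv --files /home/user/ADNI/sourcedata/0178/ \
    -s 0178 -ss itbs \
    -f heuristic.py \
    -c dcm2niix -b \
    -o /home/user/ADNI/Nifti/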
Platform details:
Choose one:
[x] Local environment
[ ] Container
Heudiconv version: 1.1.6