DCAN-Labs / abcd-hcp-pipeline

bids application for processing functional MRI data, robust to scanner, acquisition and age variability.
https://hub.docker.com/r/dcanumn/abcd-hcp-pipeline
BSD 3-Clause "New" or "Revised" License

swarm/cloud v0.1.0 #72

Closed: juansanchezpena closed this issue 2 years ago

juansanchezpena commented 2 years ago

Dear all,

We have been using our local swarm to successfully run our data with two time-point acquisitions using:

```
/usr/local/pipelines/sub-abcd-latest -i /MRI_DATA/nyspi/xxxx/rawdata -o /MRI_DATA/nyspi/xxxx/derivatives/abcd -l /MRI_DATA/nyspi/xxxx/derivatives/abcd/sub-xxxx.log -a " --participant-label xxxx --all-sessions " -p sub-xxxx
```

(The raw data is sub-xxxx/ses-xxxxxx and ses-yyyyyyyy. The first session has anat and bold, the second just bold.)

The process runs to completion (through DCAN/Exec Summary) for all runs on the local swarm. When we run the same image on AWS we fail to reproduce the results and consistently get an fMRI Volume error:

```
Traceback (most recent call last):
  File "/app/run.py", line 374, in <module>
    _cli()
  File "/app/run.py", line 68, in _cli
    return interface(**kwargs)
  File "/app/run.py", line 370, in interface
    stage.run(ncpus)
  File "/app/pipelines.py", line 588, in run
    self.teardown(result)
  File "/app/pipelines.py", line 534, in teardown
    self.__class__.__name__)
Exception: error caught during stage: FMRIVolume
```
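For anyone reading along without access to the `/usr/local/pipelines/sub-abcd-latest` wrapper (which is site-specific), the equivalent direct invocation of the published container would look roughly like the sketch below. The mount paths and the FreeSurfer license location are examples, not taken from the wrapper:

```shell
# Sketch of a direct BIDS-App invocation of the same image.
# All host paths here are examples; --participant-label and
# --all-sessions match the flags passed via -a above.
docker run --rm \
  -v /MRI_DATA/nyspi/xxxx/rawdata:/bids_input:ro \
  -v /MRI_DATA/nyspi/xxxx/derivatives/abcd:/output \
  -v /path/to/freesurfer/license.txt:/license.txt:ro \
  dcanumn/abcd-hcp-pipeline /bids_input /output \
  --freesurfer-license=/license.txt \
  --participant-label xxxx --all-sessions
```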



There are no errors in the .err files under logs, but the .out files seem to stop before completion in some cases, for both sessions.
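When a stage dies with only "error caught during stage", the underlying cause is usually buried earlier in that stage's own log files rather than in the top-level .err files. A minimal sketch for surfacing it (the log directory path is a placeholder; point it at your derivatives logs tree):

```shell
#!/bin/sh
# Sketch: report the first error-looking line in each pipeline log file.
# Default "." is a placeholder; pass your logs directory, e.g.
# /MRI_DATA/nyspi/xxxx/derivatives/abcd/sub-xxxx/ses-xxxxxx/logs
logdir="${1:-.}"
grep -rin -m1 -E "error|exception|traceback" "$logdir" 2>/dev/null | head -n 5
```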

Have you seen anything like this before with multi-session data?

Thanks,
Juan
juansanchezpena commented 2 years ago

This was a disk-space issue on AWS; it has been resolved.
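For future readers hitting the same symptom: a quick pre-flight free-space check on the output volume can catch this before the FMRIVolume stage fails opaquely. A minimal sketch; the 20 GB threshold is a guess for illustration, not a documented per-subject requirement of the pipeline:

```shell
#!/bin/sh
# Sketch: warn if the volume holding the output directory is low on space.
# The 20 GB threshold is an assumed placeholder, not a pipeline spec.
outdir="${1:-.}"
avail_kb=$(df -Pk "$outdir" | awk 'NR==2 {print $4}')
need_kb=$((20 * 1024 * 1024))   # 20 GB expressed in KB
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "WARNING: only $((avail_kb / 1024 / 1024)) GB free under $outdir"
else
    echo "OK: $((avail_kb / 1024 / 1024)) GB free under $outdir"
fi
```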