Update: I attempted to run the analysis on a cluster node with a bit more memory to see if that could get around the `MemoryError`, and that produced a new error:
```
Current processing step:
Temporally filtering image and confounds
····································································
· [butterworth filter]
· [High pass frequency: 0.01]
· [Low pass frequency: 0.2]
· Interpolating over masked-out epochs...
· This will be slow
/tmp/sub-03_run-1-regress-836821185~TEMP~_DSP_DMT.nii.gz
Voxel bin 1 out of 75
Traceback (most recent call last):
  File "/xcpEngine/utils/interpolate.py", line 197, in <module>
    term_recon[i,:,:] = np.outer(term_prod[i,:],s[i,:])
ValueError: could not broadcast input array from shape (649,3000) into shape (648,3000)
Error in asNifti.default(image, internal = TRUE) :
  Failed to read image from path /tmp/sub-03_run-1-regress-836821185~TEMP~_DSP_DMT_TMP_interpol.nii.gz
Calls: dumpNifti -> asNifti -> asNifti.default -> .Call
In addition: Warning message:
In asNifti.default(image, internal = TRUE) :
  nifti_image_read: failed to find header file for '/tmp/sub-03_run-1-regress-836821185~TEMP~_DSP_DMT_TMP_interpol.nii.gz'
Execution halted
· [A major error has occurred.]
· [The processing stream will now abort.]
· [Preparing diagnostics]
····································································
```
On further examination, the error occurs whether or not I use AROMA, so it may be a problem with `interpolate.py` that is independent of AROMA.
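If it helps anyone reading along, the traceback points at a one-row mismatch between a preallocated array and the result of `np.outer`. Here is a minimal sketch that reproduces the same `ValueError` (illustrative shapes only, not the actual `interpolate.py` variables):

```python
import numpy as np

# Illustrative shapes only -- not the real interpolate.py arrays.
n_terms, n_vox = 649, 3000
term_prod = np.ones((2, n_terms))                # per-bin term products
s = np.ones((2, n_vox))                          # per-bin voxel signals
term_recon = np.zeros((2, n_terms - 1, n_vox))   # preallocated one row short

# np.outer(...) has shape (649, 3000), but the slice expects (648, 3000):
# ValueError: could not broadcast input array from shape (649,3000)
# into shape (648,3000)
term_recon[0, :, :] = np.outer(term_prod[0, :], s[0, :])
```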
Sorry for the late response, Jacob. With `*contig[2]=4` and `*censor[2]=1`, you are doing scrubbing. The interpolation over masked-out timepoints did not complete; the issue is likely memory, and this may be due to a high number of flagged volumes. You can check `*-nFlags.1D` to see the flagged volumes (marked with 1) and whether there are many of them. You can control this by reducing the threshold (although `fds:1.25` means 0.5 mm with a 0.4 s TR) or by reducing the number of contiguous volumes (from 4).
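To make that concrete, here is a small sketch of the threshold arithmetic and the flag count (the fds-to-mm relation follows from the example above; the single-column 0/1 layout of the nFlags file and the filename are assumptions):

```python
import numpy as np

# fds thresholds scale the FD cutoff by the TR, as in the example above:
# fds = cutoff_mm / TR, so fds:1.25 at a 0.4 s TR is a 0.5 mm cutoff.
tr = 0.4           # seconds
cutoff_mm = 0.5    # desired framewise-displacement cutoff in mm
print(f"fds:{cutoff_mm / tr:g}")   # -> fds:1.25

# Count flagged volumes; this assumes the *-nFlags.1D file is a single
# column of 0/1 values, and the filename here is a placeholder.
flags = np.loadtxt("prefix-nFlags.1D")
print(f"{int(flags.sum())} of {flags.size} volumes flagged")
```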
No problem, Azeez -- thanks for taking a look. I was able to get around the memory error by using a more powerful machine, but then I hit the `ValueError` that I mentioned above:

```
ValueError: could not broadcast input array from shape (649,3000) into shape (648,3000)
```
I also tried running with `confound2_censor_contig[2]=0` and got the same error. The number of flagged volumes is pretty low (7 total for this run), so I would hope that isn't too many to interpolate over. I've tried a number of things and I'm pretty stumped at this point. I suppose I could just avoid censoring/scrubbing, but there are some notable spikes for a few subjects, so I'd like to be able to remove those volumes if possible.
Just checking in here: is there anything else you'd suggest I take a look at? I'm happy to dig into `interpolate.py` to see if there is anything I can spot, but I didn't want to do that if there was something simpler I might be overlooking. Thanks again!
Jacob, can you please help with an example of data with TR < 1 s?
Hey Azeez, thanks for taking another look. I don't have the dataset I'm working with uploaded to a public repository yet, but I did find a dataset that was collected on the same scanner at UCSB. The TR of this one is 720 ms.
Was there any resolution to this problem?
I have a similar problem: an OOM exception when running on a cohort with 4 sessions, with these counts of flagged volumes: 78, 0, 7, 0. I am using the HCP dataset with TR = 0.72 s and a 0.2 mm FD cutoff, so the final threshold is fds:0.278.
I investigated the code a little and it looks like it's trying to allocate a ~129 GB matrix (4500 × 1200 × 3000 of float64), which is more than what I have on my HPC node. The last dimension of the matrix is the number of voxels processed simultaneously. From https://github.com/PennLINC/xcpEngine/blob/master/utils/interpolate.py#L56:
```python
parser.add_argument(
    '-v', '--voxbin', action='store', default=3000, type=int,
```
I just have to pass the flag `--voxbin 1000` in `utils/tfilter`.
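For anyone sizing their jobs, here is a rough sketch of how voxbin drives the allocation, using the dimensions reported above (the exact shapes inside `interpolate.py` may differ, and the helper name is mine):

```python
# Back-of-the-envelope size of the (terms x timepoints x voxbin) float64
# array, using the dimensions reported above.
def interp_gb(n_terms: int, n_timepoints: int, voxbin: int) -> float:
    return n_terms * n_timepoints * voxbin * 8 / 1e9  # float64 = 8 bytes

print(interp_gb(4500, 1200, 3000))  # ~129.6 GB with the default voxbin
print(interp_gb(4500, 1200, 1000))  # ~43.2 GB with --voxbin 1000
print(interp_gb(4500, 1200, 500))   # ~21.6 GB, fits on a 64 GB node
```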
Is there a simple way to patch my Singularity image with a fix for this? Would adding this as a parameter in the regress module make sense?
Feel free to use this Docker image until a better fix comes along: https://hub.docker.com/repository/docker/dhasegan/xcpengine. It was created with this Dockerfile:
```dockerfile
FROM pennbbl/xcpengine:1.2.4
RUN sed -i 's/interpolate.py /interpolate.py -v 1000 /g' /xcpEngine/utils/tfilter
ENTRYPOINT ["/xcpEngine/xcpEngine"]
```
It is better to check your data; maybe you have more volumes flagged as outliers.
Setting that voxbin to 500 fixes the problem for me, and I can now run with 64 GB of memory.
@dhasegan thank you! We will set the voxbin based on the size of the volume.
**Describe the bug**
Attempting to use censoring with AROMA.

**Cohort file**
Paste cohort file between the triple backticks

**Design File**
Paste your entire design (`.dsn`) file between the triple backticks

**Error message**
Paste your error message between the backticks

**Runtime Information**
Running the latest Docker version (pulled today) on Ubuntu 18.

**Additional context**
The pipeline runs without issue if I remove the censoring step. This is multiband data (400 ms TR), so the censoring thresholds differ from the example, to account for multiplying by the < 1 s TR to arrive at the final threshold.