bbfrederick / rapidtide

rapidtide - a suite of programs for doing time lag correlation analysis on fMRI data
Apache License 2.0

rapidtide stops without error messages #116

Closed Ludoalevesque closed 1 year ago

Ludoalevesque commented 1 year ago

Describe the bug I'm still very new to this so I might be doing something dumb, but when I run rapidtide using the docker container's latest version, it seems to be launching, but it quickly stops without ever giving me information on why it stopped. Here is what's in the log:

```
running single process - disabled shared memory use
setting internal precision to double
setting output precision to single
input file is NIFTI
oversample factor set to 3
fmri data: 575 timepoints, tr = 1.0749996900558472, oversamptr = 0.3583332300186157
271633 spatial locations, 575 timepoints
```


Additional context I thought it might be a memory issue on my personal computer, but I ran it with the same data and the same version on a Mac and the same thing happened.

The command I want to use is:

```
docker run \
    --mount type=bind,source="${input_dir}",destination=/data_in \
    --mount type=bind,source="${output_dir}",destination=/data_out \
    -it \
    fredericklab/rapidtide:latest \
    rapidtide \
    "/data_in/${bold_file}" \
    "/data_out/${sub}" \
    --filterband lfo \
    --delaymapping \
    --passes 3
```

bbfrederick commented 1 year ago

That's weird - from what you sent, it is finding your file, so it doesn't seem to be a mounting problem.

Does it terminate, or just hang after printing "271633 spatial locations, 575 timepoints"?

Ludoalevesque commented 1 year ago

It terminates. Sometimes I also get these two more lines before it stops:

```
startpoint set to minimum, (0)
endpoint set to maximum, ( 574 )
```

bbfrederick commented 1 year ago

When you say latest, do you literally mean ":latest", or ":v2.6.1"?

Ludoalevesque commented 1 year ago

I pulled the one tagged latest yesterday; I'll try again with v2.6.0.

Ludoalevesque commented 1 year ago

I get the same thing with the v2.6.0 version.

bbfrederick commented 1 year ago

Actually, I just tested both, and didn't get the problem. So it's either the data, or it's the intel version of the container (I tested it on an ARM Mac). The latter seems unlikely, so it's probably a feature of the data. How was it preprocessed? I've run into problems before when data doesn't conform to my preconceived notions of what NIFTI fMRI data should look like...

Ludoalevesque commented 1 year ago

It's an output from fmriprep from which I regressed out the confounds.

bbfrederick commented 1 year ago

Ok, well it should be perfectly happy with that. Is there a way you could send me the dataset so I could take a look? Also, when I get home I can try running it on an intel Mac - it COULD be a problem in the container, so I should rule that out, but that still seems kind of unlikely.

If you're going to try an older container, try 2.3.1. That's the last intel-only container.

Ludoalevesque commented 1 year ago

Ok, I will try with that container. Concerning the data, the main steps it went through are head motion correction, slice timing correction, SDC (susceptibility distortion correction), and normalization. It's in standard space (MNI152NLin6Asym). The file I used corresponds to the 'space-MNI152NLin6Asym_desc-preproc_bold.nii.gz' file in the fmriprep outputs for functional data, from which I regressed out the confounds ('desc-confounds_timeseries.tsv') using nilearn and the load_confounds_strategy API. I will send you the data via email. Thank you
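
(For reference, that confound-regression step can be done with something like the sketch below. The file names are placeholders, and the exact calls are an assumption based on nilearn's fMRIPrep interface, nilearn >= 0.9.)

```python
# Hypothetical sketch of the confound regression described above.
from nilearn.image import clean_img
from nilearn.interfaces.fmriprep import load_confounds_strategy

bold = "sub-01_task-rest_space-MNI152NLin6Asym_desc-preproc_bold.nii.gz"  # placeholder

# Finds the matching desc-confounds_timeseries.tsv for this BOLD file and
# returns a ready-made confound matrix for the chosen denoising strategy.
confounds, sample_mask = load_confounds_strategy(bold, denoise_strategy="simple")

# Regress the confounds out of every voxel's time series.
denoised = clean_img(bold, confounds=confounds, detrend=False, standardize=False)
denoised.to_filename("denoised_bold.nii.gz")
```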

bbfrederick commented 1 year ago

I think your docker container is running out of memory. I was able to run your data successfully in a 2.6.1 docker container with 24 GB of RAM and 1 GB of swap. Your file is pretty big - 575 time points at 2 mm isotropic is going to be several GB, and rapidtide keeps multiple copies of the data around, so it would make sense that you'd need to allocate a fairly sizeable amount. If you don't have that much RAM, then try upping the swap space.
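
(To put rough numbers on that, using the figures from the log above - the per-copy size follows from the log, but the number of in-memory copies is an assumption for illustration, not rapidtide's actual internals:)

```python
# Back-of-the-envelope RAM estimate from the numbers in the log.
voxels = 271633      # "271633 spatial locations"
timepoints = 575     # "575 timepoints"
itemsize = 8         # "setting internal precision to double" -> 8 bytes/value
copies = 4           # assumed number of working copies of the 4-D data

total_gb = voxels * timepoints * itemsize * copies / 1024**3
print(f"~{total_gb:.1f} GB")  # ~4.7 GB for the 4-D data alone
```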

Ludoalevesque commented 1 year ago

Thank you for the feedback! I had ruled out a memory issue since I tried it on another computer with much better specs than mine and it still crashed, but I realise that I underestimated the memory needed. I will run it again on a high-performance cluster with the resources you suggested.

Ludoalevesque commented 1 year ago

I tried it with more memory and it worked! Thank you. I think an "Out of memory" error message would be a nice feature to add eventually. Thanks again for taking the time to try my data.

bbfrederick commented 1 year ago

I'm surprised it didn't generate one, but I can certainly add an explicit check. Silent failures are not helpful.
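
(One possible shape for such a check - a sketch, not rapidtide's actual code; psutil and the copy-count heuristic are assumptions:)

```python
import sys

import psutil  # assumed dependency for querying available system memory


def check_memory(nvoxels, ntimepoints, itemsize=8, copies=4):
    """Exit with an explicit message if the data clearly won't fit in memory."""
    needed = nvoxels * ntimepoints * itemsize * copies
    available = psutil.virtual_memory().available
    if needed > available:
        sys.exit(
            f"Estimated memory requirement ({needed / 1024**3:.1f} GB) exceeds "
            f"available memory ({available / 1024**3:.1f} GB) - increase RAM or "
            "swap before rerunning."
        )


# For the dataset in this issue:
check_memory(271633, 575)
```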