**Closed**: tsalo closed this issue 3 years ago.
This crashes because the processing mask (called the "corrmask" in the code; I should rename it to "procmask") is not being calculated properly, probably because there are negative values in the temporal mean of the MNI-space functional file (this happens routinely in fMRIPrep output). The workaround is to set the corrmask explicitly, using `--corrmask out/brainmask.nii.gz` (the same mask you use for the global mean). A better solution is to fix the automatic mask generation code, which I'll look at.
A few other things: `--datatstep` is usually not needed, since the time step is already in the NIfTI header. You specify 2, but the header says 1; I assume you did that on purpose, but why is the header wrong? `--filterband lfo` is the default, so you don't need to specify it. Also, the output maps are strangely quantized. I'm not sure exactly what that means, but I suspect that since the TR in the header is wrong, slice-timing correction may have been applied incorrectly during fMRIPrep.
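Put together, a working call with the workaround might look something like this (the file names are illustrative; only the flags come from this thread):

```shell
# Illustrative paths. --corrmask is the workaround described above;
# --datatstep 2 is kept because this dataset's header TR is wrong.
rapidtide func_mni.nii.gz out/rapidtide \
    --datatstep 2 \
    --corrmask out/brainmask.nii.gz
```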
> The solution is to set the corrmask explicitly, using `--corrmask out/brainmask.nii.gz` (same mask as you use for the global mean).
That worked! Thanks for the fix. Since there's a workaround, I could close this issue, unless you want to keep it open until you have a more general fix for the automatic mask generation code?
> `--datatstep` is usually not needed, since it's already in the NIFTI header. You specify 2, the header says 1 - I assume you did it on purpose, but why is the header wrong?
Yes, it looks like the version of fMRIPrep they used overwrote the TR in the header. I had to look up the TR from the `nilearn` docstring.
> `--filterband lfo` is the default, so you don't need to specify it.
Ah, good to know. I'll drop it from the example call.
> The output maps are strangely quantized. I'm not sure exactly what that means, but I suspect that since the TR in the header is wrong, slice time correction may have been applied incorrectly during fmriprep.
I think that's unlikely. fMRIPrep uses the repetition time in the metadata sidecar files, rather than metadata stored directly in the NIfTI headers.
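For anyone hitting the same mismatch, the value fMRIPrep trusts is easy to inspect. A minimal sketch (the sidecar is written inline here for illustration; in a real BIDS dataset it ships alongside the NIfTI):

```python
import json

# Write a minimal BIDS-style sidecar for illustration; in a real dataset this
# file (e.g. sub-01_task-rest_bold.json) sits next to the NIfTI image.
with open("example_bold.json", "w") as f:
    json.dump({"RepetitionTime": 2.0, "TaskName": "rest"}, f)

# fMRIPrep reads the TR from here, not from the NIfTI header.
with open("example_bold.json") as f:
    tr = json.load(f)["RepetitionTime"]
print(tr)  # 2.0
```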
Do you have any thoughts about the relative benefits of trying to patch up my masking routine vs. just adopting `nilearn.masking.compute_epi_mask()`? That would add a dependency, but there's no sense reinventing the wheel, especially a presumably well-thought-out wheel.
Also, another suggestion for your example: it's usually helpful to use a little spatial filtering when calculating the delay maps. I use half a pixel width as a rule of thumb, so `--spatialfilt 2` in this case. It tends to significantly improve the delay estimation.
> Do you have any thoughts about the relative benefits of trying to patch up my masking routine vs. just adopting `nilearn.masking.compute_epi_mask()`? That would add a dependency, but no sense reinventing the wheel, especially a presumably well-thought out wheel.
I was planning to request adding `nilearn` as a dependency anyway, since there are a few functions in `nilearn` that I think could be used instead of custom ones, like `nilearn.image.smooth_img()`, which could perhaps be used instead of `rapidtide.filter.ssmooth()`. In any case, `nilearn` is fairly stable and I think using it, when applicable, would be a good idea.
> Also, another suggestion for your example - it's usually helpful to use a little spatial filtering in calculating the delay maps - I use half a pixel width as a rule of thumb, so `--spatialfilt 2` in this case. It tends to significantly improve the delay estimation.
Will do. I plan to open a PR with the example fairly soon. I think that locally-run examples would be preferable to ones run by RTD, since `rapidtide`'s workflows are too computationally demanding for RTD's servers to run. I believe that `nilearn` uses the same approach, so I can ask one of its devs for details.
As a result of this discussion, I added a feature: if you specify a negative value for `GAUSSSIGMA`, rapidtide will set it to 0.5 times the mean voxel dimension.
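The rule of thumb is simple enough to sketch, assuming "mean voxel dimension" means the average of the three spatial zooms (this is a hypothetical helper, not rapidtide's actual code):

```python
import numpy as np

def default_gausssigma(voxel_dims_mm):
    """Hypothetical helper: half the mean voxel dimension, per the rule of
    thumb in this thread (what rapidtide applies for a negative GAUSSSIGMA)."""
    return 0.5 * float(np.mean(voxel_dims_mm))

# 4 mm isotropic voxels give a sigma of 2 mm, matching --spatialfilt 2 above
print(default_gausssigma([4.0, 4.0, 4.0]))  # 2.0
```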
**Describe the bug**
There is a histogram error when running on an example functional file from `nilearn` in MNI space.

**To Reproduce**
First, download data to use for the example, using Python:
Then, run `rapidtide` from the command line:
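The exact command was likewise not preserved here; from the flags discussed earlier in the thread, it was along these lines (file paths are illustrative):

```shell
# Reconstructed from the flags mentioned in the thread; paths are illustrative.
rapidtide func_mni.nii.gz out/rapidtide \
    --datatstep 2 \
    --filterband lfo
```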
This results in the following error:
**Expected behavior**
I expected `rapidtide` to complete, but it failed during the denoising stage.

**Desktop (please complete the following information):**
- Version: `dev` at 19563ef (should be the same as `dev`)

**Additional context**
I'm trying to mock up a Jupyter notebook with a reproducible example for the documentation, so I'd like to use data that can be fetched easily with `nilearn`.