Open bjeurissen opened 7 years ago
My guess is manual testing & tuning. That's @draffelt's domain though.
If said tuning was based on a 2.5 mm voxel size, that's 60 field voxels per axis. One could, I suppose, flip the equation and calculate the necessary `-b` value from the input voxel size so as to keep that many field voxels?
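The "flipped equation" suggested above can be sketched as follows. This is an illustrative helper, not MRtrix3 or ANTs API; the function name and the default of 60 field voxels per axis (i.e. 150 mm spacing at a 2.5 mm voxel size) are assumptions taken from the discussion above.

```python
def b_value_for_target(voxel_size_mm, target_field_voxels=60):
    """Hypothetical helper: pick an N4 -b spacing (in mm) so that the
    bias field grid keeps a fixed number of voxels per axis.

    Per the observation in this thread, the field grid has roughly
    (b / voxel_size) voxels per axis; inverting that relation gives
    b = target_field_voxels * voxel_size.
    """
    return target_field_voxels * voxel_size_mm

# At the presumed tuning resolution of 2.5 mm, this recovers the 150 mm default;
# at a 0.5 mm sub-millimetre voxel size it would suggest -b 30 instead.
print(b_value_for_target(2.5))  # 150.0
print(b_value_for_target(0.5))  # 30.0
```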
Yeah, that's all just ad hoc tuning... the parameters are really just designed for human heads. I'm not too fussed about this; this is really a step to be looked into manually, if users need it at all at this stage. I like having it included in the FBA pipeline for improved masks (when it works :grin:) and a bit less "bias" during `dwi2response tournier`. But in this day and age of multi-tissue processing, `mtnormalise` should pick up where `dwibiascorrect` left off, or even do the job entirely on its own.
Note that in #1148, my tuning has indeed led me to a 100 mm distance between spline control points, so that may help a bit with the issue described here. But there are also data and projects where I've instructed RAs / students / collaborators to use a particular set of N4 parameters that is not the scripted one. The scripted one is just a "relatively generally applicable" one. We could eventually allow users to set the N4 parameters via the script as well, but as I mentioned, I don't see this as an urgent issue, or an issue at all... I'm very happy with `mtnormalise` performance since we've started using it over here, and since I've gotten feedback from studies where it was used in the meantime.
The `-b` parameter of `N4BiasFieldCorrection` is interpreted in mm.
When dealing with images with sub-millimetre voxel sizes, `N4BiasFieldCorrection` will allocate an image with dimensions (150/vdim x 150/vdim x 150/vdim), which quickly results in excessive memory consumption even on HPC machines.
What was the rationale behind the 150 mm parameter?
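To make the scaling concrete, here is a minimal sketch of the field grid size implied by the allocation behaviour described above. The formula (150/vdim per axis) is taken directly from this report; the function name is illustrative, and actual N4 memory use will also depend on per-voxel storage and internal buffers not modelled here.

```python
def n4_field_voxels(b_mm, voxel_size_mm):
    """Assumed field grid size: (b / vdim) voxels per axis, cubed for 3D,
    following the allocation behaviour reported in this issue."""
    per_axis = b_mm / voxel_size_mm
    return per_axis ** 3

# With the default -b 150: a 2.5 mm image yields a 60^3 field grid,
# while a 0.5 mm image yields 300^3, i.e. 125x as many field voxels.
print(n4_field_voxels(150, 2.5))  # 216000.0
print(n4_field_voxels(150, 0.5))  # 27000000.0
```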