Lestropie opened 3 years ago
This should be a fairly straightforward modification, if you want it. Just a few thoughts and concerns:
Since the dwidenoise overhaul for 3.0.0, I haven't seen such large discontinuities anymore. Therefore, I think it might not be an issue any longer.

If you were to have a go at implementing this, I would suggest a simpler approach in which dwidenoise can take a noise level map as input (bypassing the MP noise level estimation). Your suggested scenario could then be achieved with two dwidenoise steps and an intermediate filtering step. A user can then inspect the map and decide what level of smoothing to enforce. Moreover, this would also support using the true noise level map from the calibration scans at the start of the sequence (for those lucky souls who have access to in-house reconstruction).
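For concreteness, the suggested two-invocation workflow could look something like the sketch below. The `-noise_in` option is hypothetical (it is precisely the interface being proposed here), and the `mrfilter` choices are only illustrative:

```shell
# Pass 1: denoise and export the MP noise level estimate
dwidenoise dwi.mif pass1.mif -noise sigma.mif

# Intermediate filtering of the noise map (filter choices are illustrative)
mrfilter sigma.mif median - | mrfilter - smooth sigma_smooth.mif

# Pass 2: denoise again, supplying the regularised noise map
# (NOTE: -noise_in does not currently exist; it is the proposed option)
dwidenoise dwi.mif denoised.mif -noise_in sigma_smooth.mif
```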
In data with high MB factors, the g-factor maps can often show discontinuities.
Is this purely in the slice encoding axis? If so, filtering could be constrained in-plane in such cases.
I therefore think it's a good diagnostic to have, and I prefer to be alerted to a potential problem rather than smooth it out.
Whether implemented within a single invocation or performed explicitly by the user across two invocations, it wouldn't preclude access to the unfiltered noise level map for assessment.
If you were to have a go at implementing this, I would suggest a simpler approach in which dwidenoise can take a noise level map as input (bypassing the MP noise level estimation).
Yeah, that's a reasonable option. The disadvantage would be that if one wanted to write a wrapper script implementing the two-pass approach, it would have to be given a different name. Implementing the two-pass approach in C++ might not actually be that much more complex. But given that I don't intend to implement anything right now, I'm not thinking about it too hard.
In data with high MB factors, the g-factor maps can often show discontinuities.
Is this purely in the slice encoding axis? If so, filtering could be constrained in-plane in such cases.
No, g-factor maps can be discontinuous in-plane too.
Don't recall whether or not this was discussed / published anywhere, but I couldn't find it in the issue list, and wanted to write it down to get it off my mind.
Currently each voxel is processed entirely independently, with a noise level estimate from the data in that voxel then used to subtract components from the data in that voxel. The noise level is expected to vary reasonably smoothly throughout the image, and hence if voxels have estimated noise levels that differ substantially from their adjacent neighbours that would suggest an erroneous threshold estimation.
It should however be possible to instead simply enforce that smoothness. Using a first pass through the data to estimate the noise level, that image could be e.g. median filtered and smoothed to produce a more plausible noise level map, with a second pass through the data then applying the corresponding threshold for component removal.
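As a rough numerical sketch of that intermediate regularisation step (not MRtrix3 code; the array shapes, noise field, and filter sizes are arbitrary assumptions), median filtering followed by Gaussian smoothing suppresses an erroneous per-voxel estimate while preserving the smooth spatial variation of the noise level:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

rng = np.random.default_rng(0)

# Hypothetical pass-1 noise level map: a smoothly varying field plus
# small per-voxel estimation error (shapes and values are illustrative)
x, y, z = np.meshgrid(np.linspace(0, 1, 16),
                      np.linspace(0, 1, 16),
                      np.linspace(0, 1, 16), indexing="ij")
sigma_true = 1.0 + 0.5 * x                      # smooth spatial variation
sigma_est = sigma_true + 0.05 * rng.standard_normal(sigma_true.shape)
sigma_est[8, 8, 8] = 10.0                       # one grossly mis-estimated voxel

# Median filter rejects the outlier; Gaussian smoothing enforces smoothness
sigma_reg = gaussian_filter(median_filter(sigma_est, size=3), sigma=1.0)
```

The regularised map `sigma_reg` could then drive the threshold for component removal in the second pass, as described above.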