dchristiaens opened this issue 9 years ago
Yeah this one pops up fairly regularly in discussion.
My main thought is that the nonstationarity of the gradient directions isn't as much of a problem as the variability in the b-values. Even if you recalculated the pseudoinverse independently for each voxel based on a per-voxel gradient table, the actual response profile would still be wrong.
I haven't looked at the specific details to assess the feasibility, but if I were to attack this myself, I reckon I'd be doing a q-space re-gridding... or 're-shelling', as it were. Far from perfect, as you'd be relying on a projection of the signal decay as a function of b-value, but it might be better than doing nothing.
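To make the 're-shelling' idea concrete, here's a rough sketch (plain NumPy, nothing in MRtrix) under a deliberately crude mono-exponential assumption per measurement; the function name, and using the mean b=0 signal as the decay reference, are purely illustrative:

```python
# Rough sketch of "re-shelling": project each DWI measurement from its
# effective b-value onto the nominal shell, under a crude mono-exponential
# assumption S(b_nom) = S(b_eff) * exp(-(b_nom - b_eff) * D) per measurement.
# Names are illustrative, not MRtrix API.
import numpy as np

def reshell_voxel(signal, b_eff, b_nom, b0_mean, eps=1e-10):
    """signal:  (N,) DWI measurements in one voxel (b > 0 volumes only)
       b_eff:   (N,) effective (spatially varying) b-values
       b_nom:   (N,) nominal b-values of the acquisition
       b0_mean: mean b=0 signal in this voxel, used as the decay reference"""
    # crude per-measurement decay rate relative to the b=0 reference:
    D = -np.log(np.maximum(signal, eps) / max(b0_mean, eps)) / np.maximum(b_eff, eps)
    # project the measured signal along the fitted decay onto the nominal b-value:
    return signal * np.exp(-(b_nom - b_eff) * D)
```

The obvious weakness is exactly the one above: the decay rate is direction-dependent, so any per-measurement projection of the signal along b can only ever be approximate.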
Yes, we've been thinking about how best to handle this for a while, and it's not trivial. The performance concerns are not a problem: CSD needs to do a matrix solve per iteration anyway. The pseudoinverse is never computed, apart from getting an initial estimate of the FOD (which doesn't need to be that close anyway, so I wouldn't bother changing that). The only real extra overhead is computing the correct SH transform matrix for each voxel, but if the directions can be assumed not to change, that would be completely negligible. If you do need to recompute the directions, and hence the SH coefficients themselves, then that would add a bit of work per voxel, but only once per voxel rather than once per iteration, so probably not too big a deal.
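As a rough illustration of that last point (a NumPy/SciPy sketch using a generic real SH convention, not MRtrix's own basis or implementation), the per-voxel cost amounts to rebuilding the SH basis matrix from that voxel's directions and replacing the cached pseudoinverse with a least-squares solve:

```python
# Sketch only: per-voxel SH fit without a shared precomputed pseudoinverse.
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(dirs, lmax=8):
    """Even-order real SH basis evaluated at unit directions dirs, shape (N, 3)."""
    pol = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))   # polar angle from +z
    az = np.arctan2(dirs[:, 1], dirs[:, 0])           # azimuth
    cols = []
    for l in range(0, lmax + 1, 2):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, az, pol)           # scipy: (order, degree, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2.0) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2.0) * y.real)
    return np.stack(cols, axis=1)                      # (N, number of SH coefficients)

def initial_fod_estimate(dwi_signal, voxel_dirs, lmax=8):
    """Per-voxel initial SH fit: no precomputed pseudoinverse required."""
    B = real_sh_basis(voxel_dirs, lmax)                # rebuilt per voxel if directions vary
    coefs, *_ = np.linalg.lstsq(B, dwi_signal, rcond=None)
    return coefs
```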
The bigger issue, as Rob said, is getting the correct response function for the b-values you actually used. That requires some form of interpolation/extrapolation over b-values, which isn't obvious given the data-driven approach we've been using. Other approaches don't necessarily have that problem, since they often use a model (e.g. an axially symmetric tensor with fixed FA) that inherently provides the response as a function of b-value.
That said, this is the direction we're heading in for Ben's multi-tissue CSD. Having a multi-shell response provides a way to interpolate over the b-value domain, and so could be used to achieve what you're talking about. On top of that, there are a lot of other reasons why we'd like to have a smooth representation of the response function over both the b-value and direction domains - more on that (hopefully) at the ISMRM...
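Just to illustrate what a multi-shell response buys you (a hedged sketch, assuming the response is stored as zonal SH coefficients per nominal shell; not the actual multi-tissue CSD machinery):

```python
# Illustration only: if the response is available as zonal SH coefficients on a
# few nominal shells, each coefficient can be interpolated over b to obtain a
# response at a voxel's effective b-value.
import numpy as np
from scipy.interpolate import interp1d

def response_at_b(b_query, shell_bvals, shell_coefs):
    """shell_bvals: (S,) nominal b-values of the measured shells
       shell_coefs: (S, L) zonal SH coefficients (l = 0, 2, ..., lmax) per shell"""
    f = interp1d(shell_bvals, shell_coefs, axis=0,
                 kind="linear", fill_value="extrapolate")
    return f(b_query)   # (L,) response coefficients at the requested b-value
```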
Ah, I would simply apply the closest rotation matrix from the polar decomposition (a.k.a. the finite strain method) to keep the b-values unchanged. But you're right, of course: a general affine transformation would modify the b-values.
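For reference, a minimal sketch of that finite-strain step (illustrative only, not MRtrix code), assuming the local transformation is available as a 3x3 matrix:

```python
# Keep only the rotation component of the local 3x3 transformation (via polar
# decomposition) and apply it to the gradient directions; b-values are untouched.
import numpy as np
from scipy.linalg import polar

def reorient_bvecs(local_affine, bvecs):
    """local_affine: (3, 3) local transformation; bvecs: (N, 3) unit gradient directions."""
    R, _ = polar(local_affine)        # A = R @ S, with R the closest orthogonal matrix
    if np.linalg.det(R) < 0:          # enforce a proper rotation (guard against reflections)
        R = -R
    rotated = bvecs @ R.T
    norms = np.linalg.norm(rotated, axis=1, keepdims=True)
    return np.where(norms > 0, rotated / norms, rotated)   # rotation preserves |g|, hence b
```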
OK, then you're probably only talking about doing this to account for local transformations following spatial normalisation. Not sure why you'd want to do this given that we have better methods, including the approach in Dave Raffelt's MRM paper and Thijs Dhollander's extension to q-space.
What I thought you were talking about was handling the HCP data, which has relatively strong spatial b-value variations across it due to gradient non-linearities. I'm guessing the directions would also be affected to some extent, since the non-linearities are unlikely to be identical for all 3 axes, but admittedly I haven't looked into that at all...
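For what it's worth, my understanding of the usual convention for applying an HCP-style per-voxel gradient deviation tensor is that it perturbs both quantities at once; a small sketch (illustrative, not taken from MRtrix or the HCP pipelines):

```python
# The effective gradient is (I + L) g; b scales with its squared norm, and the
# effective direction is the renormalised perturbed gradient.
import numpy as np

def effective_scheme(bval, bvec, grad_dev):
    """bval: nominal b-value; bvec: (3,) nominal unit direction;
       grad_dev: (3, 3) gradient deviation tensor L in this voxel."""
    g = (np.eye(3) + grad_dev) @ bvec
    scale = np.linalg.norm(g)
    b_eff = bval * scale**2                     # b is proportional to |g|^2
    dir_eff = g / scale if scale > 0 else bvec  # perturbed unit direction
    return b_eff, dir_eff
```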
In any case, I think this is something that we are looking into, and I anticipate we'll have an infrastructure in place to deal with this type of problem relatively soon.
I was talking about processing the HCP data. However, for lack of a full affine reorientation, I thought of at least correcting the rotation part. Anyway, the transformations seem to be relatively small, and my global tracking is not affected all that much, so I'm just ignoring it for now. I look forward to the upcoming fix.
Hi guys,
Very interesting to read about the issues in incorporating the various parts of gradient nonlinearity correction. I was wondering, hoping mostly, whether there's any update on this now that we're a few months down the line...
Hi @sjoerdvos - fraid not... I don't think we're any closer now than we were then... Sorry.
Thanks Donald. Good to know for sure :)
Following up on the second part of Dan Wu's question, +1 for being able to use a voxel-specific gradient table in the reconstruction. The preprocessed HCP datasets come with a file specifying a different rotation of the gradient table in every voxel, and as far as I know, MRtrix currently does not support correcting for this. Given that many people will be using this data in the years to come, it would be a nice feature.
That being said, the current CSD implementation would require quite some changes, as you can no longer work with one precomputed pseudoinverse. Any thoughts on how to implement this with minimal changes to the codebase?