sina-mansour closed this issue 2 years ago
This might reflect differences in the implementation of OLS between software packages, particularly with respect to how negative eigenvalues are treated in regions with high anisotropy and low SNR; i.e. how to deal with the case where the tensor is not positive definite. I suspect that this would only affect a small proportion of voxels and probably won't matter for tract-averaged FA estimates.
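To illustrate the kind of divergence this can produce, here's a toy sketch (not any package's actual code) of one common handling strategy: clamping negative eigenvalues to zero before computing FA. A package that instead keeps the negative eigenvalue, or rejects the voxel, would report a different FA in exactly these low-SNR, high-anisotropy voxels:

```python
import numpy as np

# Toy tensor fit with one negative eigenvalue (non-positive-definite),
# as can happen with OLS in high-anisotropy, low-SNR voxels.
D = np.array([[1.5e-3, 0.0,    0.0],
              [0.0,    4.0e-4, 0.0],
              [0.0,    0.0,   -1.0e-4]])

evals = np.linalg.eigvalsh(D)
evals = np.clip(evals, 0.0, None)  # one possible strategy: clamp negatives to zero

# Standard FA formula from the (clamped) eigenvalues
md = evals.mean()
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
print(round(fa, 4))
```

Whether a package clamps, leaves the negative eigenvalue in place, or refits with a constrained model changes FA only in these degenerate voxels, consistent with the expectation above that tract-averaged estimates are largely unaffected.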
`dwi2tensor` by default does a reweighted least squares with 2 iterations. This is what's been found to be best, with a lot more experimentation than what you'd be willing to invest here. FSL's `dtifit` by default does just an ordinary least squares.
See justification in: https://www.sciencedirect.com/science/article/pii/S1053811913005223
I would consider the former to be "preferable"; it's a question of whether it's sufficiently different to justify the pipeline complexity of recomputing such parametric maps, whereas using what's already precomputed has a little more elegance to it.
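For intuition on the difference between the two estimators, here's a simplified numpy sketch of a log-linear tensor fit done both ways; this is my own toy construction (synthetic single-voxel signal, diagonal true tensor), not MRtrix's actual `dwi2tensor` implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

bval = 1000.0
g = rng.normal(size=(30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)   # 30 unit gradient directions

# Design matrix: parameters are [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz, ln(S0)]
B = np.column_stack([
    g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
    2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
])
X = np.column_stack([-bval * B, np.ones(len(g))])

p_true = np.array([1.7e-3, 3e-4, 3e-4, 0.0, 0.0, 0.0, 0.0])  # prolate tensor
S = np.exp(X @ p_true) + 0.005 * rng.normal(size=len(g))     # noisy signal
y = np.log(np.clip(S, 1e-6, None))

# Ordinary least squares on the log-signal (dtifit's default approach):
p_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Reweighted least squares: weight rows by the predicted signal and
# refit, twice, to counter the heteroscedasticity introduced by the
# log transform (cf. the Veraart et al. paper linked above):
p = p_ols
for _ in range(2):
    w = np.exp(X @ p)                           # weights ∝ predicted signal
    p, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

print(p_ols[0], p[0])   # both should be near the true Dxx = 1.7e-3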
> I suspect that this would only affect a small proportion of voxels and probably won't matter for tract-averaged FA estimates.
Yeah, I agree, the maps are mostly similar, although they have some differences. I wouldn't think it'd make a big difference.
> This is what's been found to be best, with a lot more experimentation than what you'd be willing to invest here.
Well, I think this would be a good justification to use `dwi2tensor` then. This would additionally enable mapping a wider range of metrics that are not already provided by UKB.
UKB only provides a limited set of metrics: {FA, L1, L2, L3, MD, MO, S0, V1, V2, V3} (some of which I couldn't find a description for).
Alternatively, by using `dwi2tensor` followed by `tensor2metric` we could generate an extended range of measures (ideally to then choose from in #10). Here's a list of what we could potentially have:
UKB-provided metrics will be outputs of FSL `dtifit`; see documentation here. I'm not sure how they're calculating the "mode", but it's something that could be implemented for `tensor2metric` if desired and you could find the source.
I'm not sure how useful the mode statistic would be, but it could definitely be a useful addition to `tensor2metric`. Apparently the mode of anisotropy is a measure ranging from -1 (planar) to +1 (linear).
According to this paper:

> One candidate tensor shape metric is the mode of anisotropy (MA), not to be confused with the statistical term mode denoting the most frequent item in a set. MA is mathematically orthogonal to FA and quantifies second-order geometric properties, notably resolving whether anisotropy is more planar (e.g., due to predominant crossing fibers within a voxel) or more linear (see Figure S1, available online).
This paper describes it using the following formulation ($K_3$ denotes the mode of anisotropy):
Also, this slide from FMRIB (page 16) describes it as the third moment of the tensor:
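As a concrete check of the -1 (planar) to +1 (linear) range described above, here's a small sketch computing the mode from tensor eigenvalues, following my reading of the formulation as the normalised determinant of the deviatoric (trace-free) part of the tensor:

```python
import numpy as np

def mode_of_anisotropy(evals):
    # MA = 3*sqrt(6) * det(A / ||A||), with A the deviatoric part of
    # the tensor; in the eigenframe the determinant is just a product.
    evals = np.asarray(evals, dtype=float)
    dev = evals - evals.mean()          # deviatoric eigenvalues
    norm = np.linalg.norm(dev)
    if norm == 0.0:
        return 0.0                      # isotropic tensor: mode undefined, take 0
    return 3 * np.sqrt(6) * np.prod(dev / norm)

print(mode_of_anisotropy([1.0, 0.1, 0.1]))  # linear tensor -> +1.0
print(mode_of_anisotropy([1.0, 1.0, 0.1]))  # planar tensor -> -1.0
```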
It's mostly studied at the voxel level, so I'm not sure if it would also be a valuable connectivity-level metric.
I think that we should prioritise streamline-based connectivity for upload to UKB. My sense is that tract-averaged measures derived from the tensor are not commonly used as a measure of connectivity.
I would also note on the tensor fit side of things, following discussion in #10, I have some recollection that for the UKB data they used only b=1000 data for tensor fitting?
> I have some recollection that for the UKB data they used only b=1000 data for tensor fitting?
Yes, that is correct. I was using all b-values, simply estimating the tensor from the multishell DWI image: (https://github.com/sina-mansour/UKB-connectomics/blob/main/scripts/bash/probabilistic_tractography_native_space.sh#L212-L217)
Which approach do you reckon is more appropriate?
> Which approach do you reckon is more appropriate?
For a rank-2 tensor, using only the b=1000 data would be more faithful to prior literature, and arguably mitigates violation of the Gaussianity assumption intrinsic to that model, so I'd probably lean towards that. Obviously for higher-order models, i.e. DKI, you have no choice but to use all b-values.
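For the b=1000-only route, the volume selection itself is simple; here's a toy sketch assuming b-values read from an FSL-style .bval file (in MRtrix the same restriction can be applied upstream with `dwiextract` and its shell-selection option, so this is just the logic, not the recommended pipeline step):

```python
import numpy as np

bvals = np.array([5, 0, 1000, 1005, 2000, 1995, 995, 0, 2010])  # toy values

tol = 100  # shell tolerance; UKB nominal shells are b = 0, 1000, 2000
keep = (bvals < tol) | (np.abs(bvals - 1000) < tol)   # b=0 and b=1000 only
print(np.flatnonzero(keep).tolist())                  # volume indices to keep
```

A tolerance is needed because the effective b-values in the acquisition jitter around the nominal shell values.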
I finally decided to only use the DTI metrics already provided by the UK Biobank (FA, MD, MO, S0, and the NODDI measures). Given that most studies are likely to use the other connectivity metrics we derive (streamline count or fiber bundle capacity), I tried to implement the most straightforward approach, which also avoids any future need to justify using our own set of computed DTI measures when UKB had provided similar ones. (See #10 for the list of all final measures included.)
I have been working on mapping various connectivity measures from the estimated streamlines. (following discussions on #10)
I noticed that the UK Biobank has already computed some particular tensor metrics (such as FA and MD), but given that this was not an extensive set of all possible measures we could include, and that the procedure is relatively fast, I tried estimating a tensor and computing such measures using `dwi2tensor` and `tensor2metric`.

After running this, I noticed that there were some inconsistencies between the measures we mapped and the provided ones. For instance, the generated FA maps were not exactly the same and had slight variations (check the point highlighted by the cross in the pictures below). Furthermore, while the visual patterns were mostly similar, the absolute values of FA differed significantly between the two:
FA computed by mrtrix commands:
FA provided by UKB:
So I just wanted to ask if you think that this is expected or not.
Also please let me know your thoughts about estimating the DTI metrics ourselves or using the metrics already provided.
Furthermore, @Lestropie, I used the default options of `dwi2tensor`; do you reckon that's a good decision, or should we change the defaults (using `-ols` or `-iter`)?