Closed by thijsdhollander 8 years ago
I agree it would be good to revisit this. In many ways, there won't be a simple answer. When this default was set, the data available wasn't great quality, so lowering the threshold generally just gave bad results. This is probably still true to a large extent, but now that high-quality data are available, this issue is going to come up more frequently. It's also true that the new response function estimation and MSMT-CSD will both make matters worse...
I'm in favour of first looking into whether we can 'fix' the response function estimation: I'm confident the response is good, but its size might be overestimated. I'm actually not sure about this, I'd need to look into whether the l=0 term genuinely is larger in the response than its average WM value - bearing in mind that the previous FA-based approach did underestimate its scale given its tendency to select low b=0 voxels due to e.g. Gibbs ringing. If we can fix the scaling, that would be my preferred option. But even if it makes a difference, it might not make enough of a difference.
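As an aside on how one might check that l=0 scaling: for an orthonormal real SH basis (an assumption here; check the convention your tools actually use), the mean of a spherical function is its l=0 coefficient times Y_0^0 = 1/(2*sqrt(pi)). So comparing the response's l=0 term against the average WM signal amounts to something like this sketch (function name is mine, not anything in MRtrix):

```python
import math

def spherical_mean(l0_coeff):
    """Mean of a spherical function given its l=0 SH coefficient,
    assuming an orthonormal real SH basis where Y00 = 1/(2*sqrt(pi))."""
    return l0_coeff / (2.0 * math.sqrt(math.pi))
```

If the response's `spherical_mean` comes out well above the mean b=0-normalised WM signal, that would support the overestimation suspicion.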
As to the influence of the hard constraint, I have to admit I've not really had a look into this either. I find it difficult to gauge how big an issue this is, given that for example, I've never noticed any particular issues tracking on a lmax=6 dataset, which presumably would also suffer from a broadening of the FOD, and overall drop in amplitude (?). I'd just need to look into it more closely...
But regardless, the central issue might be that the optimal cutoff will probably be data-dependent...
So based on the comments regarding the scaling of the response function with the new 'tournier' approach in `dwi2response`, I had a quick look into how we might fix things. What I've done is run `dwi2response` on a stock b=3000, 60 DW direction data set, with a bunch of different combinations of parameters and tweaks to the cost function. These were:
- `sqrt(peak1) * (1-peak2/peak1)` - original
- `log(peak1) * (1-peak2/peak1)` - softer weighting on peak1
- `sqrt(sqrt(peak1)) * (1-peak2/peak1)` - another way to soften the weighting on peak1
- `(1-peak2/peak1)` - no weighting on peak1

The responses obtained by these different approaches are below, along with their corresponding surface renders (all scaled identically - no normalisation of any kind):
```
248.3485958 -145.4522604 77.0812397 -30.56563143 8.572607222   # tournier (original) - cost function = sqrt(peak1) * (1-peak2/peak1)
248.3370619 -145.649475  77.46876365 -30.72276485 8.753262067  # tournier (mod) - cost function = log(peak1) * (1-peak2/peak1)
243.9578678 -142.8972574 75.05378841 -29.94105183 8.333250737  # tournier (mod) - cost function = sqrt(sqrt(peak1)) * (1-peak2/peak1)
239.2729777 -108.2260976 48.0024064  -17.98417015 4.418483309  # tournier (mod) - cost function = (1-peak2/peak1)
188.8222915 -74.7922495  35.20168277 -12.96870599 3.294163559  # FA
225.2746951 -124.0713137 64.18617821 -23.61657505 5.968218287  # FA eroded mask x3
```
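For reference, the four candidate cost functions above can be written out in plain Python (hypothetical helper names; `peak1` and `peak2` are the amplitudes of the largest and second-largest FOD peaks in a voxel):

```python
import math

def cost_original(peak1, peak2):
    # original 'tournier' cost function: sqrt(peak1) * (1 - peak2/peak1)
    return math.sqrt(peak1) * (1.0 - peak2 / peak1)

def cost_log(peak1, peak2):
    # softer weighting on peak1: log(peak1) * (1 - peak2/peak1)
    return math.log(peak1) * (1.0 - peak2 / peak1)

def cost_sqrt_sqrt(peak1, peak2):
    # another way to soften the weighting: sqrt(sqrt(peak1)) * (1 - peak2/peak1)
    return math.sqrt(math.sqrt(peak1)) * (1.0 - peak2 / peak1)

def cost_unweighted(peak1, peak2):
    # no weighting on peak1 at all: (1 - peak2/peak1)
    return 1.0 - peak2 / peak1
```

All four reward a large ratio between first and second peak; they differ only in how strongly they favour voxels with a large first peak in absolute terms.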
Surface renders (images not preserved here):

- tournier (original), `sqrt(peak1) * (1-peak2/peak1)`
- tournier, `log(peak1) * (1-peak2/peak1)`
- tournier, `sqrt(sqrt(peak1)) * (1-peak2/peak1)`
- tournier, `(1-peak2/peak1)`
- FA, standard brain mask
- FA, eroded brain mask
So clearly, using the FA approach with a standard brain mask gives radically different answers, and simply avoiding edge voxels brings the response function much closer to what the tournier method produces (difference in scaling drops from ~30% to ~10%). Note also that even without any peak1 weighting, the l=0 term of the response comes out pretty much the same (only a ~4% drop). So I think the dominant source of difference in the scaling between FA-based approaches and the tournier approach is likely to be the inherent bias in the FA-based approach, rather than the tournier approach being biased towards high intensity regions per se...
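The percentages quoted here follow directly from the l=0 terms in the table above; a quick sanity check:

```python
# l=0 response coefficients, taken from the table above
tournier_orig = 248.3485958
tournier_nowt = 239.2729777   # no peak1 weighting
fa_standard   = 188.8222915
fa_eroded     = 225.2746951

# scaling mismatch of the tournier response relative to each FA variant
diff_standard = tournier_orig / fa_standard - 1.0   # ~0.315 -> ~30%
diff_eroded   = tournier_orig / fa_eroded - 1.0     # ~0.102 -> ~10%

# drop in l=0 when the peak1 weighting is removed entirely
drop_unweighted = 1.0 - tournier_nowt / tournier_orig   # ~0.037 -> ~4%

print(f"{diff_standard:.1%} {diff_eroded:.1%} {drop_unweighted:.1%}")
# prints: 31.5% 10.2% 3.7%
```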
I also had a look at the single-fibre masks that were produced by the various procedures:
Single-fibre masks (images not preserved here):

- FA (standard mask)
- FA (eroded mask)
- tournier (original); other peak1-weighted approaches were very similar
- tournier (no peak1 weighting)
So clearly we really need to erode the mask somewhat for the FA method - the single-fibre mask is pretty bad otherwise. This was also the default recommendation in MRtrix 0.2. For the tournier approach, it's also clear we need some weighting towards high peak1, otherwise we end up with a weird single-fibre mask, and an unconvincing response function compared to the weighted version. Not sure why the single-fibre mask would come out so weird with the latter, though...
But all in all, this isn't looking as bad as I'd thought. Everything seems to suggest that the tournier approach as currently implemented is just fine. Unfortunately that means that we probably should change the default threshold... I'd need to think about this. Thoughts welcome...
OK, so it's only ~10% change due to the change in response function estimation. That's kinda like changing the tracking threshold from 0.1 to 0.09. Therefore I'd say the dominant influence (in terms of tracking failures) is going to be the combination of MSMT with ACT. Without ACT, MSMT FODs probably terminate streamlines quite happily 'close enough' to the GM-WM interface; but ACT is predicated on the partial volume at that interface being enough to get the streamlines to reach that isocontour.
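To make the "~10% is kinda like 0.1 → 0.09" equivalence concrete: if a change in response normalisation makes all FOD amplitudes smaller by some factor, then keeping the old termination behaviour means lowering the threshold by that same factor. A minimal sketch (the function name is mine, not anything in MRtrix):

```python
def matching_threshold(old_threshold, amplitude_shrink):
    """Threshold that reproduces the old termination behaviour after all
    FOD amplitudes are divided by amplitude_shrink (e.g. a ~10% larger
    response l=0 term makes amplitudes ~1.10x smaller)."""
    return old_threshold / amplitude_shrink

print(round(matching_threshold(0.1, 1.10), 4))  # prints: 0.0909
```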
So I'd be tempted to include an explicit statement in the MSMT documentation that the threshold will need to be changed away from the default if ACT is used. I reckon the default 0.1 threshold is probably fine for everything other than this specific combination, and therefore shouldn't be changed. If the 'recommended' SSST CSD implementation becomes a hard constraint down the track, maybe revisit then.
Regarding the FA algorithm: with the script implementation, it will use the mask faithfully if you provide one; if you don't, it will automatically erode the automatically-calculated brain mask by 3 voxels.
Wow, hadn't realised! Very smart. Confirmed, supplying no mask produces essentially the same results as what I got with the eroded mask.
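For reference, the "erode the brain mask by 3 voxels" behaviour can be sketched as a toy numpy implementation (6-connected erosion; this is my own illustration, not the actual script code):

```python
import numpy as np

def erode(mask, npass=3):
    """Binary erosion with a 6-connected structuring element, repeated
    npass times (toy stand-in for the script's mask erosion)."""
    m = mask.astype(bool)
    for _ in range(npass):
        out = m.copy()
        # a voxel survives only if all 6 face-neighbours are in the mask
        for axis in range(m.ndim):
            for shift in (1, -1):
                out &= np.roll(m, shift, axis=axis)
        # np.roll wraps around, so explicitly clear the volume faces
        for axis in range(m.ndim):
            sl = [slice(None)] * m.ndim
            sl[axis] = 0
            out[tuple(sl)] = False
            sl[axis] = -1
            out[tuple(sl)] = False
        m = out
    return m

mask = np.ones((9, 9, 9), dtype=bool)
print(erode(mask, 3).sum())  # prints: 27 (only the central 3x3x3 core survives)
```

Each pass strips one voxel layer from every face, so three passes comfortably remove the edge voxels that bias the FA-based response downwards.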
I'll close this one for now; it's not an urgent issue... we can indeed revisit later if hard constraints become more common.
If we do still mention it in the MSMT documentation, though, I'd mention it in general terms (with an extra note on the ACT interaction). Even when tracking is done without ACT, it's probably not a good thing that tracks can't reach GM areas; users might be following up with `tckedit` and GM parcels from atlases to select bundles -- it wouldn't be good if they then ended up missing important parts of whole bundles (I envision dangerous situations in surgical scenarios, even though we clearly state MRtrix isn't guaranteed to be fit for such (or any) purposes). If mentioned in the MSMT documentation, we could refer to the MSMT paper (and figures) that show "the effect of not lowering the 0.1 threshold", or in other terms "the potential benefits / gains of lowering the threshold".
With respect to this community post: http://community.mrtrix.org/t/msmt-csd-wholebrain-tracking-not-enough-streamlines/215
...and more and more people actually lowering the default 0.1 FOD amplitude threshold due to various reasons (Tournier response algo for all of them; extra tissue types as well as the hard constraint dampening the peak amplitude quite a bit for the MSMT-CSD users)...
...I was wondering: is the default `tckgen` FOD amplitude threshold still appropriate?

The annoying thing, though, is that "0.1" is a nice number, and things like "0.06" or "0.08" are a bit less so. Then again, there's not really a reason why "0.1" should be the magical number. So yeah, a bit unsure here. What do people think?