tsalo opened this issue 6 years ago
We also want to compare across a range of settings within tedana, once we've merged ME-ICA/tedana#155, ME-ICA/tedana#163, and ME-ICA/tedana#164. Here are the ones I think we should compare:
- `sourceTEs`: 0 (all echoes), -1 (optimal combination)
- `combmode`: t2s
- `fittype`: curvefit
- `fitmode`: all
- `gscontrol`: none, gsr, t1c, gsr & t1c
- `tedpca`: mle, kundu, kundu-stabilize, mdl, aic, kic
- `wvpca`: on and off
- `tedort`: on and off
Are there any other settings we want to check?
With our new focus on a combined validation/software paper, should we really run all of the possible pipelines, or should we choose the best options and just run two: one with the minimal decision tree and one with the full decision tree?
I think that it is difficult to know the best options without testing them in a rigorous way, and the answer could also be data- or design-specific. It would be in keeping with the theme of previous papers to run heaps and heaps of different methods, as per Dipasquale 2017.
And I think we have to show that the new and improved version is actually improved.
I think that we have successfully reduced the number of unknowns that we need to test. We'll have to remove `sourceTEs` with the addition of maPCA, so all that's left in my mind are `tedort`, the PCA method (minus MLE), the decision tree, and maybe `gscontrol` (although I'm planning on investigating that in another paper already). I think we can assume that `fittype` curvefit > loglinear and `combmode` t2s > paid.
I'm leaning toward just choosing one over the other for `tedort` based on theory. I imagine the AFNI folks could weigh in there.
If so, then we're left with the decision tree and PCA method. What do you think of investigating those parameters specifically?
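For scale, a minimal sketch of that reduced comparison space: the PCA criterion (with MLE dropped) crossed with the two decision trees. The tree labels here are illustrative, not confirmed tedana option values.

```python
# Hypothetical sketch of the reduced comparison space: PCA method x decision tree.
# "minimal"/"full" are illustrative labels, not confirmed tedana options.
from itertools import product

tedpca_options = ["kundu", "kundu-stabilize", "mdl", "aic", "kic"]
decision_trees = ["minimal", "full"]

reduced_grid = [
    {"tedpca": pca, "tree": tree}
    for pca, tree in product(tedpca_options, decision_trees)
]
print(len(reduced_grid))  # 5 PCA methods x 2 trees = 10 pipelines
```

Ten pipelines per dataset is a far more tractable comparison than the full grid.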
Also, very much agreed that we need to compare against the old version of ME-ICA, but I think that's about validating tedana rather than determining the optimal settings within tedana.
This discussion makes me think we need to have a separate discussion on scoping. I think for the initial tedana paper, we only need to compare against ME-ICA. I'm happy to see folks move forward with these other concerns, but that feels like another paper entirely.
But, there are a number of discussions that we need to have, it seems :smile_cat:
Which pipelines do we want to apply to the datasets? Based on discussion in the Google Doc, we want:
- v3.2 of the component selection method (currently implemented, but to be moved to a separate branch until v2.5 is re-implemented and validated)
  - Without post-TEDICA processing
  - With GSR

ME-ICA/tedana
Should the pipelines use their default settings, or should they be customized for the direct pipeline-to-pipeline comparison?