arokem opened this issue 3 years ago
Also: voxel-wise test-retest reliability in HNU1; SNR of large fascicles (CC, CST, Cingulum); SNR of small fascicles (UF, CAB); number of short fibers (e.g., <50 mm); number of long fibers (e.g., >50 mm).
And then maybe use the Tractometer to grab some others, like: number of valid bundles, % valid connections, number of invalid bundles, % invalid connections, bundle overlap, bundle overreach, and local angular error (one compartment + multiple compartments)?
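A couple of the metrics above (number of short vs. long fibers around a length cutoff) are straightforward to compute once streamlines are in hand. A minimal sketch, assuming streamlines are plain (N, 3) arrays of point coordinates in mm (the function names here are illustrative, not from any particular library):

```python
import numpy as np

def streamline_length(points):
    """Arc length (mm) of a streamline given as an (N, 3) array of coordinates."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

def count_by_length(streamlines, threshold_mm=50.0):
    """Split a tractogram into (short, long) fiber counts around a length cutoff."""
    lengths = np.array([streamline_length(s) for s in streamlines])
    return int((lengths < threshold_mm).sum()), int((lengths >= threshold_mm).sum())

# Toy example: one 10 mm streamline and one 60 mm streamline.
short_fiber = np.array([[0.0, 0, 0], [10.0, 0, 0]])
long_fiber = np.array([[0.0, 0, 0], [60.0, 0, 0]])
n_short, n_long = count_by_length([short_fiber, long_fiber])  # → (1, 1)
```

In practice one would load streamlines from a tractogram file and apply a brain mask first, but the length bookkeeping is this simple.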
Can I ask to rename this issue to "Validation strategies"? As I'm suggesting in the other thread, I feel we are not ready yet to "evaluate" this tool.
We discussed using fiberfox and also fiberfox-wrapper to create datasets with known artifacts.
Yes, I think this would be the best way forward if you have a very scoped validation in mind. You don't need to worry about noise (let's generate a noise-free version of the Fiberfox phantom), and you can generate three versions: head motion only (only-hm), eddy currents only (only-ec), and both (hm-and-ec).
In a second iteration, a janky dataset seems the best next step. We would generate all the necessary inputs for this janky dataset (brain-mask, denoised DWIs, etc.) and then try it.
Finally, @dPys' suggestions would fit better in dMRIPrep proper - testing all those things requires evaluation in the actual settings where the solution is deployed.
Actually, "verification" would be even better than "validation" at this point.
I've managed to generate some example data using fiberfox-wrapper, but I think it'd be worth creating a dedicated workflow based on the old ISMRM tractometer to evaluate the simulated data in a controlled framework that is more conducive to sensitivity analysis.
What are everyone's thoughts about creating yet another repo (at least temporarily) for exactly this purpose? For now, we could call it "EMCbenchmarking", but it could eventually be integrated with dmriprep and/or even used as a template for more general validation and optimization of preprocessing choices...
> I've managed to generate some example data using fiberfox-wrapper, but I think it'd be worth creating a dedicated workflow based on the old ISMRM tractometer to evaluate the simulated data in a controlled framework that is more conducive to sensitivity analysis.
Can you provide more details about this workflow you are envisioning?
> What are everyone's thoughts about creating yet another repo (at least temporarily) for exactly this purpose? For now, we could call it "EMCbenchmarking", but it could eventually be integrated with dmriprep and/or even used as a template for more general validation and optimization of preprocessing choices...
If we could generate static data (e.g., a dataset without head-motion or eddy currents whatsoever) and introduce between-volume motion later, this would be very beneficial, and such "static" dataset could easily go under the https://github.com/nipreps-data portfolio of testing datasets.
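Introducing between-volume motion into such a static dataset could be as simple as applying a known rigid transform per volume and keeping the transforms as ground truth for scoring the head-motion correction. A minimal, dependency-free sketch (integer voxel shifts via `np.roll`; real injected motion would need sub-voxel rigid transforms, e.g. with `scipy.ndimage.affine_transform` - the function name `inject_motion` is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def inject_motion(dwi, max_shift_vox=2):
    """Shift each 3D volume of a 4D DWI series by a random integer voxel offset.

    Returns the moved series plus the ground-truth per-volume shifts, so a
    head-motion correction routine can later be scored against known motion.
    Illustrative only: translations here are whole-voxel and there is no
    rotation or interpolation.
    """
    moved = np.empty_like(dwi)
    shifts = []
    for t in range(dwi.shape[-1]):
        s = rng.integers(-max_shift_vox, max_shift_vox + 1, size=3)
        moved[..., t] = np.roll(dwi[..., t], tuple(s), axis=(0, 1, 2))
        shifts.append(tuple(int(v) for v in s))
    return moved, shifts
```

An HMC benchmark would then compare each estimated transform against the corresponding entry of `shifts`.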
> We discussed using fiberfox and also fiberfox-wrapper to create datasets with known artifacts.
We could also use improvements in test-retest reliability (TRT) on a janky dataset as an indication of the method's benefits, and for comparative evaluation of different methods. We could use the dataset we used here
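For a quick TRT summary between two preprocessed sessions, one option is the voxel-wise agreement of a scalar map (e.g., FA) between scan and rescan. A sketch assuming the two maps are already aligned to a common grid (a fuller analysis on HNU1 would pool all ten sessions into an ICC rather than a pairwise correlation; `voxelwise_trt` is a hypothetical name):

```python
import numpy as np

def voxelwise_trt(scan, rescan, mask):
    """Test-retest agreement for two aligned scalar maps (e.g., FA).

    Returns the Pearson correlation over in-mask voxels and the masked
    voxel-wise absolute difference map. Higher correlation / smaller
    differences after preprocessing would suggest the correction helps.
    """
    x, y = scan[mask], rescan[mask]
    r = float(np.corrcoef(x, y)[0, 1])
    return r, np.abs(scan - rescan) * mask
```

Comparing `r` (and the difference map) with and without a given correction step gives a simple, interpretable benefit score per method.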