guruucsd / lateralized-components

Submission to OHBM 2016 on functional lateralization using the neurovault dataset.

Issue 52 summarize differences #56

Closed bcipolli closed 7 years ago

bcipolli commented 7 years ago

Works off of #55.

Add a comparison between the rl-combined images and the wb images, across components: compute the L2 norm between the wb and rl values across all components, voxel by voxel, then plot the resulting map.

Higher values indicate greater distance / less consistency between wb and rl images (and places to examine further); lower values should indicate greater consistency.
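The voxel-wise calculation described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code; `wb_data` and `rl_data` are assumed to be `(n_components, n_voxels)` arrays of already-matched component loadings.

```python
import numpy as np

def voxelwise_l2(wb_data, rl_data):
    """L2 distance between wb and rl values at each voxel, across components."""
    diff = wb_data - rl_data              # (n_components, n_voxels)
    return np.sqrt((diff ** 2).sum(axis=0))  # (n_voxels,) map to plot

# Toy example: 2 components, 2 voxels; only the second voxel disagrees.
wb = np.array([[1.0, 0.5], [0.2, 0.4]])
rl = np.array([[1.0, 0.0], [0.2, 0.1]])
dist_map = voxelwise_l2(wb, rl)  # zero at consistent voxels, larger elsewhere
```

The resulting 1D map would then be reshaped back into the brain mask and plotted, e.g. with nilearn's plotting utilities.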

Results: [images `dot_img_5_10_15_20_30_40_50_75_100` — x, y, and z views]

bcipolli commented 7 years ago

That was with all components, across all n_components values. This one is just for n_components=10. We can run any combination we want...

[images `dot_img_10` — x, y, and z views]

bcipolli commented 7 years ago

This should also be run with unmatched components discarded. That would make the differences more robust / believable. As it stands, I can't tell whether the differences are real or are due to mismatches (that could have been matched better).

bcipolli commented 7 years ago

Fixed an important bug: I wasn't sign-flipping components where needed. I also made correlation, L1, and L2 dissimilarity metrics available (correlation is the default).

[images `comparison_img_5_10_15_20_30_40_50_75_100 correlation` — x, y, and z views]

atsuch commented 7 years ago

I was trying this component by component and found some bugs in the code... I'm fixing them and will push once it's working, but I wonder what the best way to use this is.

As you mention, unmatched components are being discarded because we are doing the comparison on wb-matching that does not force one-to-one matching. We could do the same calculation using forced one-to-one matching, or we could even do this on a hemisphere basis by finding the wb component that best matches the unilateral components for R and L separately (i.e. using the unilateral, rather than wb, components as the reference). So I'm thinking of changing the code so that it does the same type of calculation for different types of matching... We can then compare them to see if there are consistent patterns.
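One standard way to force one-to-one matching (an assumption on my part, not what the repo currently does) is the Hungarian algorithm on a dissimilarity matrix, via `scipy.optimize.linear_sum_assignment`; using `1 - |corr|` as the cost keeps arbitrary component signs from breaking the match:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_one_to_one(wb, rl):
    """wb, rl: (n_components, n_voxels) arrays. Returns (wb_idx, rl_idx) pairs."""
    n = len(wb)
    # Cross-correlation block between wb rows and rl rows.
    corr = np.corrcoef(wb, rl)[:n, n:]
    cost = 1.0 - np.abs(corr)            # sign-agnostic dissimilarity
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return list(zip(rows, cols))
```

Greedy best-match (argmax per component) can assign two rl components to the same wb component; the assignment above guarantees each component is used exactly once.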

I also like keeping it specific to each n_components value and later looking at consistency across decompositions, rather than aggregating everything... I think it's easier to interpret, and it would also prevent my computer from crashing :P What do you think @bcipolli ?

bcipolli commented 7 years ago

> So I'm thinking of changing the code so that it will do the same type of calculation for different types of matching... We can then compare them to see if there are consistent patterns.

It'd be great if the code were flexible, and we could specify the type of matching from the command-line.
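A minimal sketch of what that command-line flexibility could look like, assuming an argparse-based script (the flag name and choices are hypothetical, not from the repo):

```python
import argparse

parser = argparse.ArgumentParser(description="Compare wb vs rl components")
parser.add_argument(
    "--match",
    choices=["many-to-one", "one-to-one", "per-hemisphere"],
    default="many-to-one",
    help="how to match rl components to wb components",
)
# e.g. invoked as: python compare.py --match one-to-one
args = parser.parse_args(["--match", "one-to-one"])
```

Each choice would then dispatch to the corresponding matching routine before the dissimilarity maps are computed.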

> I also like keeping it specific for each n of component and later looking at consistency across the components rather than aggregating everything

The current code allows this analysis, or the combined analysis. I like the flexibility of each. I think both are meaningful; combining across different #s of components tells us consistency across decompositions, while running on a single decomposition tells us about that specific decomposition.

If what you're really saying is that you want to swap out the default... by all means, go for it :)