Closed bcipolli closed 8 years ago
On second thought, perhaps a better measure than the average overlap is to maximize the error (minimize the overlap) between each component and its most similar component. I will try that.
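A minimal sketch of that measure, assuming components are rows of a `(n_components, n_voxels)` array and using absolute correlation as the (hypothetical) similarity; the function name and the random data are illustrative, not from the actual analysis code:

```python
import numpy as np

def most_similar_component_error(components):
    """For each component (row), find its most similar *other* component
    by absolute correlation across voxels, and return 1 - similarity.
    Higher error = better separated components."""
    corr = np.abs(np.corrcoef(components))
    np.fill_diagonal(corr, 0.0)   # ignore self-similarity
    max_sim = corr.max(axis=1)    # similarity to the most similar other component
    return 1.0 - max_sim

# Toy example with random "components"
rng = np.random.default_rng(0)
components = rng.standard_normal((10, 500))
errors = most_similar_component_error(components)
print(errors.mean())
```

Maximizing the mean (or min) of `errors` over decompositions would then prefer the least-overlapping component sets.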
First pass: the similarity between an image and its most similar component increases monotonically with `n_components`. I expected the curve to be low, then high, then low again: similar components early (a lot of overlap) and late (duplication). Perhaps this is because most voxels trend toward zero as the number of components grows.
x-axis: `n_components`; y-axis: average error for most similar component
Perhaps it would also be useful to count the number of voxels above a certain threshold. We want components to be less similar and more sparse.
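That voxel count could look like the sketch below; the threshold value is an arbitrary illustrative choice, not one from the analysis:

```python
import numpy as np

def suprathreshold_voxels(component, threshold=0.1):
    """Count voxels whose absolute loading exceeds `threshold`.
    A lower count suggests a sparser component."""
    return int(np.sum(np.abs(component) > threshold))

comp = np.array([0.05, -0.2, 0.0, 0.3, -0.01])
print(suprathreshold_voxels(comp))  # → 2
```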
This should run the analysis with `n_components` from 5 to 50 (step: 5), and plot a graph of the confusion matrix between components. The idea is that the less confusion between components, the better they've been separated. Will post a plot once it's been run. Will take a while...