ElanHR opened this issue 9 years ago
Re Presentation: Whoops! I had only seen the formulation in terms of singular values. Makes sense now, thanks!
Cool. :) As an aside, we don't know that the Frobenius norm is the right metric to use... more recently we have been playing with the F1 score of the detected matrix.
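To make the F1 alternative concrete, here is a minimal sketch of scoring a detected line-graph adjacency matrix against the true one. The matrices and their values are illustrative, not from the paper; the only assumption is that entries are 0/1, so each entry is a true positive, false positive, or false negative connection as described in the quoted passage.

```python
import numpy as np

# Hypothetical 0/1 line-graph adjacency matrices (illustrative values only).
L_true = np.array([[0, 1, 1],
                   [1, 0, 0],
                   [1, 0, 0]])
L_est = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])

tp = np.sum((L_true == 1) & (L_est == 1))  # true connections recovered
fp = np.sum((L_true == 0) & (L_est == 1))  # spurious connections
fn = np.sum((L_true == 1) & (L_est == 0))  # missed connections

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(precision, recall, f1)
```

Unlike the Frobenius norm, F1 ignores the (typically huge) set of correctly predicted zero entries, so it is not dominated by the sparsity of the graph.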
From the paper:
G_err = ||L(G_true) − L(G_estimated)||_F
"Our metric is interpretable, because true connections are the non-zero common entries. Furthermore, each incorrect entry represents a false positive (spurious connection) or false negative (missed connection). A connection between two synapses in a line graph is equivalent to those synapses being coincident on a neuron."
I was wondering about the intuition behind using the Frobenius norm on the graph difference. I understand why this matrix is a good thing to take a norm of and use as our metric, but I don't quite understand how errors in individual edge labels translate into sums of singular values for the entire matrix.
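One way to see the connection: the Frobenius norm can be computed either as the square root of the sum of squared entries or as the square root of the sum of squared singular values, and the two always agree. For 0/1 matrices, each squared entry of the difference is 1 exactly where an edge label is wrong, so ||L(G_true) − L(G_estimated)||_F^2 simply counts the false positives plus false negatives. A quick numerical check (matrix values are illustrative):

```python
import numpy as np

# Hypothetical binary line-graph adjacency matrices (illustrative values only).
L_true = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
L_est = np.array([[0.0, 1.0],
                  [0.0, 0.0]])

diff = L_true - L_est

# Entrywise view: sqrt of the sum of squared entries.
fro_entrywise = np.sqrt(np.sum(diff ** 2))

# Singular-value view: sqrt of the sum of squared singular values.
singular_values = np.linalg.svd(diff, compute_uv=False)
fro_spectral = np.sqrt(np.sum(singular_values ** 2))

# For binary matrices this also equals sqrt(#mislabeled edges).
n_errors = np.sum(L_true != L_est)

assert np.isclose(fro_entrywise, fro_spectral)
assert np.isclose(fro_entrywise, np.sqrt(n_errors))
```

So although the norm is often stated in terms of singular values, on a binary difference matrix it reduces to counting individual edge errors, which is what makes the metric interpretable.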