open-connectome-classes / StatConn-Spring-2015-Info


thresholding #152


edunnwe1 commented 9 years ago

From the 03/02 class: can anyone offer an intuition for why thresholding sometimes decreased the loss and other times increased it, depending on the level and form of sparsity we had? And why were there such notable disparities in performance?

akim1 commented 9 years ago

I'm not sure this answers your question, and it's merely my speculation, but I'll take a shot at it.

If you choose the threshold well and define your loss function appropriately, your loss should decrease, because the threshold separates the signal from an otherwise noisy process. Every other case falls under the situation where the threshold either keeps stuff you don't want (noise above the cutoff) or discards stuff you do want (signal below it), and how often each happens depends on the level and form of sparsity.

Generally, I think less sparsity (a denser graph) means a higher chance of noise looking like signal, and the form of sparsity probably contributes to this as well.
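Here is a minimal sketch of that intuition (my own toy setup, not the model from class): assume the "true" graph is a 0/1 adjacency matrix, the observed graph adds Gaussian edge noise, and the loss is the mean absolute entrywise difference. All three are assumptions, since the thread doesn't pin down the model or the loss. Varying the edge density and the threshold level shows the loss moving in both directions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_graphs(n, p, sigma):
    """True 0/1 adjacency matrix with edge density p, plus a noisy observation."""
    A = (rng.random((n, n)) < p).astype(float)
    W = A + sigma * rng.normal(size=(n, n))
    return A, W

n, sigma = 200, 0.4
for p in (0.05, 0.5):                          # sparse vs. dense truth
    A, W = make_graphs(n, p, sigma)
    base = np.abs(W - A).mean()                # loss if we don't threshold at all
    for t in (0.5, 1.2):                       # moderate vs. aggressive threshold
        est = (W > t).astype(float)            # hard-threshold back to a 0/1 graph
        loss = np.abs(est - A).mean()
        print(f"p={p:.2f} t={t:.1f}: loss {loss:.3f} vs. unthresholded {base:.3f}")
```

Under this setup, a moderate threshold beats the raw noisy matrix at either density, while the aggressive threshold still helps the sparse graph (most entries really are zero) but hurts the dense one, which matches the "depends on level and form of sparsity" point above.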

edunnwe1 commented 9 years ago

I think I see what you're saying. So when you have a very sparse graph, if you threshold and lose information, that could be a greater loss than if you have a fairly dense graph?
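One wrinkle worth noting (still under the toy model sketched above, so the same caveats apply): the answer can flip with the choice of loss. Under an entrywise loss, an aggressive threshold can look fine on a sparse graph precisely because most entries are zero, even while it destroys most of the actual signal. An edge-recovery view of the same experiment makes that visible:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, t = 200, 0.4, 1.2                    # same aggressive threshold as above
for p in (0.05, 0.5):
    A = (rng.random((n, n)) < p).astype(float)
    W = A + sigma * rng.normal(size=(n, n))
    est = (W > t).astype(float)
    entrywise = np.abs(est - A).mean()         # looks small when the truth is sparse
    missed = ((A == 1) & (est == 0)).sum() / max(A.sum(), 1.0)
    print(f"p={p:.2f}: entrywise loss {entrywise:.3f}, "
          f"fraction of true edges lost {missed:.2f}")
```

Both densities lose the same fraction of true edges per edge, but for the sparse graph that damage is nearly invisible to the entrywise loss, while those few edges are all the information the graph carries. So whether thresholding "costs more" on a sparse graph really comes down to which loss you care about.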