open-connectome-classes / StatConn-Spring-2015-Info

introductory material

loss function and bias and variance #150

Open edunnwe1 opened 9 years ago

edunnwe1 commented 9 years ago

In class on 03/02, we said that the loss function can be partitioned into a bias term and a variance term. The idea was that we could imagine increasing the bias while reducing the variance, with a net effect on the loss. I'm not sure what this looks like formally, i.e. how the loss function can formally be partitioned into bias and variance terms. Can someone elaborate? Thanks!

jtmatterer commented 9 years ago

I don't know if you've found an answer to your question in the last week and a half, but I can give you a simple example. In parameter estimation, if you use squared-error loss, the risk (which is the mean squared error) can be partitioned exactly into the estimator's variance plus its squared bias.
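Formally, for an estimator $\hat\theta$ of $\theta$, the decomposition follows by adding and subtracting $E[\hat\theta]$ inside the square; the cross term vanishes because $E\big[\hat\theta - E[\hat\theta]\big] = 0$:

```latex
\begin{aligned}
\mathrm{MSE}(\hat\theta)
  &= E\big[(\hat\theta - \theta)^2\big] \\
  &= E\big[(\hat\theta - E[\hat\theta] + E[\hat\theta] - \theta)^2\big] \\
  &= E\big[(\hat\theta - E[\hat\theta])^2\big] + \big(E[\hat\theta] - \theta\big)^2 \\
  &= \mathrm{Var}(\hat\theta) + \mathrm{Bias}(\hat\theta)^2 .
\end{aligned}
```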

If you would like a concrete example, consider the model Unif[0, theta], where theta is the unknown parameter to be estimated. The method of moments estimator (twice the sample mean) is unbiased, but has a larger MSE than the MLE (the sample maximum), which is biased.
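A quick simulation illustrates this (a sketch, not from the original thread; the parameter value theta = 1 and sample size n = 20 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 20, 100_000

# reps independent samples of size n from Unif[0, theta]
x = rng.uniform(0, theta, size=(reps, n))

mom = 2 * x.mean(axis=1)  # method-of-moments estimator (unbiased)
mle = x.max(axis=1)       # MLE: sample maximum (biased downward)

def decompose(est, theta):
    """Return (MSE, bias, variance) of an array of estimates."""
    bias = est.mean() - theta
    var = est.var()  # population variance (ddof=0)
    mse = ((est - theta) ** 2).mean()
    return mse, bias, var

mse_mom, bias_mom, var_mom = decompose(mom, theta)
mse_mle, bias_mle, var_mle = decompose(mle, theta)

# MSE = variance + bias^2 holds exactly for the empirical moments
assert abs(mse_mom - (var_mom + bias_mom**2)) < 1e-12
assert abs(mse_mle - (var_mle + bias_mle**2)) < 1e-12

print(f"MoM: bias={bias_mom:+.4f}  var={var_mom:.5f}  mse={mse_mom:.5f}")
print(f"MLE: bias={bias_mle:+.4f}  var={var_mle:.5f}  mse={mse_mle:.5f}")
```

Despite its bias, the MLE's much smaller variance gives it the smaller MSE, which is exactly the bias-variance trade-off the question asks about.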

edunnwe1 commented 9 years ago

Thanks! I found a proof of what you were saying. http://www.cc.gatech.edu/~lebanon/notes/estimators1.pdf