asadoughi / stat-learning

Notes and exercise attempts for "An Introduction to Statistical Learning"
http://asadoughi.github.io/stat-learning

Chapter 7 Exercise 2 #82


SamBurkart commented 7 years ago

I believe the solutions are all shifted by one degree of freedom. For (a), the function should be 0 everywhere, since any other choice makes the penalty term infinite. For (b), the first derivative has to be 0, so the function must be constant, and so on.
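For reference, here is the criterion from Exercise 7.2, reconstructed from the exercise statement, where g^(m) denotes the m-th derivative:

```math
\hat{g} = \arg\min_{g} \left\{ \sum_{i=1}^{n} \bigl( y_i - g(x_i) \bigr)^2 + \lambda \int \bigl[ g^{(m)}(x) \bigr]^2 \, dx \right\}
```

As λ → ∞ the penalty dominates, so the minimizer must satisfy g^(m)(x) = 0 everywhere, i.e. g is a polynomial of degree m − 1. Thus m = 0 forces g = 0, m = 1 a constant (the mean of the yᵢ), m = 2 the least squares line, and m = 3 a quadratic.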

Jeffalltogether commented 4 years ago

I agree with @SamBurkart.

To add a piece of evidence for this answer: part c) is answered directly in the text on page 278: "When λ → ∞, g will be perfectly smooth—it will just be a straight line that passes as closely as possible to the training points. In fact, in this case, g will be the linear least squares line, since the loss function in (7.11) amounts to minimizing the residual sum of squares." The penalty in (7.11) is on the second derivative, i.e. the m = 2 case, which matches the corrected answer to part c).
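To see this numerically, here is a minimal sketch (assuming SciPy ≥ 1.10, whose `make_smoothing_spline` implements exactly this m = 2 penalty): a smoothing spline with a very large λ essentially coincides with the ordinary least squares line.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 30))
y = np.sin(x) + rng.normal(0, 0.3, 30)

# Huge lambda: the penalty on g'' dominates, forcing the fit
# toward a function with zero second derivative, i.e. a line.
spline = make_smoothing_spline(x, y, lam=1e10)

# Ordinary least squares line for comparison.
slope, intercept = np.polyfit(x, y, deg=1)

grid = np.linspace(0, 10, 5)
print(spline(grid))              # smoothing-spline fit on a grid
print(slope * grid + intercept)  # OLS line on the same grid: nearly identical
```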

jdonland commented 2 years ago

Another error: for e), when λ = 0 the penalty term becomes irrelevant, so g will be a function that interpolates the training data exactly, since the minimization is over all curves. In particular, it will not be the linear least squares line, as the current solution claims.
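A matching sketch for the λ = 0 case, under the same SciPy assumption (if `lam=0` is rejected by a given version, a tiny positive value makes the same point): the fit passes through every training point, while the least squares line does not.

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 15))
y = np.sin(x) + rng.normal(0, 0.3, 15)

# lam=0: the penalty vanishes, so the minimizer interpolates the
# training data exactly (a natural cubic spline through the points).
g = make_smoothing_spline(x, y, lam=0)
print(np.max(np.abs(g(x) - y)))  # ~0: every training point is hit

# The least squares line, by contrast, misses the noisy points.
slope, intercept = np.polyfit(x, y, deg=1)
print(np.max(np.abs(slope * x + intercept - y)))  # clearly > 0
```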