Closed zhfkt closed 5 years ago
We're unfortunately unable to make major changes to the book. We will add a remark in Chapter 6, after defining the covariance that we assume that covariances are positive definite.
Remark: In this book, we generally assume that covariance matrices are positive definite to enable better intuition. We therefore do not discuss the corner cases that result in positive semidefinite (low-rank) covariance matrices.
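The low-rank corner case mentioned in the remark can be sketched numerically (a hypothetical illustration, not from the book): when one variable is an exact linear function of another, the sample covariance matrix is rank-deficient, so it is positive semidefinite but not positive definite.

```python
import numpy as np

# Two perfectly correlated variables -> rank-1 covariance matrix.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
data = np.stack([x, 2 * x])   # second row is a linear function of the first
cov = np.cov(data)            # 2x2 sample covariance matrix

# eigvalsh returns eigenvalues in ascending order; the smallest is
# (numerically) 0, so cov is only positive SEMI-definite.
eigvals = np.linalg.eigvalsh(cov)
print(eigvals)
```

The smallest eigenvalue comes out as 0 up to floating-point error, which is exactly the situation the remark excludes.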
Thank you.
Will you add the same remark to the Coursera video?
We cannot edit the videos directly at this point.
Describe the mistake
On p. 196, the book says: "we obtain an inner product. We see that the covariance is symmetric, positive definite,"
According to the Wikipedia article https://en.wikipedia.org/wiki/Covariance#Relationship_to_inner_products, when the covariance is used as an inner product on random variables, it is positive semi-definite, not "positive definite": Cov(X, X) = 0 only implies that X is a constant random variable, which need not be 0.
The video accompanying this book at https://www.coursera.org/lecture/pca-machine-learning/inner-products-of-functions-and-random-variables-optional-luMoJ (04:01) also contains this issue.
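The argument above can be checked with a tiny numerical sketch (my own illustration, not from the book): for a constant random variable X = c with c != 0, we get Cov(X, X) = Var(X) = 0 even though X is not the zero element, so the covariance bilinear form fails positive definiteness.

```python
import numpy as np

# X is identically 3 (a nonzero constant random variable).
samples = np.full(1000, 3.0)

# Cov(X, X) = Var(X) is exactly 0, although X != 0:
# the covariance "inner product" is only positive semi-definite.
print(np.var(samples))  # -> 0.0
```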
Location
Version (bottom of page): Draft (2019-07-27) of "Mathematics for Machine Learning". Feedback to https://mml-book.com
Chapter: Chapter 6
Page: 196
Line number/equation number: The fifth line
Proposed solution
Change "positive definite" to "positive semi-definite" in the PDF (p. 196) and in the Coursera video (04:01).
Additional context
While searching on Google, I found that an inner product that is only positive semi-definite is also called a "semi-norm" or a "degenerate inner product". The author could expand the description of this topic if possible. It is also sad to see that the video contains this issue.