mml-book / mml-book.github.io

Companion webpage to the book "Mathematics For Machine Learning"

Gram-Schmidt process as an example of Gauss elimination #734

Open tteodorescu0 opened 1 year ago

tteodorescu0 commented 1 year ago

Location

  1. Draft (2022-01-11) of “Mathematics for Machine Learning”.
  2. Chapter 3
  3. page 79
  4. the paragraph before Example 3.8 (a screenshot of the paragraph is attached to this issue)

Describe the mistakes and the proposed solutions

  1. It should say "Assume we are given a basis $(\tilde{b}_1, \ldots, \tilde{b}_n)$ of non-orthogonal and unnormalized vectors." When you start with a set (rather than an $n$-tuple), the matrix $\tilde{B}$ is not well defined.
  2. There is a typo in the augmented matrix. It should say $[\tilde{B}^T \tilde{B}|\tilde{B}^T]$.
  3. There is another typo in the same sentence. It should say "to obtain an orthogonal basis" (rather than "to obtain an orthonormal basis") because Gauss elimination applied to the augmented matrix $[\tilde{B}^T \tilde{B}|\tilde{B}^T]$ does not typically result in unit vectors.
  4. The last sentence needs some qualifiers, because Gauss elimination was defined as the algorithm that performs elementary transformations to bring a matrix into its reduced row-echelon form, with all pivots equal to 1. The Gram-Schmidt method as defined in Section 3.8.3 corresponds only to the forward-elimination part of Gauss elimination, which produces a (non-reduced) row-echelon form whose pivots are not necessarily normalized to 1; see the numerical sketch after this list.
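
For concreteness, here is a minimal NumPy sketch of the corrected statement: forward elimination (stopping at a row-echelon form, pivots left unnormalized) applied to $[\tilde{B}^T \tilde{B}|\tilde{B}^T]$ leaves the orthogonal, but not orthonormal, Gram-Schmidt vectors in the rows of the right block. The basis matrix `B` below is a made-up example, not the one from the book.

```python
import numpy as np

# Hypothetical basis of non-orthogonal, unnormalized vectors, stored as the
# *columns* of B -- B plays the role of B-tilde in the book's notation.
B = np.array([[2., 1., 1.],
              [0., 1., 2.],
              [1., 0., 1.]])
n = B.shape[1]

# The corrected augmented matrix [B^T B | B^T] from point 2.
A = np.hstack([B.T @ B, B.T])

# Forward elimination only: clear the entries below each pivot, but do NOT
# normalize the pivots to 1 and do NOT back-substitute (point 4).
for k in range(n):
    for i in range(k + 1, n):
        A[i] -= (A[i, k] / A[k, k]) * A[k]

# The rows of the right block are now the Gram-Schmidt vectors u_1, ..., u_n:
# orthogonal but not orthonormal (point 3).
U = A[:, n:]
print(np.round(U @ U.T, 10))   # diagonal matrix => rows are mutually orthogonal

# Cross-check against the classical (unnormalized) Gram-Schmidt recursion.
GS = np.zeros_like(B)
for k in range(n):
    v = B[:, k].copy()
    for j in range(k):
        u = GS[:, j]
        v -= (u @ B[:, k]) / (u @ u) * u
    GS[:, k] = v
print(np.allclose(GS.T, U))    # True: both constructions agree
```

Continuing all the way to the reduced row-echelon form would instead turn the right block into $\tilde{B}^{-1}$ (since $(\tilde{B}^T\tilde{B})^{-1}\tilde{B}^T = \tilde{B}^{-1}$ for invertible $\tilde{B}$), whose rows are in general not orthogonal, which is why the qualifier in point 4 matters.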