Closed: john-d-fox closed this issue 5 years ago
thanks, John.

I'll review this in the next few days. As I recall, the goal of GramSchmidt() was to try to make the GS process more transparent in the code by using the Proj() function directly. We didn't think of error trapping too much. Should we do so? If so, where?

I like your MoorePenrose() function, because it is also a simple description in code of what this does.
Hi Michael,
The modification of the function that I provided actually works, by incorporating a zero tolerance.
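For reference, here is a minimal sketch of the kind of zero-tolerance guard being described. This is an illustration, not the actual patch that was submitted: columns whose residual norm falls below tol are zeroed out instead of being divided by a numerically zero length.

```r
# Illustrative only: classical Gram-Schmidt with a zero tolerance.
# Columns that become numerically zero are set to zero rather than
# normalized by a near-zero length.
GS <- function(X, tol = sqrt(.Machine$double.eps)) {
  U <- X
  for (j in seq_len(ncol(X))) {
    if (j > 1) {
      for (k in 1:(j - 1)) {
        if (any(U[, k] != 0)) {  # skip columns already zeroed out
          # U[, k] has unit length, so the projection coefficient
          # is just the dot product
          U[, j] <- U[, j] - sum(U[, k] * U[, j]) * U[, k]
        }
      }
    }
    len <- sqrt(sum(U[, j]^2))
    U[, j] <- if (len > tol) U[, j] / len else 0
  }
  U
}

B <- matrix(1:9, nrow = 3, byrow = TRUE)  # rank 2: col 3 = 2*col2 - col1
Q <- GS(B)
Q[, 3]                 # zeroed, flagging the linear dependence
crossprod(Q[, 1:2])    # first two columns are orthonormal
```

With this guard, a rank-deficient input produces zero columns rather than NaNs or a misleading "orthonormal" basis.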
Glad you like it.
Best, John
The goal is probably not to have numerically stable functions, but one can observe that svd() in base R behaves better numerically than SVD() because svd() avoids computing t(X) %*% X. On the other hand, svd() returns all the singular values, including zero values, so the Moore-Penrose inverse could be written as:
```r
MoorePenrose2 <- function(X){
  svd <- svd(X)
  dinv <- svd$d
  dinv[dinv > 0] <- 1/dinv[dinv > 0]
  svd$v %*% diag(dinv) %*% t(svd$u)
}
```
I tried MoorePenrose, MoorePenrose2 and solve to invert A where
```r
set.seed(123)
A <- matrix(rnorm(100), 10)
```
MoorePenrose2 was almost as good as solve, but the Eigen() function called by SVD() didn't converge in MoorePenrose.
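To make the comparison concrete, one way to check it (repeating the MoorePenrose2 definition above so the snippet runs standalone): for a full-rank square matrix the pseudoinverse reduces to the ordinary inverse, so MoorePenrose2(A) and solve(A) should agree.

```r
MoorePenrose2 <- function(X){
  svd <- svd(X)
  dinv <- svd$d
  dinv[dinv > 0] <- 1/dinv[dinv > 0]
  svd$v %*% diag(dinv) %*% t(svd$u)
}

set.seed(123)
A <- matrix(rnorm(100), 10)            # full rank with probability 1
max(abs(MoorePenrose2(A) - solve(A)))  # tiny: the two inverses agree
```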
Regarding GramSchmidt, I understand that numerical properties are not relevant, but in practice the QR algorithm does the job of Gram-Schmidt. It manages to work with poorly conditioned matrices by pivoting, but then it's solving a slightly different problem than the Gram-Schmidt problem. All the best, Georges
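For what it's worth, base R's qr() illustrates the point: its column pivoting lets it cope with a rank-deficient matrix that would break textbook Gram-Schmidt, at the cost of reordering the columns.

```r
B <- matrix(1:9, nrow = 3, byrow = TRUE)  # rank 2: col 3 = 2*col2 - col1
qrB <- qr(B)
qrB$rank                                   # 2, detected via pivoting
Q <- qr.Q(qrB)
R <- qr.R(qrB)
all.equal(Q %*% R, B[, qrB$pivot])         # QR reproduces the pivoted columns
```

The pivot vector records the reordering, which is exactly the "slightly different problem" being traded for numerical robustness.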
Hi Georges,
When I wrote many of the core functions in the never-published predecessor matrixDemos package, which I used to demonstrate package-writing, the object was to write the functions in pure R code so that they would be transparent, so using svd() rather than SVD() would be out of bounds. That's why the functions in the package use each other -- in this case, the proposed MoorePenrose() calls SVD(), which calls Eigen(). But improving SVD() to make it more numerically stable would be desirable, in my opinion, as long as the code doesn't become too opaque.
Best, John
I added to the current v. 0.9-2 on Github (vs. CRAN: 0.9-1):

- GramSchmidt(), adding a tol= parameter
- MoorePenrose(), slightly enhanced to include a tol= parameter

Please pull & examine the latest version. I'll leave this issue open for a while... -M
If Georges doesn’t pick this up, I probably will.
Shortly, I'll begin testing the current v.0.9-2 for release to CRAN. It passes my local checks so far with RStudio. I think what I've incorporated from @john-d-fox is satisfactory, so I'll close this issue.
Hi Michael,
Actually, I've been working on this and have implemented a better SVD algorithm, which I'm still testing, so please hold off releasing the current matlib to CRAN.
Best, John
Hi again,
I looked into more numerically stable algorithms for the SVD.
The apparently most common method, based on reducing the matrix to bidiagonal form using Householder transforms, would be feasible to program directly in R, but having started to do so, I decided that it was too complicated for matlib.
The method that I used is instead based on iterative Jacobi rotations, which turns out to be pretty straightforward:
```r
SVD <- function(X, tol=sqrt(.Machine$double.eps), max.iter=100){
  # X: a matrix
  # tol: zero and convergence tolerance
  # max.iter: iteration limit
  n <- nrow(X)
  U <- X
  V <- diag(n)
  for (i in 1:max.iter){
    converged <- 0
    for (j in 2:n){
      for (k in 1:(j - 1)){
        a <- sum(U[ , k]^2)
        b <- sum(U[ , j]^2)
        g <- sum(U[ , k]*U[ , j])
        converged <- max(converged, abs(g)/sqrt(a*b))
        if (abs(g) > tol){
          z <- (b - a)/(2*g)
          t <- sign(z)/(abs(z) + sqrt(1 + z^2))
        }
        else {
          t <- 0
        }
        c <- 1/(sqrt(1 + t^2))
        s <- c*t
        T <- U[ , k]
        U[ , k] <- c*T - s*U[ , j]
        U[ , j] <- s*T + c*U[ , j]
        T <- V[ , k]
        V[ , k] <- c*T - s*V[ , j]
        V[ , j] <- s*T + c*V[ , j]
      }
    }
    if (converged < tol) break
  }
  if (converged >= tol) stop("singular values did not converge")
  d <- rep(0, n)
  for (j in 1:n){
    d[j] <- len(U[ , j])  # len() is matlib's Euclidean vector length
    U[ , j] <- U[ , j]/d[j]
  }
  ord <- order(d, decreasing=TRUE)
  d <- d[ord]
  U <- U[, ord]
  V <- V[, ord]
  zeroes <- abs(d) < tol
  if (any(zeroes)){
    d <- d[!zeroes]
    U <- U[, !zeroes]
    V <- V[, !zeroes]
  }
  list(d=d, U=U, V=V)
}
```
Here it is applied to Georges's example, for which the current SVD() in matlib fails:
```r
set.seed(123)
A <- matrix(rnorm(100), 10)
S <- SVD(A)
s <- svd(A)
all.equal(S$U %*% diag(S$d) %*% t(S$V), A)
## [1] TRUE
all.equal(abs(s$u), abs(S$U))
## [1] TRUE
all.equal(abs(s$v), abs(S$V))
## [1] TRUE
all.equal(abs(s$d), abs(S$d))
## [1] TRUE
```
And here's a rank-deficient example:
```r
B <- matrix(1:9, nrow=3, byrow=TRUE)
S <- SVD(B)
S$U %*% diag(S$d) %*% t(S$V)
S
```
So, what should we do? I'd favour introducing a method=c("Jacobi", "eigen") argument to SVD() and provide both methods. The rationale for retaining the current "eigen" method is that it's very simple.
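For illustration, the dispatch itself is trivial with match.arg(). Here is a hedged sketch with the simple eigen route written inline; the function and argument names are mine, not necessarily what matlib will adopt.

```r
# Sketch of a method= argument for the SVD; names are illustrative.
# The "eigen" route is the current simple approach; the "jacobi"
# branch would call the Jacobi-rotation code posted above.
SVD2 <- function(X, method = c("jacobi", "eigen"),
                 tol = sqrt(.Machine$double.eps)) {
  method <- match.arg(method)
  if (method == "eigen") {
    e <- eigen(crossprod(X), symmetric = TRUE)  # forms t(X) %*% X: unstable
    d <- sqrt(pmax(e$values, 0))
    keep <- d > tol * d[1]                      # drop numerically zero values
    V <- e$vectors[, keep, drop = FALSE]
    d <- d[keep]
    U <- X %*% V %*% diag(1/d, nrow = length(d))
    list(d = d, U = U, V = V)
  } else {
    stop("the Jacobi method would be implemented as in the code above")
  }
}

set.seed(123)
A <- matrix(rnorm(100), 10)
S <- SVD2(A, method = "eigen")
all.equal(S$U %*% diag(S$d) %*% t(S$V), A)  # reconstruction check
```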
Best, John
I did some more work on SVD() along the lines that I suggested and committed the changes.
OK, let's see where this goes. But recall that the goal is to try to make algorithms/processes in linear algebra more transparent and amenable to explanation. I'll follow up later....
Yes, I agree, but it's really not good practice to form t(X) %*% X, which has a condition number that is about the square of that of X. So I tried to find the simplest algorithm I could that didn't do that, and also retained the simpler but numerically unstable code as an option.
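The squaring of the condition number is easy to see numerically:

```r
# The condition number of the cross-product is roughly the square of
# the condition number of X itself.
set.seed(123)
X <- matrix(rnorm(100), 10)
kappa(X, exact = TRUE)              # condition number of X
kappa(crossprod(X), exact = TRUE)   # about its square
```

With a condition number around 1e8, the cross-product is already numerically singular in double precision even though X itself is comfortably invertible.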
To build up a set of procedures, it's inevitable that there will be at least one building block somewhere that is based on a procedure that is not 'elementary', that is more complex than you want your students to understand. An interesting parallel is the 'fundamental theorem of algebra' whose proof is analytic. So one of the fundamental tools of algebra has a proof that is quite outside algebra and that doesn't use the techniques algebraists use for the rest of their work. (There is a purely algebraic proof of the fundamental theorem but I think it takes the whole of a thick book).
Another issue is that the traditional conceptual path through linear algebra involves using concepts and methods that one would never want to use for computation. This creates an interesting tension between an accessible theoretical structure that assumes infinite precision and methods that work in finite precision. Students who only learn the theory are not at all prepared to create tools that perform well numerically.
So, is there a way of building an accessible conceptual structure for the theory that takes a path through tools that are numerically sound? Maybe we could build almost everything on the SVD. As a concept it's no more difficult than the spectral decomposition (indeed I think it's much easier, because it can be visualized as displaying properties of the linear transformation it represents: what the transformation does to a unit sphere and cube), yet it provides a building block that is so much more general than the spectral decomposition. Spectral decomposition is just a corollary. Of course, spectral decomposition has its ellipse, but you get the same ellipse -- indeed more: the ellipses of the SVD include pancakes -- with the SVD as the image of the unit sphere.
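The "spectral decomposition is just a corollary" point can be seen numerically: for a symmetric positive-definite matrix, svd() and eigen() deliver the same factorization, up to the signs of the columns.

```r
set.seed(42)
X <- crossprod(matrix(rnorm(25), 5))   # symmetric positive definite
s <- svd(X)
e <- eigen(X, symmetric = TRUE)
all.equal(s$d, e$values)               # singular values = eigenvalues here
all.equal(abs(s$u), abs(e$vectors))    # vectors agree up to sign
```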
We might never need to teach Gram-Schmidt because it should never be used in computation. QR is analogous, but it shouldn't be used in computation without pivoting, which loses a bit of the pedagogical simplicity.
The SVD even has a song: https://www.youtube.com/watch?v=JEYLfIVvR9I.
Sorry, I couldn't resist this riff on the SVD. I recognize that there's a devil in the details but using the SVD as a basic building block seems like a tantalizing idea.
Summary of where this stands:

- Original issue: Problem with Gram-Schmidt, fixed; and a new function for Moore-Penrose. This is done.
- John suggested eigen & jacobi methods for SVD() and implemented them. I think this is fine because the goal of the package is largely tutorial, rather than maximal computational efficiency. I haven't tested this extensively, but it is reasonable and passes CRAN checks.

Georges' ideas are really intriguing, and matlib vignettes provide a vehicle for exploring these further with concrete illustrations.

I'm suggesting we close this issue again, and open a new issue for other, even related, topics.

I'm going away in mid-June and would like to move to getting the new release out the door, because there are other issues it fixes.
Hi,
All this seems reasonable to me.
Best, John
Hi,
I thought about adding a GramSchmidt() function and discovered that there already is one! The problem is that it can easily fail misleadingly if the matrix isn't of full column rank. Here's an improved version, and an example -- try the example first with the current version to see what I mean:
I'm also thinking about adding a function for the Moore-Penrose inverse as follows (continuing the preceding example):
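For concreteness, here is a minimal sketch of such a function, built on the SVD with a zero tolerance. This is my own illustrative wording, not necessarily the code that was attached; the name and tolerance handling are assumptions.

```r
# Illustrative sketch: Moore-Penrose inverse via the SVD, inverting only
# the singular values that exceed a relative tolerance.
MP <- function(X, tol = sqrt(.Machine$double.eps)) {
  s <- svd(X)
  pos <- s$d > tol * max(s$d)
  s$v[, pos, drop = FALSE] %*%
    diag(1/s$d[pos], nrow = sum(pos)) %*%
    t(s$u[, pos, drop = FALSE])
}

B <- matrix(1:9, nrow = 3, byrow = TRUE)   # rank 2
Bp <- MP(B)
all.equal(B %*% Bp %*% B, B)               # first Penrose condition holds
```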
Thoughts? John