Closed · mhauru closed this issue 6 years ago
Here's a smaller sparse test matrix, obtained by rounding and dropping small entries in the above: problem2.zip
$ octave
octave:1> load problem2.mat
octave:2> size(M)
ans =
1296 1296
octave:3> nnz(M)
ans = 1373
octave:4> M(1,114)
ans = Compressed Column Sparse (rows = 1, cols = 1, nnz = 1 [100%])
(1, 1) -> -800000000 + 300000000i
octave:5> eigs(M,1)
ans = -2.2867 + 19.3469i
octave:6> eigs(M,1)
ans = -11.84834 + 0.33947i
octave:7> max(abs(eig(full(M))))
ans = 0
Of course, this matrix also has the highly degenerate zero eigenvalue. Setting M(1,1) = 10
still triggers the same behavior from ARPACK.
Can you please try with an initial vector of the form (M*some_vector)?
octave:17> load problem2
octave:18> M(1,1) = 10;
octave:19> opts=struct(); opts.v0 = M*rand(size(M,2),1); eigs(M,1,"lm")
ans = 0.91967 - 15.95032i
octave:20> max(abs(eig(M)))
ans = 10
Sorry, copypasted wrong lines --- v0 in range indeed seems to help:
octave:52> opts=struct(); opts.v0 = M*rand(size(M,2),1); eigs(M,1,"lm",opts)
ans = 1.0000e+01 + 4.7035e-17i
Sorry, I was referring to the original problem with trouble_matrix. Can you please try something like
opts=struct(); opts.v0 = M*rand(size(M,2),1); eigs(M,6,"lm",opts)
I get the correct result, both in Octave and with a plain Fortran program linked to ARPACK.
octave:8> opts=struct(); opts.v0 = M*rand(size(M,2),1); eigs(M,6,"lm",opts)
ans =
   0.654193 + 0.000000i
  -0.084038 + 0.135723i
  -0.084038 - 0.135723i
   0.127446 + 0.000000i
  -0.045844 + 0.052161i
  -0.045844 - 0.052161i
Works also for the original problem.
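For SciPy users hitting the same issue, the equivalent workaround is to pass `v0` through the matrix before handing it to `scipy.sparse.linalg.eigs`. A minimal sketch with an assumed toy matrix (a few nonzero eigenvalues plus a highly degenerate zero eigenvalue, standing in for the problem matrices above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 60

# Toy stand-in for the problem matrix: nonzero eigenvalues 10, 3, 2, 1
# and a large null space (all remaining diagonal entries are zero).
d = np.zeros(n)
d[:4] = [10.0, 3.0, 2.0, 1.0]
M = sp.diags(d).tocsr()

# Force the starting vector into range(M) before ARPACK sees it.
v0 = M @ rng.standard_normal(n)
val = spla.eigs(M, k=1, which="LM", v0=v0, return_eigenvectors=False)
print(val)  # the dominant eigenvalue, close to 10
```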
In the routines Xgetv0 a comment states that the initial residual is forced to be in the range of the operator OP, that is, of the form OP * v. In the code, however, this is done only in the generalized case (bmat .eq. 'G'). Although it is not clear to me why this fixes the problem here (I could not find any reference in the literature), I think it is safe to modify Xgetv0 so that the initial vector is always in the range of OP, not only in the generalized case. This would at least fix the following inconsistency: if (bmat .eq. 'G') with M = identity and you provide an initial vector v0, ARPACK currently generates a new initial vector of the form A*v0; if (bmat .eq. 'I') and you provide the same v0 (an equivalent problem), ARPACK takes v0 itself as the initial vector, so the two runs can give different results. If there are no objections, I will try to make a pull request.
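The effect of forcing the start vector into the range of OP can be seen on an assumed toy matrix (not one from the thread): multiplying the guess by M annihilates any component in the null space, so the Arnoldi iteration starts with no weight on the degenerate zero eigenvalue.

```python
import numpy as np

# Toy matrix: one eigenvalue 10, a three-dimensional null space.
M = np.diag([10.0, 0.0, 0.0, 0.0])

x = np.array([1.0, 1.0, 1.0, 1.0])  # generic guess with a large null-space part
v0 = M @ x                          # = [10, 0, 0, 0], entirely in range(M)

print(v0[1:])  # the null-space components are exactly zero
```

For a diagonalizable matrix the range of M contains no null-space directions at all; the defective (Jordan-block) case is subtler.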
I will try to make a pull request.
Many thanks! Please also include tests :)
I would not include a test based on the original report, since the matrix is huge. Is it fine if I compare the behavior of (bmat .eq. 'G') .and. (M .eq. 'identity') .and. (v0 given) with (bmat .eq. 'I') .and. (v0 given), with the test passing if the two results coincide?
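The same consistency check can be sketched through SciPy's interface (an assumed toy matrix with known spectrum 1..100, not the test actually submitted): the standard problem and the generalized problem with M = identity, seeded with the same v0, should agree to solver tolerance.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
A = sp.diags(np.arange(1.0, n + 1.0)).tocsr()  # known spectrum 1..100
v0 = np.ones(n)                                # same start vector for both runs

# Standard problem (bmat = 'I' in ARPACK terms).
std = spla.eigs(A, k=3, which="LM", v0=v0, return_eigenvectors=False)

# Generalized problem (bmat = 'G') with M = identity.
gen = spla.eigs(A, k=3, which="LM", M=sp.identity(n, format="csc"),
                v0=v0, return_eigenvectors=False)

print(np.sort(std.real), np.sort(gen.real))  # both should be 98, 99, 100
```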
I made the pull request. It is still not true that (bmat .eq. 'I') is equivalent to (bmat .eq. 'G') .and. (M .eq. 'identity_matrix'). This looks strange; I will investigate. Anyway, the original issue is fixed for me.
Can we close this issue?
yes, thanks!
For the matrix in http://mhauru.org/trouble_matrix.zip, scipy.sparse.linalg.eigs, which calls ARPACK, gives the wrong dominant eigenvalue. The result depends on the initial guess for the eigenvector. Example python 3 code:
For fun, I've added a piece at the end that finds the correct dominant eigenvalue with a simple power iteration.
Output:
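A minimal sketch of such a power iteration (with an assumed toy matrix standing in for trouble_matrix, and a hypothetical helper name): repeatedly apply M, normalize, and read off the dominant eigenvalue from the Rayleigh quotient.

```python
import numpy as np

def power_iteration(M, maxiter=10_000, tol=1e-12, seed=0):
    """Dominant eigenvalue of M by repeated multiplication (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    v = M @ rng.standard_normal(M.shape[1])  # start in range(M)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(maxiter):
        w = M @ v
        lam_new = np.vdot(v, w)              # Rayleigh quotient estimate
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:
            break                            # iterate landed in the null space
        v = w / norm_w
        if abs(lam_new - lam) <= tol * max(1.0, abs(lam_new)):
            lam = lam_new
            break
        lam = lam_new
    return lam

A = np.diag([10.0, 3.0, 1.0, 0.0, 0.0])     # toy stand-in for the real matrix
print(power_iteration(A))  # converges to the dominant eigenvalue, 10
```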
Asking for more than one dominant eigenvalue goes equally wrong; I'm just choosing to showcase the simplest possible case. I tried increasing ncv and maxiter, but it did not help. In case it matters, the dominant eigenvalue of M is not degenerate. I originally filed this as a scipy issue (https://github.com/scipy/scipy/issues/8307), but @pv checked the same matrix in Octave and ran into the same behavior, so I presume this is a problem with ARPACK.
I think I'm using ARPACK 3.3.0 since that's what comes with scipy 1.0.0.