@dhudsmith https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/cluster/_spectral.py provides an example of how to call https://github.com/pyamg/pyamg/ as a preconditioner for lobpcg. That should probably resolve the slow lobpcg convergence you observe.
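For reference, here is a minimal sketch of that pattern (in the same spirit as sklearn's _spectral.py, not a copy of it): build a pyamg smoothed-aggregation hierarchy from the sparse operator and pass it to lobpcg as the preconditioner M. The 1D stand-in Hamiltonian, grid size, and block size below are illustrative only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg
from pyamg import smoothed_aggregation_solver

n = 10000
# Stand-in Hamiltonian: finite-difference -1/2 d^2/dx^2 plus a harmonic potential.
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
kinetic = 0.5 * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / dx**2
H = (kinetic + sp.diags(0.5 * x**2)).tocsr()

# Build the AMG hierarchy once and reuse it as a preconditioner for lobpcg.
ml = smoothed_aggregation_solver(H)
M = ml.aspreconditioner()

# Random initial block of 5 trial vectors; largest=False asks for the smallest eigenpairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 5))
eigenvalues, eigenvectors = lobpcg(H, X, M=M, tol=1e-8, largest=False, maxiter=200)
print(eigenvalues)
```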
@lobpcg Thank you very much for the information, I'll give it a look.
We are currently using lobpcg with an initial eigenvector guess computed with scipy.sparse.linalg.eigsh, which wraps the ARPACK implementation of the Implicitly Restarted Lanczos Method. The computed eigenvectors are interpolated onto a larger grid and then used as the initial approximation for scipy.sparse.linalg.lobpcg. Although convergence is still somewhat slow with the diagonal preconditioner, this simple technique gives a significant reduction in the relative error of the final eigenvalues after only a few (~20) iterations.
While scipy.sparse.linalg.eigsh cannot handle eigenvectors with dimension sizes larger than 50^3, scipy.sparse.linalg.lobpcg has been able to deal with sizes of the order of 200^3 with the approach described above.
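A minimal 1D sketch of this warm-start strategy (the actual problem above is 3D, and the grid sizes and potential here are only illustrative): solve a small problem with eigsh on a coarse grid, interpolate the eigenvectors onto the fine grid, and feed them to lobpcg as the initial block X.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, lobpcg

def hamiltonian_1d(n, L=20.0):
    """Finite-difference -1/2 d^2/dx^2 + 1/2 x^2 on n grid points."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    kinetic = 0.5 * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    return (kinetic + sp.diags(0.5 * x**2)).tocsr(), x

k = 5                                  # number of eigenstates wanted
H_coarse, x_coarse = hamiltonian_1d(200)
H_fine, x_fine = hamiltonian_1d(2000)

# Small coarse-grid problem: ARPACK (eigsh) is fast and robust here.
_, vec_coarse = eigsh(H_coarse, k=k, which="SA")

# Interpolate each coarse eigenvector onto the fine grid and re-orthonormalize.
X0 = np.column_stack([np.interp(x_fine, x_coarse, vec_coarse[:, i]) for i in range(k)])
X0, _ = np.linalg.qr(X0)

# Warm-started lobpcg typically needs only a few iterations from this guess.
vals, vecs = lobpcg(H_fine, X0, tol=1e-8, largest=False, maxiter=50)
print(vals)
```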
@rafael-fuente yes, what you use is a common and simple way to accelerate lobpcg. It can be combined with AMG preconditioning that I suggest above. We were routinely running 200^3 Laplacian tests for LOBPCG with MG preconditioning already in 2007, e.g., https://arxiv.org/abs/0705.2626
@rafael-fuente How does pyamg preconditioning work for you, I am curious?
Its convergence is considerably faster than with the naive diagonal preconditioner. However, it fails for potentials with singularities, like the Coulomb potential, on a uniform grid. This seems to be related to the large spread in the values of the Hamiltonian's diagonal. Maybe a nonlinear grid mapping would solve it. The pyamg package also requires NumPy's float128 data type, which only seems to be supported on Linux.
Sorry for my delay in answering. I'm quite busy these days.
@rafael-fuente Thanks! You may need to apply AMG to a shifted Hamiltonian, moving its spectrum into positive values, to make it work; the convergence theory requires this. I have used pyamg on MS Windows with no issues, to my recollection. In 2D and especially 1D the Hamiltonian appears to be narrow banded for your approximations, so you could also try using the inverse of the shifted Hamiltonian, via a banded Cholesky factorization in float32, as the preconditioner.
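A hedged sketch of that banded-Cholesky idea for a 1D finite-difference Hamiltonian (tridiagonal, hence narrow banded); the shift sigma and all sizes are assumptions, and sigma only needs to be large enough that H + sigma*I is positive definite for your problem.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import cholesky_banded, cho_solve_banded
from scipy.sparse.linalg import lobpcg, LinearOperator

n = 5000
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
main = 1.0 / dx**2 + 0.5 * x**2          # diagonal of -1/2 d^2/dx^2 + 1/2 x^2
off = -0.5 / dx**2 * np.ones(n - 1)      # sub/superdiagonal
H = sp.diags([off, main, off], [-1, 0, 1], format="csr")

sigma = 10.0                             # shift so H + sigma*I is positive definite (illustrative value)
# Upper banded storage for cholesky_banded: row 0 = superdiagonal, row 1 = main diagonal.
ab = np.zeros((2, n), dtype=np.float32)
ab[0, 1:] = off
ab[1, :] = main + sigma
cb = cholesky_banded(ab)                 # cheap float32 factorization of the shifted Hamiltonian

def solve_shifted(b):
    # Apply (H + sigma*I)^{-1} to b in float32; used only as a preconditioner.
    return cho_solve_banded((cb, False), b.astype(np.float32))

M = LinearOperator(H.shape, matvec=solve_shifted, dtype=np.float64)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 5))
vals, vecs = lobpcg(H, X, M=M, tol=1e-8, largest=False, maxiter=100)
print(vals)
```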
This branch implements an alternative eigensolver routine that converges much more quickly in some circumstances. The new solver is based on scipy.sparse.linalg.lobpcg. By default the code behaves as before and uses the eigsh solver. To use the new method, pass method="lobpcg" to Hamiltonian.solve.
Unfortunately, lobpcg is not nearly as reliable as eigsh. In some cases, lobpcg is very slow and produces very wrong results. I have seen better convergence in situations that do not require a large grid size. I have only tested on the harmonic oscillator. I see no benefit for 1 or 2 dimensions, and a large benefit in 3 dimensions. I have written a script to demonstrate this, as discussed below. Further tests are needed to determine the stability in a wider range of situations.
Though not strictly required, the implementation benefits from a preconditioning operator that approximates the inverse of the matrix to be diagonalized (the Hamiltonian in this case). The actual inverse cannot be computed efficiently. The current code uses a "Jacobi" or "diagonal" preconditioning matrix, which is zero everywhere except on the diagonal, where the values are the reciprocals of the Hamiltonian's diagonal entries; a minimal sketch of this idea is shown below.
To demonstrate the usage, I have created a new example script in a new examples/ folder (btw, perhaps we should put all examples here; the top level is looking very crowded :). The example script solves for the eigenstates of a single particle in a 3D isotropic harmonic oscillator. The lobpcg solver is ~40 times faster in my test, with relative errors <2% for the first 5 eigenvalues; the 10th eigenvalue has a relative error of about 25%. Here is an example output from running the script: