quantum-visualizations / qmsolve

⚛️ A module for solving and visualizing the Schrödinger equation.
BSD 3-Clause "New" or "Revised" License

Feature lobpcg #6

Closed dhudsmith closed 3 years ago

dhudsmith commented 3 years ago

This branch implements an alternative eigensolver routine that converges much more quickly in some circumstances. The new solver is based on scipy.sparse.linalg.lobpcg. By default the code behaves as before and uses the eigsh solver. To use the new method, pass method="lobpcg" to Hamiltonian.solve.
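For reference, a minimal usage sketch (assuming a Hamiltonian H already set up as in the existing example scripts; the max_states argument follows the repo's usual examples):

```python
# Minimal usage sketch -- assumes H is a qmsolve Hamiltonian configured as in
# the existing example scripts; only the method keyword is new in this branch.
eigenstates_eigsh = H.solve(max_states=10)                    # default eigsh path
eigenstates_lobpcg = H.solve(max_states=10, method="lobpcg")  # new lobpcg path
```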

Unfortunately, lobpcg is not nearly as reliable as eigsh: in some cases it is very slow and produces badly wrong results. I have seen better convergence in situations that do not require a large grid size. I have only tested on the harmonic oscillator. I see no benefit in 1 or 2 dimensions, but a large benefit in 3 dimensions. I have written a script to demonstrate this, as discussed below. Further tests are needed to determine the stability in a wider range of situations.

Though not strictly required, the implementation benefits from providing a preconditioning operator that approximates the inverse of the matrix to be diagonalized (the Hamiltonian in this case). The actual inverse cannot be computed efficiently. The current code uses the "Jacobi" or "diagonal" preconditioner: a matrix that is zero everywhere except on the diagonal, whose entries are the reciprocals of the Hamiltonian's diagonal entries.
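For concreteness, here is a minimal sketch of that Jacobi preconditioner as a SciPy LinearOperator fed to lobpcg; the toy tridiagonal H below stands in for the real Hamiltonian:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg

def jacobi_preconditioner(H):
    """Approximate H^-1 by inverting only the diagonal of H."""
    inv_diag = 1.0 / H.diagonal()
    return LinearOperator(H.shape, matvec=lambda x: inv_diag * x, dtype=H.dtype)

# Toy stand-in for a Hamiltonian: 1D finite-difference kinetic term + potential.
n = 1000
H = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
H = H + sp.diags(np.linspace(0.0, 1.0, n))

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 5))  # random initial block for 5 eigenpairs
vals, vecs = lobpcg(H, X, M=jacobi_preconditioner(H), largest=False,
                    tol=1e-8, maxiter=200)
print(vals)
```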

To demonstrate the usage, I have created a new example script in a new examples/ folder (btw, perhaps we should put all examples here; the top level is looking very crowded :). The example script solves for the eigenstates of a single particle in a 3D isotropic harmonic oscillator. The lobpcg solver is ~40 times faster in my test, with relative errors <2% for the first 5 eigenvalues; the 10th eigenvalue has a 25% relative error. Here is an example output from running the script:

------------------------------
Solving with scipy.sparse.linalg.eigsh
Computing...
Took 36.023282051086426
Eigenvalues (eigsh) : [ 6.03533971 10.04416034 10.04416034 10.04416034 14.0257434  14.0257434
 14.0257434  14.05298098 14.05298098 14.05298098]
------------------------------
Solving with scipy.sparse.linalg.lobpcg
Computing...
Took 0.9391400814056396
Eigenvalues (lobpcg): [ 6.03600915 10.05316063 10.0997381  10.24087437 14.05243788 14.2446791
 14.31203711 14.66924361 15.23159918 18.16693888]
------------------------------
Solving with foo
Computing...
foo solver has not been implemented. Use one of ('eigsh', 'lobpcg')
------------------------------
Diff. over mean: [1.10914213e-04 8.95670917e-04 5.51807386e-03 1.93949903e-02
 1.90143947e-03 1.54886751e-02 2.02057964e-02 4.29119016e-02
 8.04941167e-02 2.55367358e-01]

Process finished with exit code 0

lobpcg commented 3 years ago

@dhudsmith https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/cluster/_spectral.py provides an example of how to call https://github.com/pyamg/pyamg/ as a preconditioner for lobpcg. That should probably solve the slow convergence of lobpcg that you observe.
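For reference, a minimal sketch of that pattern, in the style of sklearn's _spectral.py; pyamg's gallery Poisson matrix stands in for a (suitably shifted, positive definite) Hamiltonian:

```python
import numpy as np
import pyamg
from pyamg.gallery import poisson
from scipy.sparse.linalg import lobpcg

# Stand-in for a shifted, positive definite Hamiltonian: 3D Laplacian on a
# 50^3 grid from pyamg's example gallery.
H = poisson((50, 50, 50), format="csr")

ml = pyamg.smoothed_aggregation_solver(H)  # build the AMG hierarchy once
M = ml.aspreconditioner()                  # LinearOperator applying one V-cycle

rng = np.random.default_rng(0)
X = rng.normal(size=(H.shape[0], 10))      # random initial block, 10 eigenpairs
vals, vecs = lobpcg(H, X, M=M, largest=False, tol=1e-8, maxiter=100)
print(vals)
```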

rafael-fuente commented 3 years ago

@lobpcg Thank you very much for the information, I'll give it a look.

We are currently using lobpcg with an initial eigenvector guess computed with scipy.sparse.linalg.eigsh, which wraps ARPACK's implementation of the Implicitly Restarted Lanczos Method. The computed eigenvectors are interpolated onto a larger grid and then used as an initial approximation for scipy.sparse.linalg.lobpcg. Although convergence is still a bit slow with the diagonal preconditioner, this simple technique yields a significant reduction in the relative error of the final eigenvalues with just ~20 iterations.

While scipy.sparse.linalg.eigsh isn't able to handle eigenvectors for grid sizes larger than 50^3, scipy.sparse.linalg.lobpcg has been able to deal with sizes on the order of 200^3 using the approach described above.
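A sketch of that coarse-to-fine warm start; build_hamiltonian is a hypothetical helper standing in for the actual grid setup, and the interpolation uses scipy.ndimage.zoom:

```python
import numpy as np
from scipy.ndimage import zoom
from scipy.sparse.linalg import eigsh, lobpcg

k = 10                       # number of eigenpairs wanted
n_coarse, n_fine = 32, 128   # grid points per dimension

# build_hamiltonian(n) is hypothetical: it returns the sparse Hamiltonian
# discretized on an n^3 grid, standing in for the code's actual setup.
H_coarse = build_hamiltonian(n_coarse)
vals_c, vecs_c = eigsh(H_coarse, k=k, which="SM")  # ARPACK on the cheap grid

# Interpolate each coarse eigenvector onto the fine grid to form lobpcg's
# initial block X.
scale = n_fine / n_coarse
X = np.column_stack([zoom(v.reshape((n_coarse,) * 3), scale).ravel()
                     for v in vecs_c.T])

H_fine = build_hamiltonian(n_fine)
vals, vecs = lobpcg(H_fine, X, largest=False, maxiter=20)  # ~20 iterations
```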

lobpcg commented 3 years ago

@rafael-fuente yes, what you use is a common and simple way to accelerate lobpcg. It can be combined with the AMG preconditioning I suggest above. We were routinely running 200^3 Laplacian tests for LOBPCG with MG preconditioning already in 2007; see, e.g., https://arxiv.org/abs/0705.2626

lobpcg commented 3 years ago

@rafael-fuente How does pyamg preconditioning work for you, I am curious?

rafael-fuente commented 3 years ago

> @rafael-fuente How does pyamg preconditioning work for you, I am curious?

Its convergence is considerably faster compared to the naive diagonal preconditioner. However, it fails when used for potentials with singularities, like the Coulomb potential, on a uniform grid. This seems to be related to the large spread in the values the Hamiltonian matrix diagonal takes; maybe a nonlinear grid mapping would solve it. The pyamg package also requires NumPy's float128 data type, which only seems to be supported on Linux.

Sorry for my delay in answering. I'm quite busy these days.

lobpcg commented 3 years ago

@rafael-fuente Thanks! You may need to apply AMG to a shifted Hamiltonian, moving its spectrum into positive values, to make it work; the convergence theory requires this. I have used pyamg on MS Windows with no issues, to my recollection. In 2D, and especially in 1D, the Hamiltonian appears to be narrow-banded for your discretization, so you could also try using the inverse of the shifted Hamiltonian, via a banded Cholesky factorization in float32, as the preconditioner.
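A sketch of that banded-Cholesky idea, assuming a symmetric sparse Hamiltonian with u superdiagonals and a shift chosen so that H + shift*I is positive definite (the packing follows scipy.linalg.cholesky_banded's upper-banded layout):

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded
from scipy.sparse.linalg import LinearOperator

def banded_cholesky_preconditioner(H, u, shift=0.0):
    """Preconditioner M ~ (H + shift*I)^-1 via a float32 banded Cholesky.

    H     : symmetric scipy sparse matrix with u superdiagonals
    shift : chosen so that H + shift*I is positive definite
    """
    n = H.shape[0]
    # Pack the upper band into the (u+1, n) layout cholesky_banded expects:
    # row u - k holds the k-th superdiagonal, right-aligned.
    ab = np.zeros((u + 1, n), dtype=np.float32)
    for k in range(u + 1):
        d = H.diagonal(k).astype(np.float32)
        if k == 0:
            d = d + np.float32(shift)
        ab[u - k, k:] = d
    cb = cholesky_banded(ab)  # upper form is the default
    solve = lambda x: cho_solve_banded((cb, False), x.astype(np.float32))
    return LinearOperator(H.shape, matvec=lambda x: solve(x).astype(x.dtype),
                          dtype=np.float64)

# Usage sketch: M = banded_cholesky_preconditioner(H, u=bandwidth, shift=s),
# then pass M to scipy.sparse.linalg.lobpcg as the preconditioner.
```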