jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

Bugfix: SVD fallback exception from numba compiled function #238

Closed mlxd closed 3 months ago

mlxd commented 3 months ago

Details: PennyLane has a new device (default.tensor) which can use quimb under the hood for its MPS and TN implementations. We have observed a failure mode with the MPS backend for the following circuit:

import pennylane as qml

wires = 62
layers = 4
dev = qml.device("default.tensor", wires=wires)

@qml.qnode(dev)
def circuit():
    for l in range(layers):
        for i in range(wires):
            qml.RX(0.1234, wires=i)
            qml.CNOT(wires=[i, (i + 1) % wires])

    return qml.expval(qml.PauliZ(0))

print(circuit())

For the above, when OMP_NUM_THREADS is set to any value other than the exact number of physical cores, the GESDD SVD method fails to converge (a known issue others have hit with LAPACK GESDD calls). This should be caught by the quimb/tensor/decomp.py::svd_truncated_numpy function, which checks for a numpy-raised LinAlgError and, if it occurs, falls back to the GESVD implementation. However, when numba compiles the svd_truncated_numba function, the raised error is no longer a LinAlgError but the more general ValueError, so the fallback is never triggered.

This PR replaces the LinAlgError check with a ValueError check, which catches both cases since LinAlgError is a subclass of ValueError (https://numpy.org/doc/stable/reference/generated/numpy.linalg.LinAlgError.html).


With the above change, we can successfully run the circuit above. If keeping both error types is preferred, I'd be happy to re-add the LinAlgError check.

jcmgray commented 3 months ago

Thanks for the PR, looks good to me!

Seems like the underlying bug is a bit of a problem though - does it only happen occasionally?

mlxd commented 3 months ago

> Thanks for the PR, looks good to me!
>
> Seems like the underlying bug is a bit of a problem though - does it only happen occasionally?

Based on a search today, the convergence issue with threads appears for other users of OpenBLAS too, even prompting workarounds in some Julia wrapper packages (and outright exclusion of GESDD by some), as well as a variety of other reports. I think the divide-and-conquer decomposition used by GESDD just fails often enough to be noticeable in practice --- in the case we encountered, it only appeared for some matrices with very high condition numbers. I think having the fallback as structured (i.e. fast path through numba, fallback to GESVD in scipy) is likely the best option.
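For illustration, the fast-path/fallback structure could be sketched roughly as follows (a hypothetical standalone example, not quimb's implementation; the function name `svd_with_fallback` is made up):

```python
import numpy as np
import scipy.linalg


def svd_with_fallback(a):
    """Try the fast divide-and-conquer GESDD driver first; if it fails
    to converge, fall back to the slower but more robust GESVD driver."""
    try:
        # fast path: numpy's svd uses the GESDD LAPACK driver
        return np.linalg.svd(a, full_matrices=False)
    except ValueError:
        # also catches LinAlgError ("SVD did not converge"), since
        # LinAlgError subclasses ValueError
        return scipy.linalg.svd(a, full_matrices=False,
                                lapack_driver="gesvd")


a = np.random.default_rng(0).normal(size=(4, 3))
U, s, Vh = svd_with_fallback(a)
print(U.shape, s.shape, Vh.shape)  # (4, 3) (3,) (3, 3)
```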

Otherwise, I'm not sure if there's anything that can be recommended on the OpenBLAS side, since this is mentioned elsewhere. I'll try to gather some info on whether it is just x86_64 (where I hit the issue), whether macOS on M1+ is affected, and whether conda-shipped numpy (with MKL) is affected too.

PietropaoloFrisoni commented 3 months ago

Thanks @jcmgray for merging this PR so quickly!

As @mlxd mentioned, we are developing a new PennyLane device based on quimb (thanks for developing such a great package). I guess this change will end up in quimb 1.8.2; if so, when do you plan to release that version? I ask because we plan to release a new version of PennyLane with this new device in a few weeks. Thanks!

jcmgray commented 3 months ago

I can mint a release soon! My only hesitation is that currently not all tests are passing, due to a cryptic 'fatal exception' on the Windows GitHub CI, but I don't think it's something related to quimb.

mlxd commented 3 months ago

Thanks @jcmgray. We've had similar issues in the past, and the usual culprit is a conflicting threading runtime pulled in by dependencies with OpenMP libraries, where dual calls to the initialisation routine cause failures. In this instance, it looks like intel-openmp is being pulled into your conda/mamba env, which may be the cause here, assuming another package requires it (or a different version of the OpenMP library altogether).

I may be completely wrong, but it could be worth trying to set the env var KMP_DUPLICATE_LIB_OK=True, which would at least mitigate the dual-initialization failure mode if that is the root cause: https://github.com/explosion/spaCy/issues/8366. If not that, then removing MKL/Intel OpenMP may be worth a try. Otherwise, creating a separate virtualenv and pip-installing the PyPI packages (which should pull in different dependent libs than the conda ones) could be another avenue, at the expense of a less general CI setup.
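If it helps, the workaround could be applied at the top of the test entry point, before any library has a chance to initialise its OpenMP runtime (a sketch; this only suppresses the duplicate-runtime abort, it does not fix the underlying conflict):

```python
import os

# Set before importing numpy/quimb/etc., so the flag is visible when
# the (possibly duplicated) OpenMP runtime first initialises.
# NOTE: this masks the symptom rather than resolving the library clash.
os.environ["KMP_DUPLICATE_LIB_OK"] = "True"

import numpy as np  # imported only after the flag is set
```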

jcmgray commented 3 months ago

Thanks for the tips @mlxd, I'll give those a try.

jcmgray commented 3 months ago

Moving the Windows CI to OpenBLAS indeed seems to have fixed it.