jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

TEBD 2D documentation #78

Closed cbwang2016 closed 3 years ago

cbwang2016 commented 3 years ago

Hello everyone, I am looking for a package for doing TEBD in 2D. I found quimb recently and the code for TEBD 2D is here, but there isn't any documentation at http://quimb.readthedocs.io/en/latest/. Is TEBD 2D a stable piece of functionality in quimb?

jcmgray commented 3 years ago

Hi @cbwang2016, sorry it's not very clear, but the TEBD2D class is the base class that both SimpleUpdate and FullUpdate inherit from - details for using those are here: https://quimb.readthedocs.io/en/latest/tensor-2d.html#Specifying-2D-Hamiltonians. These are of course both imaginary time algorithms by default. The term TEBD is used here perhaps slightly imprecisely to mean TN evolution with trotterized local Hamiltonians and compression.
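For reference, a rough sketch of the simple update usage (loosely following the docs page above - the exact class and argument names may differ slightly between versions, so treat this as illustrative):

import quimb as qu
import quimb.tensor as qtn

Lx = Ly = 4

# random initial PEPS and a nearest neighbor Heisenberg Hamiltonian
psi0 = qtn.PEPS.rand(Lx, Ly, bond_dim=2, seed=42)
ham = qtn.LocalHam2D(Lx, Ly, H2=qu.ham_heis(2))

# imaginary time 'TEBD' via simple update, with max bond dimension D
su = qtn.SimpleUpdate(psi0, ham, D=4)
su.evolve(100, tau=0.1)
psi = su.state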

What is your use case? Since there is no canonical form in 2D there's no canonical TEBD algorithm - one has to choose some form of approximate gauging and gate application.

cbwang2016 commented 3 years ago

Thanks for your reply! Actually I want to do real-time evolution, like this. Is it possible to do that using quimb?

jcmgray commented 3 years ago

Technically that should be possible with very minimal changes:

However, the main issue is that I expect the bond dimension will grow out of control very quickly, since: 1) scaling with bond dimension is far more expensive with PEPS, and 2) there is no canonical form with which to optimally truncate.

Do you have a use case in mind where you are only interested in very short time scales, or believe very little entanglement will be generated?

cbwang2016 commented 3 years ago

Thank you! For my use case, we're only interested in a small-scale (5x5 sites) 2D evolution & ground state problem. Do you think 2D TEBD is a proper choice for solving a spin-1/2 5x5 site 2D evolution problem?

jcmgray commented 3 years ago

I think 5x5 is pretty borderline - while the full vector for a 25-qubit state fits easily into memory, 2D sparse Hamiltonians are still quite dense and might take a fair bit more memory to build (e.g. more than 16GB of memory and a bit of care constructing - hopefully using some symmetry). But then you could find the groundstate very accurately (to machine precision), and evolve to fairly long times without worrying about entanglement:

import quimb as qu

# a 5x4 lattice -> 20 sites, comfortably small for exact methods
Lx = 5
Ly = 4

# random initial state and sparse 2D Heisenberg Hamiltonian
psi0 = qu.rand_ket(2**(Lx * Ly))
ham = qu.ham_heis_2D(Lx, Ly, sparse=True)

# exact groundstate via sparse diagonalization
gs = qu.groundstate(ham)

# real time evolution (ODE integration by default)
evo = qu.Evolution(psi0, ham, progbar=True)
evo.update_to(20)
# 100%|##########| 100/100 [01:03<00:00,  1.59%/s]

and in case you haven't seen them:

quimb isn't super memory optimized for scaling these calculations, so you might want to check other libraries like https://github.com/weinbe58/QuSpin as well.

On the other hand, depending on the model (nearest neighbor?), PEPS will be able to find pretty accurate groundstates quite easily at these sizes, with much lower memory requirements. You might be able to do very short real-time evolutions, though I don't have a feel for that. The main advantage would just be scaling beyond 5x5.

cbwang2016 commented 3 years ago

You're right, integrating the complex ODE might be the best method for my case. It should be much better than PEPS.

I'm also curious: is it possible to make the ODE solver parallel?

My idea is to construct the Hamiltonian with scipy.sparse.linalg.LinearOperator, and inside the LinearOperator apply the one-site terms and the two-site terms in parallel, then sum them all up. Do you think this could accelerate the ODE solver?
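Something like this rough sketch (the decomposition into terms and the thread pool are just illustrative, I'm assuming ikron accepts a two-site term placed over a consecutive pair of sites, and whether this actually helps will depend on the sparse matvec releasing the GIL):

import quimb as qu
from concurrent.futures import ThreadPoolExecutor
from scipy.sparse.linalg import LinearOperator

n = 14
dims = [2] * n

# illustrative decomposition into sparse local two-site terms
terms = [qu.ikron(qu.ham_heis(2, sparse=True), dims, [i, i + 1], sparse=True)
         for i in range(n - 1)]

pool = ThreadPoolExecutor()

def matvec(x):
    # apply every local term to x in parallel, then sum the results
    futures = [pool.submit(term.dot, x) for term in terms]
    return sum(f.result() for f in futures)

H_op = LinearOperator(shape=(2**n, 2**n), matvec=matvec, dtype=complex)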

jcmgray commented 3 years ago

The default parallelization currently is achieved using threaded sparse matrix multiplication - possibly that could be sped up in other ways, but I'd work out whether that is really the limiting factor before coming up with a new scheme.

Note you can also use slepc4py to do a Krylov exponentiation evolution, which allows one to use MPI parallelism and memory distribution, though there are quite a few (temperamental) moving parts to get that all working.

cbwang2016 commented 3 years ago

So, can you compare method="integrate" and method="expm" for me? For example, which one do you think is faster for a 25-site problem? (For my case, I've got a 32-core CPU and a Tesla V100 GPU.)

jcmgray commented 3 years ago

I think the integrate method is probably your best bet - it has a few advantages.

The 'expm' method is single shot, so it just takes you directly from t0 -> T. I don't have a good feeling for at what point that becomes quicker than integrating, but possibly if you were only interested in the state at time T it could make more sense. Mainly it just allows one to use slepc4py & MPI. I would do some tests.
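If it helps, a rough way to compare the two on a small problem would be something like this (the Heisenberg chain and times here are arbitrary, just for timing purposes):

import quimb as qu

n = 14
H = qu.ham_heis(n, sparse=True)
psi0 = qu.rand_ket(2**n, seed=42)

# ODE integration (the default) - gives access to intermediate times / a progress bar
evo_ode = qu.Evolution(psi0, H, method='integrate', progbar=True)
evo_ode.update_to(5)

# single-shot Krylov exponentiation straight to the target time
evo_expm = qu.Evolution(psi0, H, method='expm')
evo_expm.update_to(5)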

cbwang2016 commented 3 years ago

After some tests, it seems that the qu.Lazy class is not powerful enough if we just naively use it to integrate the ODE:

Traceback (most recent call last):
  File "mpi_test.py", line 33, in <module>
    evo = qu.Evolution(psi0, H, progbar=True) 
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/evo.py", line 401, in __init__
    self._start_integrator(ham, int_small_step)
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/evo.py", line 537, in _start_integrator
    nrm0 = norm(H0, 'f')
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/linalg/base_linalg.py", line 463, in norm
    return methods[(types[ntype], issparse(A))](A, **kwargs)
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/linalg/base_linalg.py", line 419, in norm_fro_dense
    return vdot(A, A).real**0.5
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/core.py", line 265, in realified_fn
    return realify_scalar(fn(*args, **kwargs), imag_tol=imag_tol)
  File "/home/l/anaconda3/lib/python3.8/site-packages/quimb/core.py", line 606, in vdot
    return np.vdot(a.ravel(), b.ravel())
AttributeError: 'Lazy' object has no attribute 'ravel'

code:

import quimb as qu
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
print(f"I am worker {rank} of total {size} running main script...")

n = 14
shape = (2**n, 2**n)

H = qu.Lazy(qu.ham_heis, n=n, sparse=True, shape=shape)
psi0 = qu.rand_ket(2**n, seed=42)
evo = qu.Evolution(psi0, H, progbar=True) 
evo.update_to(5)

version:

$ python
Python 3.8.5 (default, Sep  4 2020, 07:30:14) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import quimb
>>> quimb.__version__
'1.3.0+246.g71f4d7f'
>>>

run:

quimb-mpi-python -s mpi_test.py

I guess I need to handle the ownership for each MPI process manually?

jcmgray commented 3 years ago

I'm afraid the MPI / Lazy construct is only for the MPI slepc4py solvers, which for Evolution means

method='expm', expm_backend='slepc'

i.e. it's not supported for ODE integration yet (although petsc/slepc does have interfaces for that).
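So adapting your script, something along these lines is what the Lazy / MPI machinery is intended for (an untested sketch on my part):

# mpi_test.py adapted to the slepc 'expm' backend (run with quimb-mpi-python)
import quimb as qu

n = 14
shape = (2**n, 2**n)

H = qu.Lazy(qu.ham_heis, n=n, sparse=True, shape=shape)
psi0 = qu.rand_ket(2**n, seed=42)

evo = qu.Evolution(psi0, H, method='expm', expm_backend='slepc', progbar=True)
evo.update_to(5)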

cbwang2016 commented 3 years ago

OK, I get it. So what kind of parallel ODE integration method do you recommend?

jcmgray commented 3 years ago

Is the default method, just with parallel sparse matrix multiplication and the scipy ODE solver, not sufficient?

cbwang2016 commented 3 years ago

It seems that parallel sparse matrix multiplication is not very simple: https://stackoverflow.com/questions/16814273/how-to-parallelise-scipy-sparse-matrix-multiplication In that example, a C code snippet is used.

jcmgray commented 3 years ago

Ah sorry, I wasn't very clear - parallel sparse matrix multiplication is already implemented in quimb using numba, and it should be used by default:

https://github.com/jcmgray/quimb/blob/71f4d7f7f2bc4aa0c75a4d2bf160fb24303090ca/quimb/core.py#L539
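For what it's worth, my understanding is that this kernel is used automatically whenever quimb's dot is applied to a CSR matrix and a dense vector, e.g. (a quick sketch, assuming the dispatch works as I describe):

import quimb as qu

H = qu.ham_heis(16, sparse=True)   # sparse (CSR) Heisenberg Hamiltonian
psi = qu.rand_ket(2**16)

# should dispatch to the numba-parallelized CSR matvec internally
out = qu.dot(H, psi)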

cbwang2016 commented 3 years ago

Thank you so much! You've answered all my questions!