jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

Memory usage in contraction #25

Closed: pulkin closed this issue 5 years ago

pulkin commented 5 years ago

My laptop recently froze on contraction of some matrix product bracket:

```python
bra = basis_1p(n, i)
ket = basis_1p(n, j)
bra.align_(mpo, ket)
(bra & mpo & ket) ^ all
```

bra and ket are MPS_product_state, mpo is a MatrixProductOperator. Is the contraction order in this case always optimal, or do I need some additional manipulations?

(Not sure if it is an issue)

jcmgray commented 5 years ago

Hi @pulkin, a few thoughts, as it should definitely not be blowing up (what are your bond sizes?).

At some point I think I need to refresh the docs on contraction schemes / paths etc!

pulkin commented 5 years ago

It seems I did not have opt_einsum installed. While this is easy to fix, I am wondering why the numpy backend uses tensordot rather than einsum, which is also capable of optimal contraction?

jcmgray commented 5 years ago

The plain einsum scales factorially badly for more than 2 tensors, and even for 2 tensors it is only competitive for small tensors compared with the BLAS-based tensordot approach (with some edge cases like Hadamard/outer products).
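To illustrate the point above, here is a minimal sketch (with arbitrary example shapes) showing that the two calls compute the same pairwise contraction, with tensordot reshaping to a matrix product that dispatches to BLAS:

```python
import numpy as np

# Two random tensors sharing one index (example sizes only).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 60))
B = rng.normal(size=(60, 70))

# einsum performs the contraction with a generic loop-based kernel;
# tensordot reshapes both operands and calls a BLAS matrix multiply.
C_einsum = np.einsum('ij,jk->ik', A, B)
C_tdot = np.tensordot(A, B, axes=(1, 0))

assert np.allclose(C_einsum, C_tdot)
```

For large dense tensors the BLAS route is typically much faster, which is why pairwise tensordot is the preferred backend primitive.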

It does have an optimize='optimal' kwarg (which is actually based on opt_einsum and breaks the contraction down into pairwise tensordots etc.), however finding the optimal contraction order is, I believe, an NP-hard problem - so, slow! optimize=True instead uses a greedy approach.
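A small sketch of that decomposition, using numpy's einsum_path to inspect the pairwise order that optimize='optimal' would choose (shapes here are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))
B = rng.normal(size=(8, 16))
C = rng.normal(size=(16, 4))

# einsum_path reports the pairwise contraction order and a summary
# comparing naive vs optimized FLOP counts.
path, info = np.einsum_path('ij,jk,kl->il', A, B, C, optimize='optimal')
print(path)  # a list of pairwise contraction steps
print(info)

# The actual contraction can then reuse the precomputed path.
D = np.einsum('ij,jk,kl->il', A, B, C, optimize=path)
assert np.allclose(D, A @ B @ C)
```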

opt_einsum on the other hand can handle arbitrarily many indices, has much more advanced path-finding, reusable compiled expressions, and supports tensorflow, dask etc - basically is a fairly complete contraction library.
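For example, a sketch of opt_einsum's one-shot contraction alongside its reusable compiled expressions (shapes are arbitrary; this assumes opt_einsum is installed):

```python
import numpy as np
import opt_einsum as oe

rng = np.random.default_rng(2)
shapes = [(10, 20), (20, 30), (30, 10)]
arrays = [rng.normal(size=s) for s in shapes]

# One-shot contraction with automatic path-finding.
out = oe.contract('ab,bc,ca->', *arrays)

# A reusable compiled expression: the path is found once from the
# shapes alone, then applied to any arrays of matching shapes.
expr = oe.contract_expression('ab,bc,ca->', *shapes)
out2 = expr(*arrays)

assert np.allclose(out, out2)
```

Compiled expressions are useful when the same network geometry is contracted many times, e.g. while sweeping over parameters.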

pulkin commented 5 years ago

I mean, given that numpy is potentially capable of performing optimal contractions by simply setting the optimize argument, wouldn't it be a good idea to use it?

jcmgray commented 5 years ago

Of course you are welcome to try! opt_einsum has a more efficient version of the optimal path search, which you can use simply with tn.contract(all, optimize='optimal'), but you will find that beyond about 10 tensors it starts to take inordinate amounts of time.
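To get a feel for the growth, here is a rough sketch timing just the path search (not the contraction itself) for a matrix chain of n tensors with hypothetical 8x8 bonds, via numpy's einsum_path:

```python
import time
import numpy as np

def path_search_time(n, optimize):
    """Time the einsum path search for an n-matrix chain, e.g.
    'ab,bc,cd,de->ae' for n = 4 (hypothetical helper for this demo)."""
    ops = [np.empty((8, 8)) for _ in range(n)]
    letters = 'abcdefghijklmnopqrstuvwxyz'
    terms = [letters[i:i + 2] for i in range(n)]
    eq = ','.join(terms) + '->' + letters[0] + letters[n]
    t0 = time.perf_counter()
    np.einsum_path(eq, *ops, optimize=optimize)
    return time.perf_counter() - t0

for n in (4, 6, 8):
    print(n, path_search_time(n, 'optimal'), path_search_time(n, 'greedy'))
```

The 'optimal' search explores an exponentially growing space of orderings, while 'greedy' stays cheap, which is the trade-off described above.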

jcmgray commented 5 years ago

In case this isn't clear, the 'optimize' functionality in numpy is directly taken from opt_einsum, see for instance https://github.com/numpy/numpy/pull/11491.

jcmgray commented 5 years ago

Closing for now @pulkin but feel free to reopen if you have other contraction/memory related issues.