jcmgray / cotengra

Hyper optimized contraction trees for large tensor networks and einsums
https://cotengra.readthedocs.io
Apache License 2.0

Best practice on running cotengra via the opt_einsum API #23

Open rht opened 1 year ago

rht commented 1 year ago

In https://github.com/dgasmith/opt_einsum/issues/217#issuecomment-1620525115, @jcmgray stated that running cotengra optimization via oe.contract_path(expression, *operands, optimize=opt) (where opt is a cotengra optimizer) is slower than doing it via quimb. To add more detail: the path-finding part of the opt_einsum route alone is much slower than the entire run via quimb, so the reasoning in that comment applies only to the contraction phase.

What is the recommended, performant way to do the path finding via opt_einsum? The main use case is that most circuits are written in Qiskit/Cirq, and cuQuantum's CircuitToEinsum makes it possible to contract any Qiskit/Cirq circuit.

jcmgray commented 1 year ago

Hi @rht, to be clear, cotengra and opt_einsum offer different optimizers that take different amounts of time to run, but a given optimizer takes the same time whether you call it via opt_einsum or quimb. However:

1. cotengra has more advanced optimizers that can find better paths for large contractions.

2. For a given path, actually performing the contraction with cotengra can also be much faster, since it uses batched matrix multiplication for hyper indices (performance will be similar for non-hyper contractions).

If you have the einsum equation and arrays, you can use cotengra or quimb directly.

E.g.:

expr = ctg.contraction_expression(eq, *shapes, optimize=opt)
out = expr(*arrays)