jcmgray / quimb

A python library for quantum information and many-body calculations including tensor networks.
http://quimb.readthedocs.io

Hyper-optimized tensor network contractions for complete networks #175

Open shadibeh opened 1 year ago

shadibeh commented 1 year ago

What is your issue?

Hi everyone.

I am using quimb to optimize a QAOA circuit. In the documentation, there is an example, "Bayesian Optimizing QAOA Circuit Energy", which does this for a 3-regular graph and works really well.

However, when I change it to a fully-connected graph, it needs a lot of memory and is quite slow. I tried different optimizers such as greedy and kahypar, but that did not resolve the problem.

Could you kindly recommend an optimizer or another technique to resolve the problem?

Thanks

jcmgray commented 1 year ago

Do you mean specifically that the contraction optimization is slow, or the whole computation process?

All-to-all is just a much more challenging geometry: it has approximately n times more gates and no structure to exploit, so the expectation is simply that it will be much harder.
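For a rough sense of the difference in size, here is a minimal sketch (using networkx, with n = 50 as in the discussion below) comparing the number of ZZ terms per QAOA layer for the two geometries:

```python
import networkx as nx

n = 50

# each edge contributes one ZZ interaction (one two-qubit gate) per QAOA layer
reg3 = nx.random_regular_graph(3, n, seed=42)
full = nx.complete_graph(n)

print(len(reg3.edges()))  # 75 edges for the 3-regular graph
print(len(full.edges()))  # 1225 edges for the fully-connected graph
```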

shadibeh commented 1 year ago

Thanks for the quick reply. I am trying to find the contraction cost for the problem, so I used the following command:

```python
local_exp_rehs = [
    circ_ex.local_expectation_rehearse(weight * ZZ, edge, optimize=opt)
    for edge, weight in tqdm.tqdm(list(terms.items()))
]
```

It requires a large amount of memory to find the contraction cost for a fully-connected graph of N=50 qubits. Is there any way to reduce the required memory?
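For reference, one way to read off the predicted contraction width and cost from such rehearsals (a sketch assuming each rehearsal is a dict carrying a cotengra ContractionTree under the 'tree' key, as in the docs example):

```python
# assumes local_exp_rehs is the list of rehearsal dicts built above
for rehs in local_exp_rehs[:5]:
    tree = rehs["tree"]
    print(
        f"width (log2 largest tensor): {tree.contraction_width():.1f}, "
        f"cost (total scalar ops): {tree.contraction_cost():.3e}"
    )
```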

Thanks

jcmgray commented 1 year ago

I'm not sure there is anything easy to do to reduce the memory... each layer of gates will be adding several thousand tensors to the network. If you can profile the code using e.g. filprofiler or whichever tool you prefer, and identify which data structures or calls are using the most time and memory, then that would be useful information that I can look into for optimizing things.
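For example, a minimal way to get that kind of information with just the standard library (a sketch using tracemalloc rather than filprofiler; it assumes circ_ex, terms, ZZ and opt are already defined as in the snippet above):

```python
import tracemalloc

tracemalloc.start()

# run the rehearsal loop while tracking allocations
local_exp_rehs = [
    circ_ex.local_expectation_rehearse(weight * ZZ, edge, optimize=opt)
    for edge, weight in terms.items()
]

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e9:.2f} GB, peak: {peak / 1e9:.2f} GB")

# show the source lines responsible for the largest allocations
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:10]:
    print(stat)
```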