Closed: jpmoutinho closed this issue 1 month ago
@vytautas-a like we discussed, I suspect a first implementation of this may only require dealing with the HamEvo class in qadence.operations, but I'm not sure about a clean way to do it. The digital_decomposition method is relevant.
HamEvo instantiates a TimeEvolutionBlock, which is then handled by convert_ops.py in the pyqtorch backend (also the horqrux backend), so all the code needed should be in these files.
From what I can see, HamEvo checks whether the qubit Hamiltonian is made of commuting terms and generates a gate representation of it. So why does it take different execution times when the generator is specified as one chunk versus separate commuting chunks, if both have an identical digital decomposition to be implemented in the end?
Actually, it currently only does that if you call HamEvo.digital_decomposition(). Otherwise it builds the full matrix for the Hamiltonian and then exponentiates it, which is why it is faster in my example: there it exponentiates two smaller matrices instead of one large matrix.
But the suggestion here is exactly to make the commutation check be done automatically and optimize based on that.
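To make the commuting case concrete, here is a minimal NumPy/SciPy sketch (not qadence code) showing why the split is valid: for a generator whose terms commute, the exponential of the sum factorizes into exponentials of the smaller local matrices.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def kron_all(*ops):
    # Kronecker product of a sequence of operators.
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

t = 0.3
# Generator on 4 qubits, split into two commuting terms on disjoint pairs:
# H = (Z0 Z1) + (Z2 Z3), so exp(-i t H) factorizes.
H1 = kron_all(Z, Z, I2, I2)
H2 = kron_all(I2, I2, Z, Z)

# Full exponentiation: one 16x16 matrix exponential.
U_full = expm(-1j * t * (H1 + H2))

# Split exponentiation: two 4x4 exponentials, embedded via kron.
u1 = expm(-1j * t * np.kron(Z, Z))  # acts on qubits 0, 1
u2 = expm(-1j * t * np.kron(Z, Z))  # acts on qubits 2, 3
U_split = np.kron(u1, u2)

assert np.allclose(U_full, U_split)
```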
Cool, understood. I would like to add a few points about designing the algorithm that I have thought about.
Found a paper about partitioning a Pauli decomposition into collections of commuting sets: https://arxiv.org/pdf/1908.06942.pdf
Apparently there is a lot of work on this, mostly in the direction of efficient observable estimation with minimal shots, but I think it can be used here as well.
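As a rough illustration of the grouping idea in those papers, here is a hedged sketch using greedy qubit-wise commuting grouping of Pauli strings. It is a simplified stand-in for the graph-coloring approaches in the literature, and all names here are hypothetical.

```python
# Two Pauli strings commute qubit-wise iff on every qubit the factors are
# equal or at least one of them is the identity.
def qubitwise_commute(p: str, q: str) -> bool:
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def group_commuting(paulis):
    # Greedy grouping: place each string into the first group whose
    # members it qubit-wise commutes with, else open a new group.
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

print(group_commuting(["ZZI", "ZIZ", "XXI", "IXX"]))
# → [['ZZI', 'ZIZ'], ['XXI', 'IXX']]
```

Each resulting group could then be exponentiated as one commuting chunk.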
Nice @rajaiitp ! Seems relevant yes.
Also came across this one: https://arxiv.org/pdf/1907.09040.pdf
Closing after opening in PyQTorch: https://github.com/pasqal-io/pyqtorch/issues/177
Currently HamEvo of some generator is always exponentiated fully, without checking for commutation relations. However, generators composed of several commuting parts can be exponentiated separately. We can make some of those checks automatic in the instantiation of HamEvo and then optimize the calculation in the backend. Below is an example script doing it manually to showcase the potential:
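The original script is not reproduced here; the following is only a sketch of the same idea in plain NumPy/SciPy, comparing one large matrix exponential against a product of small per-qubit exponentials for a generator made of commuting single-qubit Z terms.

```python
import time
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def embed(op, pos, n):
    # Kronecker-embed a single-qubit operator at qubit `pos` of n qubits.
    mats = [I2] * n
    mats[pos] = op
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

n, t = 8, 0.1
# H = sum of commuting single-qubit Z terms on 8 qubits.
H = sum(embed(Z, k, n) for k in range(n))

t0 = time.perf_counter()
U_full = expm(-1j * t * H)  # one 256x256 exponential
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
u = expm(-1j * t * Z)       # one 2x2 exponential, reused per qubit
U_split = np.array([[1.0]], dtype=complex)
for _ in range(n):
    U_split = np.kron(U_split, u)
t_split = time.perf_counter() - t0

assert np.allclose(U_full, U_split)
print(f"full: {t_full:.4f}s, split: {t_split:.4f}s")
```

The split path only ever exponentiates 2x2 matrices, which is where the speedup in the manual example comes from.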
To implement this, there is already a block_is_commuting_hamiltonian function in qadence.blocks.utils that would be useful. It seems efficient, but should be reviewed. This function just returns True or False, so a first implementation could be based on that: essentially, if True, every term in the AddBlock is separated into its own matrix to be exponentiated. A next level would be to look for optimizations even when the function returns False, where the non-commuting terms would be aggregated into their own groups. However, I suspect this would not be straightforward.
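A first implementation along these lines might look roughly like the following hedged sketch. It operates on dense matrices rather than the actual qadence block types, and in a real implementation each exponential would act only on the support of its term; the function names are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

def commuting(a, b, tol=1e-10):
    # Check [a, b] = 0 numerically.
    return np.allclose(a @ b, b @ a, atol=tol)

def ham_evo(terms, t):
    # Fast path mirroring the proposal: if every pair of terms commutes,
    # exponentiate term by term; otherwise fall back to exponentiating
    # the full sum.
    if all(commuting(a, b) for i, a in enumerate(terms) for b in terms[i + 1:]):
        u = np.eye(terms[0].shape[0], dtype=complex)
        for h in terms:
            u = u @ expm(-1j * t * h)
        return u
    return expm(-1j * t * sum(terms))

# Commuting example: H = Z0 Z1 + Z1 Z2 on three qubits.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
h1 = np.kron(np.kron(Z, Z), I2)
h2 = np.kron(np.kron(I2, Z), Z)
assert np.allclose(ham_evo([h1, h2], 0.4), expm(-1j * 0.4 * (h1 + h2)))
```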
Related to https://github.com/pasqal-io/qadence/issues/134, since this would reduce to calling block_to_tensor on each of the smaller commuting terms.