Closing, as one possible solution has been merged in #205. There are other avenues we could explore, but nothing with a concrete plan yet, so we can open new issues for those in the future if needed.
Original text from @dominikandreasseitz:

> Right now, we call `block_to_tensor` in every forward pass to get the Hamiltonian, which is then exponentiated in native pyq. Let's find a way to avoid calling `block_to_tensor` and instead "fill in" parameter values in a smarter way.
>
> Actually doing that for parametric Hamiltonians will be hard. However, we should avoid doing it for non-parametric Hamiltonians. Those should only be tensorized once and then cached for repeated evaluations.
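The caching idea for non-parametric Hamiltonians can be sketched roughly as follows. This is a minimal illustration, not qadence's actual implementation: `block_to_tensor_cached` and the `key` argument are hypothetical stand-ins, the toy Hamiltonian is built from Pauli matrices with NumPy, and the time evolution `exp(-iHt)` is computed via eigendecomposition instead of native pyq.

```python
import numpy as np
from functools import lru_cache

# Pauli matrices used to build a toy non-parametric Hamiltonian
# (a stand-in for a qadence block).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

CALLS = {"count": 0}  # tracks how often tensorization actually runs

@lru_cache(maxsize=None)
def block_to_tensor_cached(key):
    # Hypothetical stand-in: tensorize the (non-parametric) block once;
    # `key` identifies the block, real code would hash the block itself.
    CALLS["count"] += 1
    return np.kron(Z, Z) + 0.5 * np.kron(X, np.eye(2))

def forward(t, key="h0"):
    # exp(-i H t) via eigendecomposition (H is Hermitian). The Hamiltonian
    # tensor is built on the first call only, then served from the cache.
    H = block_to_tensor_cached(key)
    evals, evecs = np.linalg.eigh(H)
    return (evecs * np.exp(-1j * evals * t)) @ evecs.conj().T

u1 = forward(0.1)
u2 = forward(0.2)  # second forward pass reuses the cached tensor
```

For parametric Hamiltonians this simple memoization does not apply, since the tensor changes with the parameter values on every pass, which is why that case is the hard one.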
Related to https://github.com/pasqal-io/qadence/issues/134