PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

[BUG] `Sum.terms()` doesn't re-package the coefficients #5507

Open KetpuntoG opened 2 months ago

KetpuntoG commented 2 months ago

Expected behavior

(after discussion with @Qottmann )

The problem is that `Sum.terms()` doesn't neatly re-package the coefficients; it just returns them as a list. This is a problem because `Sum` and `LinearCombination` then behave differently (which is causing me problems with differentiability). The expected behavior would be:

>>> import pennylane as qml
>>> from pennylane import numpy as pnp
>>> from pennylane import X
>>> coeffs = pnp.array([0.5], requires_grad=True)
>>> op = qml.dot(coeffs, [X(0)])
>>> op.terms()[0]
tensor([0.5], requires_grad=True)

Actual behavior

>>> coeffs = pnp.array([0.5], requires_grad=True)
>>> op = qml.dot(coeffs, [X(0)])
>>> op.terms()[0]
[tensor(0.5, requires_grad=True)]
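The re-packaging being asked for can be sketched with plain NumPy. This is only an illustration of the desired shape change (list of 0-d arrays in, single 1-d array out); `np.stack` stands in here for whatever `qml.math` dispatch a real fix would use, and the coefficient values are placeholders:

```python
import numpy as np

# What Sum.terms() currently returns for the coefficients:
# a Python list of 0-d arrays.
coeffs_list = [np.array(0.5), np.array(0.3)]

# The expected behavior would re-package them into a single
# 1-d array, matching what LinearCombination.terms() returns.
coeffs_packed = np.stack(coeffs_list)
print(coeffs_packed.shape)  # (2,)
```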

Additional information

No response

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.36.0.dev0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /usr/local/lib/python3.10/dist-packages
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane_Lightning

Platform info:           Linux-6.1.58+-x86_64-with-glibc2.35
Python version:          3.10.12
Numpy version:           1.25.2
Scipy version:           1.11.4
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.35.1)
- default.clifford (PennyLane-0.36.0.dev0)
- default.gaussian (PennyLane-0.36.0.dev0)
- default.mixed (PennyLane-0.36.0.dev0)
- default.qubit (PennyLane-0.36.0.dev0)
- default.qubit.autograd (PennyLane-0.36.0.dev0)
- default.qubit.jax (PennyLane-0.36.0.dev0)
- default.qubit.legacy (PennyLane-0.36.0.dev0)
- default.qubit.tf (PennyLane-0.36.0.dev0)
- default.qubit.torch (PennyLane-0.36.0.dev0)
- default.qutrit (PennyLane-0.36.0.dev0)
- null.qubit (PennyLane-0.36.0.dev0)

Existing GitHub issues

Qottmann commented 2 months ago

Would have to do an appropriate re-packaging in https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/ops/op_math/sum.py#L445 and https://github.com/PennyLaneAI/pennylane/blob/master/pennylane/ops/op_math/sum.py#L459 (same for Prod).

I actually don't understand how to do that with qml.math, as I keep "losing" the requires_grad attribute 🤔

>>> coeffs = [pnp.array(0.5, requires_grad=True)]
>>> coeffs_new = qml.math.array(coeffs)
>>> coeffs, coeffs_new
([tensor(0.5, requires_grad=True)], array([0.5]))

(The same happens with asarray.)
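The metadata loss can be reproduced without PennyLane at all. The `Tensor` class below is a toy stand-in for pennylane.numpy's tensor, purely for illustration: any ndarray subclass carrying an extra attribute loses it when a list of instances is fed through `np.array`, because a brand-new base-class array is allocated:

```python
import numpy as np

class Tensor(np.ndarray):
    """Toy stand-in for pennylane.numpy's tensor (illustration only)."""
    def __new__(cls, value, requires_grad=True):
        obj = np.asarray(value).view(cls)
        obj.requires_grad = requires_grad
        return obj

coeffs = [Tensor(0.5, requires_grad=True)]

# np.array builds a fresh plain ndarray, so the subclass
# (and with it the requires_grad attribute) is dropped:
coeffs_new = np.array(coeffs)
print(type(coeffs_new).__name__)             # ndarray
print(hasattr(coeffs_new, "requires_grad"))  # False
```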

albi3ro commented 2 months ago

Also, what happens if one coefficient requires grad and the other doesn't?

KetpuntoG commented 2 months ago

Also, what happens if one coefficient requires grad and the other doesn't?

Hmm, I am not sure which one would be the correct behavior, but the important thing, I would say, is that it is the same in both cases.

astralcai commented 2 months ago

Actually, this produces an SProd, not a Sum:

coeffs = pnp.array([0.5], requires_grad=True)
op = qml.dot(coeffs, [X(0)])
astralcai commented 2 months ago

@KetpuntoG Would you be able to provide more context regarding when this causes an issue?

KetpuntoG commented 2 months ago

I'm creating the template qml.Qubitization, which takes a Hamiltonian as input. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([...], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, it will not be differentiable, because coeffs is a list of scalar tensors, [np.array(..., requires_grad=True), np.array(..., requires_grad=True), ...], that I cannot put back together into something differentiable. (We tried hstack, but it doesn't work.) Maybe it is related to this?

astralcai commented 2 months ago

I'm creating the template qml.Qubitization, which takes a Hamiltonian as input. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([...], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, it will not be differentiable, because coeffs is a list of scalar tensors, [np.array(..., requires_grad=True), np.array(..., requires_grad=True), ...], that I cannot put back together into something differentiable. (We tried hstack, but it doesn't work.) Maybe it is related to this?

Thank you for the context! I'll look into this.

astralcai commented 2 months ago

I'm creating the template qml.Qubitization, which takes a Hamiltonian as input. In this line I wrote:

coeffs, ops = hamiltonian.terms()

If the Hamiltonian is a LinearCombination, this will be differentiable because coeffs = np.array([...], requires_grad=True). However, if I use qml.dot to define the Hamiltonian, it will not be differentiable, because coeffs is a list of scalar tensors, [np.array(..., requires_grad=True), np.array(..., requires_grad=True), ...], that I cannot put back together into something differentiable. (We tried hstack, but it doesn't work.) Maybe it is related to this?

If the problem with hstack is that it loses requires_grad, would something like

qml.math.convert_like(qml.math.hstack(coeffs), coeffs[0])

fix the issue?
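A rough sketch of why this combination helps, again using the toy `Tensor` subclass as a stand-in for pennylane.numpy's tensor (the real `qml.math.convert_like` dispatches across autograd/JAX/TF/Torch; `convert_like_sketch` below is a hypothetical NumPy-only analogue). The stacking step merges the values; the convert step restores the array type and its metadata from a reference element:

```python
import numpy as np

class Tensor(np.ndarray):
    """Toy stand-in for pennylane.numpy's tensor (illustration only)."""
    def __new__(cls, value, requires_grad=True):
        obj = np.asarray(value).view(cls)
        obj.requires_grad = requires_grad
        return obj

def convert_like_sketch(arr, like):
    """Hypothetical analogue of qml.math.convert_like: rebuild `arr`
    as the same array type as `like`, restoring its metadata."""
    if isinstance(like, Tensor):
        out = np.asarray(arr).view(Tensor)
        out.requires_grad = getattr(like, "requires_grad", True)
        return out
    return np.asarray(arr)

coeffs = [Tensor(0.5), Tensor(0.3)]
stacked = np.hstack(coeffs)  # values merged, subclass metadata lost
repacked = convert_like_sketch(stacked, coeffs[0])
print(type(repacked).__name__, repacked.requires_grad)  # Tensor True
```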

Qottmann commented 2 months ago

qml.math.convert_like(qml.math.hstack(coeffs), coeffs[0])

Nice! This is what I was looking for, thanks! @KetpuntoG you can use that in your code when you call terms() to re-package the coeffs, but we should also add it to Prod.terms(), Sum.terms(), and SProd.terms().

astralcai commented 2 months ago

Nice! This is what I was looking for, thanks! @KetpuntoG you can use that in your code when you call terms() to re-package the coeffs, but we should also add it to Prod.terms(), Sum.terms(), and SProd.terms().

In this case terms() will now always return an array instead of a list for the coefficients. Is that expected?

KetpuntoG commented 2 months ago

I could not get it to work. We can try to fix it once Sum has better grad support :)