antalszava opened this issue 3 years ago
Hey @antalszava, to answer your question: the output differentiability should ideally depend on two things.

If a QNode has differentiable input, and the QNode has a gradient method, the output should have `requires_grad=True` 🙂

However, if all inputs to a QNode have `requires_grad=False`, the output should always be `requires_grad=False`. Similarly, if the QNode is not differentiable (`diff_method=None`), the output should also be non-differentiable.
You can compare to standard NumPy functions:

```python
>>> x = np.array(0.1, requires_grad=True)
>>> y = np.array(0.2, requires_grad=True)
>>> np.add(x, y)  # both inputs require grad
tensor(0.3, requires_grad=True)
>>> x.requires_grad = False
>>> np.add(x, y)  # one input requires grad
tensor(0.3, requires_grad=True)
>>> y.requires_grad = False
>>> np.add(x, y)  # no inputs require grad
tensor(0.3, requires_grad=False)
```
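The propagation rule in the session above can be mimicked with a tiny stand-in class. This is only an illustration: `Box` and `add` are hypothetical names, not PennyLane or NumPy API.

```python
class Box:
    """Minimal stand-in for an autograd-style tensor (illustration only)."""

    def __init__(self, value, requires_grad=False):
        self.value = value
        self.requires_grad = requires_grad


def add(a, b):
    # Mirror the NumPy rule shown above: the result is trainable
    # iff at least one operand is trainable.
    return Box(a.value + b.value, a.requires_grad or b.requires_grad)
```

Flipping `requires_grad` on either operand to `False` only makes the result non-trainable once *both* operands are non-trainable, matching the console session above.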
Oh, that's a great cue to keep in mind, thanks @josh146! 🙂
I think we have issues for the following cases then:

> However, if all inputs to a QNode have `requires_grad=False`, the output should always be `requires_grad=False`.

A QNode with no trainable parameters will have an output with `requires_grad=True` when the beta QNode is in place:
```python
import pennylane as qml

dev = qml.device('lightning.qubit', wires=1)

@qml.beta.qnode(dev)
def circuit():
    return qml.expval(qml.PauliZ(0))

assert circuit().requires_grad
```
For now, this issue comes up only with `default.qubit.autograd`:
```python
import pennylane as qml

dev = qml.device('default.qubit.autograd', wires=1)

@qml.qnode(dev)
def circuit():
    return qml.expval(qml.PauliZ(0))

assert circuit().requires_grad
```
Not with other devices:
```python
import pennylane as qml

dev = qml.device('lightning.qubit', wires=1)

@qml.qnode(dev)
def circuit():
    return qml.expval(qml.PauliZ(0))

assert not circuit().requires_grad
```
> For now, this issue comes up only with `default.qubit.autograd`:
Good catch. This must be because of something to do with backpropagation 🤔
One more question @antalszava, is this bug causing another bug? Just curious about priority in fixing this.
No, it was just something that we've stumbled upon with Catalina when playing around with PennyLane. Likely it's a matter of standardizing this behaviour.
At the moment we have it on the radar, but not considering it as a high priority bug.
Good to know 😅 yep if it is not blocking anything or causing any other bugs, not too high priority for now
Just to confirm that I understand the problem: the `requires_grad` attribute of the tensor returned by these QNodes is inconsistent across devices.

Ideally: the output of such a QNode should only be differentiable (i.e. `requires_grad=True`) if `any(input.requires_grad for input in inputs) and qnode.diff_method is not None`?
Currently: a QNode with no trainable parameters can still return an output with `requires_grad=True`, and a `default.qubit.autograd` device QNode always has output with `requires_grad=True`. We want to change that behaviour to match the ideal case. Is that correct?
Yep! You're right to separate this into two separate use-cases, since `backprop` mode and `parameter-shift`/`adjoint` mode go through two very different logical pathways.
Note: the reason it works for the old QNode in parameter-shift mode is that we are explicitly taking this into account: https://github.com/PennyLaneAI/pennylane/blob/c81a0389257a73696d90aa108d930e6c0d607a17/pennylane/interfaces/autograd.py#L174
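The ideal rule discussed above can be sketched as a plain-Python predicate. The function name and the `inputs`/`diff_method` arguments are hypothetical stand-ins; the real decision is made inside the interface code linked above, not by a helper like this.

```python
def output_requires_grad(inputs, diff_method):
    """Sketch of the ideal rule: the output is trainable only when the
    QNode has a gradient method AND at least one input is trainable."""
    if diff_method is None:
        # A non-differentiable QNode never produces a trainable output.
        return False
    # Inputs without a requires_grad attribute are treated as constants.
    return any(getattr(x, "requires_grad", False) for x in inputs)
```

Under this rule a QNode with no inputs at all (like the `circuit()` examples above) would always return `requires_grad=False`, which is the behaviour the thread argues for.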
I get the feeling this issue doesn't really apply anymore, but if it does, it might be related to #4350 ?
Expected behavior

The output of circuit execution is a tensor that has the `requires_grad` attribute set consistently to `True` or `False` for all devices. Most devices output a tensor with `requires_grad=False`:

Actual behavior

The `default.qubit.autograd` device outputs a tensor with `requires_grad=True`:

With a tape:

Having `requires_grad=True` for the output seems to be unified for all devices when using the beta QNode:

Additional information

The issue to tackle would be to have a standardized output. Switching to the beta QNode seems to solve this; however, I'm not sure if having `requires_grad=True` for all outputs was intentional behaviour. If so, then this issue can simply be closed. 🙂

Source code
No response
Tracebacks
No response
System information