Note that this is related to #1991
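The snippet being discussed was not captured here; below is a minimal reconstruction of the kind of setup involved (the gate structure and exact values are assumptions inferred from the printed output further down):

```python
import torch
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="torch")
def circuit(x, y, z, a):
    # Print the arguments to inspect their requires_grad flags.
    print(x, y, z, a)
    qml.RX(x * y, wires=0)
    qml.RY(z, wires=1)
    qml.RZ(a, wires=0)
    return qml.expval(qml.PauliZ(0))

# Only x, y and z are marked as trainable; a is not.
x = torch.tensor(0.1, requires_grad=True)
y = torch.tensor(-2.5, requires_grad=True)
z = torch.tensor(0.71, requires_grad=True)
a = torch.tensor(0.1)

jac_fn = qml.transforms.classical_jacobian(circuit)
```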
This is a somewhat ridiculous bug: in `classical_jacobian`, the QNode needs to be constructed within the `classical_preprocessing` function that is going to be differentiated, in order to create the tape and call `get_parameters`. However, when calling Torch's `jacobian` on that function, all passed args are understood as trainable! This means that, unlike when the tape is created with the QNode arguments directly (`x`, `y`, `z`, `a` above), all of the tape's parameters are trainable. By printing the passed arguments within the QNode, we get the output:
>>> circuit(x, y, z, a)
tensor(0.1000, requires_grad=True) tensor(-2.5000, requires_grad=True) tensor(0.7100, requires_grad=True) tensor(0.1000)
>>> jac_fn(x, y, z, a)
tensor(0.1000, requires_grad=True) tensor(-2.5000, requires_grad=True) tensor(0.7100, requires_grad=True) tensor(0.1000, requires_grad=True)
That is, Torch activated `requires_grad` for `a` because `a` was passed as an argument to `torch.autograd.functional.jacobian`.
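This Torch behavior is easy to see in isolation; here is a standalone sketch, unrelated to PennyLane (the function and variable names are illustrative):

```python
import torch

def f(u, v):
    # Both arguments report requires_grad=True inside the function,
    # even though v was created without it.
    print(u.requires_grad, v.requires_grad)
    return u * v

u = torch.tensor(0.1, requires_grad=True)
v = torch.tensor(0.2)  # not trainable

torch.autograd.functional.jacobian(f, (u, v))  # prints: True True
```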
Good news: there is an easy fix. Since we already allow for `argnum` in `classical_jacobian`, we can simply set `argnum` to those argument indices that belong to trainable parameters, via `qml.math.get_trainable_indices`.
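A sketch of the idea, shown at the call site for brevity and continuing from the Torch snippet above (inside `classical_jacobian` the same indices would be computed from the QNode arguments):

```python
import pennylane as qml

# Compute the indices of the trainable QNode arguments up front,
# before Torch's jacobian gets a chance to mark everything trainable.
argnum = sorted(qml.math.get_trainable_indices([x, y, z, a]))
# argnum == [0, 1, 2]; a (index 3) is excluded.

# Restricting classical_jacobian to these indices yields the
# classical Jacobian only with respect to x, y and z.
jac_fn = qml.transforms.classical_jacobian(circuit, argnum=argnum)
jac = jac_fn(x, y, z, a)
```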
Expected behavior
Using `qml.transforms.classical_jacobian` by passing `argnum=None` with the Torch interface computes the classical Jacobian with respect to the trainable parameters.

Actual behavior

We get results with respect to all parameters (non-trainable ones included):
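With the assumed circuit above, the symptom would look roughly like this (the tuple lengths follow from the sketch, not from output captured in the issue):

```python
jac = qml.transforms.classical_jacobian(circuit)(x, y, z, a)
# Expected: a 3-tuple of Jacobians, one per trainable argument (x, y, z).
# Observed: a 4-tuple that also includes the Jacobian with respect to a.
print(len(jac))  # 4
```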
Additional information
The same snippet using `autograd` computes the classical Jacobian only with respect to the trainable parameters:
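For comparison, a sketch of the `autograd` counterpart (same assumed circuit as above, using `pennylane.numpy` to mark trainability):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="autograd")
def circuit(x, y, z, a):
    qml.RX(x * y, wires=0)
    qml.RY(z, wires=1)
    qml.RZ(a, wires=0)
    return qml.expval(qml.PauliZ(0))

x = np.array(0.1, requires_grad=True)
y = np.array(-2.5, requires_grad=True)
z = np.array(0.71, requires_grad=True)
a = np.array(0.1, requires_grad=False)

# Here the returned Jacobians only cover the trainable x, y and z.
jac = qml.transforms.classical_jacobian(circuit)(x, y, z, a)
```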
Source code
No response
Tracebacks
No response
System information