PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

[BUG] Classical jacobian is computed for non-trainable parameters too with Torch #2018

Closed · antalszava closed this issue 2 years ago

antalszava commented 2 years ago

Expected behavior

Using qml.transforms.classical_jacobian with argnum=None and the Torch interface should compute the classical Jacobian only with respect to the trainable parameters.

import pennylane as qml
import torch

dev = qml.device('default.qubit', wires=3)

@qml.qnode(dev, interface='torch')
def circuit(x, y, z, a):
    qml.RX(qml.math.sin(x), wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(y ** 2, wires=1)
    qml.RZ(1 / z, wires=1)
    qml.RX(3 * a, wires=0)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

jac_fn = qml.transforms.classical_jacobian(circuit, argnum=None)

x = torch.tensor(0.1, requires_grad=True)
y = torch.tensor(-2.5, requires_grad=True)
z = torch.tensor(0.71, requires_grad=True)

a = torch.tensor(0.1, requires_grad=False)
jac_fn(x, y, z, a)

Actual behavior

The Jacobian is computed with respect to all parameters, including the non-trainable a:

(tensor([0.9950, 0.0000, 0.0000, 0.0000]),
 tensor([-0., -5., -0., -0.]),
 tensor([-0.0000, -0.0000, -1.9837, -0.0000]),
 tensor([0., 0., 0., 3.]))

Additional information

The same snippet with the autograd interface computes the classical Jacobian only with respect to the trainable parameters:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device('default.qubit', wires=3)

@qml.qnode(dev, interface='autograd')
def circuit(x, y, z, a):
    qml.RX(qml.math.sin(x), wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(y ** 2, wires=1)
    qml.RZ(1 / z, wires=1)
    qml.RX(3 * a, wires=0)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

jac_fn = qml.transforms.classical_jacobian(circuit, argnum=None)

x = np.array(0.1, requires_grad=True)
y = np.array(-2.5, requires_grad=True)
z = np.array(0.71, requires_grad=True)

a = np.array(0.1, requires_grad=False)
jac_fn(x, y, z, a)
tensor([[ 0.99500417, -0.        , -0.        ],
        [ 0.        , -5.        , -0.        ],
        [ 0.        , -0.        , -1.98373339]], requires_grad=True)

Source code

No response

Tracebacks

No response

System information

Name: PennyLane
Version: 0.21.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: 
Author-email: 
License: Apache License 2.0
Location: /home/antal/xanadu/pennylane
Requires: numpy, scipy, networkx, autograd, toml, appdirs, semantic_version, autoray, cachetools, pennylane-lightning
Required-by: PennyLane-Cirq, PennyLane-Orquestra, pennylane-qulacs, amazon-braket-pennylane-plugin, PennyLane-Honeywell, PennyLane-qiskit, PennyLane-AQT, PennyLane-PQ, PennyLane-Forest, PennyLane-qsharp, PennyLane-Lightning, PennyLane-Qchem, PennyLane-IonQ, PennyLane-SF
Platform info:           Linux-5.11.0-41-generic-x86_64-with-glibc2.10
Python version:          3.8.5
Numpy version:           1.21.4
Scipy version:           1.7.3
Installed devices:
- cirq.mixedsimulator (PennyLane-Cirq-0.19.0)
- cirq.pasqal (PennyLane-Cirq-0.19.0)
- cirq.qsim (PennyLane-Cirq-0.19.0)
- cirq.qsimh (PennyLane-Cirq-0.19.0)
- cirq.simulator (PennyLane-Cirq-0.19.0)
- orquestra.forest (PennyLane-Orquestra-0.15.0)
- orquestra.ibmq (PennyLane-Orquestra-0.15.0)
- orquestra.qiskit (PennyLane-Orquestra-0.15.0)
- orquestra.qulacs (PennyLane-Orquestra-0.15.0)
- qulacs.simulator (pennylane-qulacs-0.17.0.dev0)
- braket.aws.qubit (amazon-braket-pennylane-plugin-1.4.1.dev0)
- braket.local.qubit (amazon-braket-pennylane-plugin-1.4.1.dev0)
- honeywell.hqs (PennyLane-Honeywell-0.16.0.dev0)
- qiskit.aer (PennyLane-qiskit-0.18.0.dev0)
- qiskit.basicaer (PennyLane-qiskit-0.18.0.dev0)
- qiskit.ibmq (PennyLane-qiskit-0.18.0.dev0)
- aqt.noisy_sim (PennyLane-AQT-0.18.0)
- aqt.sim (PennyLane-AQT-0.18.0)
- projectq.classical (PennyLane-PQ-0.18.0.dev0)
- projectq.ibm (PennyLane-PQ-0.18.0.dev0)
- projectq.simulator (PennyLane-PQ-0.18.0.dev0)
- forest.numpy_wavefunction (PennyLane-Forest-0.18.0.dev0)
- forest.qvm (PennyLane-Forest-0.18.0.dev0)
- forest.wavefunction (PennyLane-Forest-0.18.0.dev0)
- microsoft.QuantumSimulator (PennyLane-qsharp-0.19.0)
- lightning.qubit (PennyLane-Lightning-0.20.0.dev0)
- ionq.qpu (PennyLane-IonQ-0.17.0.dev0)
- ionq.simulator (PennyLane-IonQ-0.17.0.dev0)
- default.gaussian (PennyLane-0.21.0.dev0)
- default.mixed (PennyLane-0.21.0.dev0)
- default.qubit (PennyLane-0.21.0.dev0)
- default.qubit.autograd (PennyLane-0.21.0.dev0)
- default.qubit.jax (PennyLane-0.21.0.dev0)
- default.qubit.tf (PennyLane-0.21.0.dev0)
- default.qubit.torch (PennyLane-0.21.0.dev0)
- strawberryfields.fock (PennyLane-SF-0.20.0.dev0)
- strawberryfields.gaussian (PennyLane-SF-0.20.0.dev0)
- strawberryfields.gbs (PennyLane-SF-0.20.0.dev0)
- strawberryfields.remote (PennyLane-SF-0.20.0.dev0)
- strawberryfields.tf (PennyLane-SF-0.20.0.dev0)



- [X] I have searched existing GitHub issues to make sure the issue does not already exist.
josh146 commented 2 years ago

Note that this is related to #1991

dwierichs commented 2 years ago

This is a somewhat ridiculous bug: in classical_jacobian, the QNode has to be constructed inside the classical_preprocessing function that gets differentiated, in order to create the tape and call get_parameters. However, when Torch's jacobian is called on that function, all passed arguments are treated as trainable! As a result, all tape parameters end up trainable, unlike when the tape is constructed directly from the QNode arguments (x, y, z, a above). Printing the arguments received by the QNode gives:

>>> circuit(x, y, z, a)
tensor(0.1000, requires_grad=True) tensor(-2.5000, requires_grad=True) tensor(0.7100, requires_grad=True) tensor(0.1000)
>>> jac_fn(x, y, z, a)
tensor(0.1000, requires_grad=True) tensor(-2.5000, requires_grad=True) tensor(0.7100, requires_grad=True) tensor(0.1000, requires_grad=True)

That is, Torch activates requires_grad for a because a is passed as an argument to torch.autograd.functional.jacobian.
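
This Torch behavior can be reproduced without PennyLane. Below is a minimal sketch (the function f and its inputs are made up for illustration) showing that torch.autograd.functional.jacobian hands the wrapped function copies of all inputs with requires_grad enabled:

import torch

def f(x, y):
    # Report what the wrapped function sees for each input.
    print(x.requires_grad, y.requires_grad)
    return x * y

x = torch.tensor(0.1, requires_grad=True)
y = torch.tensor(0.2, requires_grad=False)  # intended to be non-trainable

f(x, y)                                        # prints: True False
torch.autograd.functional.jacobian(f, (x, y))  # prints: True True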

Good news: there is an easy fix. Since classical_jacobian already accepts argnum, we can simply set argnum to the argument indices that belong to trainable parameters, obtained via qml.math.get_trainable_indices.
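
For illustration, here is a minimal sketch of that idea applied from user code, reusing the circuit from the first snippet. It assumes that classical_jacobian's argnum handles a sequence of argument indices with the Torch interface and that qml.math.get_trainable_indices picks up the requires_grad flag of Torch tensors; it is a workaround sketch, not the eventual library fix:

import pennylane as qml
import torch

dev = qml.device('default.qubit', wires=3)

@qml.qnode(dev, interface='torch')
def circuit(x, y, z, a):
    qml.RX(qml.math.sin(x), wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(y ** 2, wires=1)
    qml.RZ(1 / z, wires=1)
    qml.RX(3 * a, wires=0)
    return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

x = torch.tensor(0.1, requires_grad=True)
y = torch.tensor(-2.5, requires_grad=True)
z = torch.tensor(0.71, requires_grad=True)
a = torch.tensor(0.1, requires_grad=False)

# Collect the indices of the trainable QNode arguments before Torch's
# jacobian gets a chance to mark everything as trainable, and pass them
# explicitly as argnum.
trainable = sorted(qml.math.get_trainable_indices((x, y, z, a)))  # [0, 1, 2]
jac_fn = qml.transforms.classical_jacobian(circuit, argnum=trainable)
jac_fn(x, y, z, a)  # Jacobians with respect to x, y and z only

With argnum pinned to the trainable indices, Torch's jacobian is only applied to x, y and z, so a is never wrapped with requires_grad=True.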