PennyLaneAI / pennylane


Broadcast Expand Prevents Torchlayer from detecting Inputs Argument #5843

Closed · waelitani closed this issue 5 months ago

waelitani commented 5 months ago

Expected behavior

The TorchLayer should be created successfully without any errors. The minimal code provided works if the @qml.transforms.broadcast_expand decorator is commented out.

Actual behavior

TorchLayer fails to detect the inputs argument of the circuit function when both the qml.qnode and broadcast_expand decorators are applied.

Additional information

The versions of the relevant packages are as below:

Name       Version       Build        Channel
pennylane  0.36.0        pypi_0       pypi
python     3.11.9        h955ad1f_0
torch      2.3.1+cu121   pypi_0       pypi

Source code

import pennylane as qml

dev = qml.device("default.qubit", wires = 1)

# broadcast_expand placed below the QNode decorator wraps the quantum
# function itself, hiding its call signature from TorchLayer
@qml.qnode(dev)
@qml.transforms.broadcast_expand
def circuit(inputs):
    return qml.probs(wires=0)  # measure the device's single wire

qlayer = qml.qnn.TorchLayer(circuit, {})

Tracebacks

  line 10, in <module>
    qlayer = qml.qnn.TorchLayer(circuit, {})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pennylane/qnn/torch.py", line 351, in __init__
    self._signature_validation(qnode, weight_shapes)
  File "pennylane/qnn/torch.py", line 364, in _signature_validation
    raise TypeError(
TypeError: QNode must include an argument with name inputs for inputting data
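
The failure mode is consistent with the transform returning a wrapper that does not preserve the decorated function's signature, so TorchLayer's signature validation no longer sees an inputs parameter. A generic sketch of that mechanism, not PennyLane's actual implementation (naive_wrap and preserving_wrap are hypothetical names):

import functools
import inspect

def naive_wrap(fn):
    # Returns a wrapper without copying fn's metadata, so the
    # original parameter names are lost to introspection
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def preserving_wrap(fn):
    # functools.wraps records fn as __wrapped__, so
    # inspect.signature can recover the original parameters
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def circuit(inputs):
    pass

print(inspect.signature(naive_wrap(circuit)))       # (*args, **kwargs)
print(inspect.signature(preserving_wrap(circuit)))  # (inputs)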

System information

Name: PennyLane
Version: 0.36.0
Summary: PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
Home-page: https://github.com/PennyLaneAI/pennylane
Author:
Author-email:
License: Apache License 2.0
Requires: appdirs, autograd, autoray, cachetools, networkx, numpy, pennylane-lightning, requests, rustworkx, scipy, semantic-version, toml, typing-extensions
Required-by: PennyLane-Catalyst, pennylane-qulacs, PennyLane_Lightning, PennyLane_Lightning_GPU

Platform info:           Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python version:          3.11.9
Numpy version:           1.26.4
Scipy version:           1.13.1
Installed devices:
- lightning.qubit (PennyLane_Lightning-0.36.0)
- qulacs.simulator (pennylane-qulacs-0.36.0)
- nvidia.custatevec (PennyLane-Catalyst-0.6.0)
- nvidia.cutensornet (PennyLane-Catalyst-0.6.0)
- oqc.cloud (PennyLane-Catalyst-0.6.0)
- softwareq.qpp (PennyLane-Catalyst-0.6.0)
- default.clifford (PennyLane-0.36.0)
- default.gaussian (PennyLane-0.36.0)
- default.mixed (PennyLane-0.36.0)
- default.qubit (PennyLane-0.36.0)
- default.qubit.autograd (PennyLane-0.36.0)
- default.qubit.jax (PennyLane-0.36.0)
- default.qubit.legacy (PennyLane-0.36.0)
- default.qubit.tf (PennyLane-0.36.0)
- default.qubit.torch (PennyLane-0.36.0)
- default.qutrit (PennyLane-0.36.0)
- default.qutrit.mixed (PennyLane-0.36.0)
- null.qubit (PennyLane-0.36.0)
- lightning.gpu (PennyLane_Lightning_GPU-0.36.0)


CatalinaAlbornoz commented 5 months ago

Hi @waelitani ,

The order in which you apply the decorators is very important. The @qml.qnode(dev) decorator should always be placed immediately above your quantum function; other transforms can then be added above the QNode decorator. I tried it and it fixed the issue. Let me know if it works for you!
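
A sketch of the suggested ordering, assuming the same minimal circuit from the report (the transform now wraps the QNode rather than the quantum function):

import pennylane as qml

dev = qml.device("default.qubit", wires=1)

# The transform is applied to the QNode, not to the quantum function
@qml.transforms.broadcast_expand
@qml.qnode(dev)
def circuit(inputs):
    return qml.probs(wires=0)

qlayer = qml.qnn.TorchLayer(circuit, {})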

waelitani commented 5 months ago

Placing the qml.qnode decorator closest to the function definition seems to render the transform decorator ineffective. For example, adding @qml.transforms.broadcast_expand above it still throws the error:

ValueError: Broadcasting with MottonenStatePreparation is not supported. Please use the qml.transforms.broadcast_expand transform to use broadcasting with MottonenStatePreparation.

when using, say, AmplitudeEmbedding with the lightning.qubit device.

I am not certain whether this would be fixed by preserving the call signature, as done in #5857.
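
A sketch of the scenario described above; the wire count, batch shape, and normalization flag are illustrative assumptions, not taken from the report:

import numpy as np
import pennylane as qml

dev = qml.device("lightning.qubit", wires=2)

@qml.transforms.broadcast_expand
@qml.qnode(dev)
def circuit(inputs):
    # AmplitudeEmbedding decomposes through MottonenStatePreparation,
    # which does not support broadcasting on this device
    qml.AmplitudeEmbedding(inputs, wires=[0, 1], normalize=True)
    return qml.probs(wires=[0, 1])

# A batch of two input states; broadcast_expand should split the batch
# into separate executions, but the ValueError is reportedly still raised
batch = np.random.rand(2, 4)
circuit(batch)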

albi3ro commented 5 months ago

This would tie into issue #4460, which is a separate, long-standing issue we need to solve.

This may also tie into the fact that we apply gradient preprocessing before user transforms. I'll look into that.