PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

Optimizing parameters of MottonenStatePreparation results in nan values #1085

Closed. antalszava closed this issue 1 day ago.

antalszava commented 3 years ago

Issue description

When optimizing over certain state vectors passed to MottonenStatePreparation, the optimization runs into nan values.

It seems as if some elements of the gradient were approaching 0 (e.g., at one point the error seems to have arisen when the value of two elements of the gradient went from around 1e-16 to 1e-17).

This could be case-specific, hence I'm not completely sure whether it is due to unexpected behaviour in the logic of MottonenStatePreparation.

Name: PennyLane
Version: 0.15.0.dev0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: None
Author-email: None
License: Apache License 2.0
Location: /home/antalszava/xanadu/pennylane
Requires: numpy, scipy, networkx, autograd, toml, appdirs, semantic-version
Required-by: PennyLane-SF, pennylane-qulacs, PennyLane-Orquestra, PennyLane-Lightning, PennyLane-Honeywell, PennyLane-qiskit, PennyLane-Forest, PennyLane-Qchem
Platform info:           Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10
Python version:          3.8.5
Numpy version:           1.18.5
Scipy version:           1.4.1
Installed devices:
- strawberryfields.fock (PennyLane-SF-0.12.0)
- strawberryfields.gaussian (PennyLane-SF-0.12.0)
- strawberryfields.gbs (PennyLane-SF-0.12.0)
- strawberryfields.remote (PennyLane-SF-0.12.0)
- strawberryfields.tf (PennyLane-SF-0.12.0)
- qulacs.simulator (pennylane-qulacs-0.12.0)
- orquestra.forest (PennyLane-Orquestra-0.13.1)
- orquestra.ibmq (PennyLane-Orquestra-0.13.1)
- orquestra.qiskit (PennyLane-Orquestra-0.13.1)
- orquestra.qulacs (PennyLane-Orquestra-0.13.1)
- lightning.qubit (PennyLane-Lightning-0.13.0.dev0)
- honeywell.hqs (PennyLane-Honeywell-0.12.0.dev0)
- qiskit.aer (PennyLane-qiskit-0.14.0.dev0)
- qiskit.basicaer (PennyLane-qiskit-0.14.0.dev0)
- qiskit.ibmq (PennyLane-qiskit-0.14.0.dev0)
- forest.numpy_wavefunction (PennyLane-Forest-0.14.0)
- forest.qvm (PennyLane-Forest-0.14.0)
- forest.wavefunction (PennyLane-Forest-0.14.0)
- default.gaussian (PennyLane-0.15.0.dev0)
- default.mixed (PennyLane-0.15.0.dev0)
- default.qubit (PennyLane-0.15.0.dev0)
- default.qubit.autograd (PennyLane-0.15.0.dev0)
- default.qubit.jax (PennyLane-0.15.0.dev0)
- default.qubit.tf (PennyLane-0.15.0.dev0)
- default.tensor (PennyLane-0.15.0.dev0)
- default.tensor.tf (PennyLane-0.15.0.dev0)

Source code and tracebacks

import pennylane as qml
from pennylane import numpy as np
from pennylane.templates.state_preparations import MottonenStatePreparation

n_qubits = 2
dev = qml.device('default.qubit.autograd', wires=n_qubits)

# Unnormalized initial state; the cost function normalizes it before use
statevector = np.array([1, 1, 1, 1], dtype=np.float64, requires_grad=True)

@qml.qnode(dev, interface='autograd')
def circuit(state):
    MottonenStatePreparation(state, wires=list(range(dev.num_wires)))
    return qml.expval(qml.PauliZ(0))

def cost(params):
    # Normalize the parameters so they form a valid state vector
    norm = np.sum(np.abs(params) ** 2)
    params = params / np.sqrt(norm)
    return circuit(params)

opt = qml.GradientDescentOptimizer(0.1)
steps = 500

for i in range(steps):
    statevector = opt.step(cost, statevector)
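(Not part of the original report.) A small sketch of how to watch the behaviour described above, as a drop-in replacement for the optimization loop, reusing cost, opt, steps and statevector from the snippet: evaluate the gradient at every step and stop as soon as a nan entry appears.

# Sketch: drop-in replacement for the loop above that stops at the first nan gradient
grad_fn = qml.grad(cost)

for i in range(steps):
    g = grad_fn(statevector)
    if np.any(np.isnan(g)):
        print(f"nan in the gradient at step {i}: {g}")
        break
    statevector = opt.step(cost, statevector)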

Additional information

An alternative snippet using torch with torch.autograd.set_detect_anomaly(True) reveals RuntimeError: Function 'AbsBackward' returned nan values in its 0th output., probably hinting that abs is already receiving a nan value in the backward pass.
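The torch snippet itself is not included in the report; below is a rough sketch of what such a reproduction could look like, porting the same circuit and cost to the torch interface (the default.qubit device, torch.optim.SGD and the 0.1 learning rate are my assumptions, not the author's code):

import torch
import pennylane as qml
from pennylane.templates.state_preparations import MottonenStatePreparation

# Raise an error as soon as a backward pass produces nan values
torch.autograd.set_detect_anomaly(True)

n_qubits = 2
dev = qml.device('default.qubit', wires=n_qubits)

@qml.qnode(dev, interface='torch')
def circuit(state):
    MottonenStatePreparation(state, wires=list(range(n_qubits)))
    return qml.expval(qml.PauliZ(0))

statevector = torch.tensor([1.0, 1.0, 1.0, 1.0], requires_grad=True)

def cost(params):
    # Normalize the parameters so they form a valid state vector
    norm = torch.sum(torch.abs(params) ** 2)
    return circuit(params / torch.sqrt(norm))

opt = torch.optim.SGD([statevector], lr=0.1)

for i in range(500):
    opt.zero_grad()
    loss = cost(statevector)
    loss.backward()  # anomaly detection reports the failing backward node here
    opt.step()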

co9olguy commented 3 years ago

This might be unavoidable if the equations defining the MottonenStatePreparation have divergences at certain parameter values?

i.e., not sure if it's a bug or working as it should :thinking:
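As a toy illustration of that kind of divergence (this is not the actual Möttönen angle formula, just a map of the same flavour), the gradient of an amplitude-ratio-to-angle function blows up as the relevant amplitudes shrink towards zero:

import pennylane as qml
from pennylane import numpy as np

# Toy map from two amplitudes to a rotation angle: theta = 2 * arcsin(|x1| / ||x||)
def angle(x):
    return 2 * np.arcsin(np.abs(x[1]) / np.sqrt(np.abs(x[0]) ** 2 + np.abs(x[1]) ** 2))

grad_angle = qml.grad(angle)

# The gradient grows without bound as both amplitudes approach zero,
# where the map stops being differentiable
for eps in [1e-1, 1e-4, 1e-8]:
    print(eps, grad_angle(np.array([eps, eps], requires_grad=True)))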

antalszava commented 3 years ago

Yes, that could easily be the case! I'll remove the bug label for now; it can be re-added if we determine that this really is a bug.

josh146 commented 3 years ago

@antalszava could this be due to the derivative of abs not being defined at 0? This could be happening either in your cost function or in the Mottonen state prep. If so, then we have two options:

trbromley commented 3 years ago

@DSGuala is working on a PR (https://github.com/PennyLaneAI/pennylane/pull/1144) that tweaks Mottonen when the input state is real-valued. The tweak removes an extra, unnecessary entangling block at the end, but we may run into this issue there and potentially look at solving it as part of that PR.

dwierichs commented 2 months ago

This issue is due to non-differentiable points of the map from state vectors to rotation angles in MottonenStatePreparation, as suggested by @co9olguy. For example, the optimization in the code above reaches state=[0,0,1/np.sqrt(2),1/np.sqrt(2)], where the derivative of MottonenStatePreparation is not defined in its current form.
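A quick way to check this, reusing cost, qml and np from the snippet in the issue description, is to evaluate the gradient directly at that state; according to the explanation above, the result should contain nan entries:

# Gradient of the cost at the non-differentiable state singled out above
bad_state = np.array([0.0, 0.0, 1 / np.sqrt(2), 1 / np.sqrt(2)], requires_grad=True)
print(qml.grad(cost)(bad_state))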

We are aware of this in PennyLane and warn users about non-differentiable points in the documentation. Therefore I'm wondering whether it would make sense to close this issue?