PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

`qml.utils.unflatten` does not preserve `requires_grad` flag #966

Open albi3ro opened 3 years ago

albi3ro commented 3 years ago

`qml.utils.unflatten(flat, model)` takes a flattened object `flat` and returns it to the shape of `model`.

But currently, `qml.utils.unflatten` does not preserve the `requires_grad` flag of a PennyLane NumPy tensor. All NumPy arrays are returned as standard, non-PennyLane NumPy arrays.

Many places in PennyLane use this function, so any change will cascade throughout the repository.

Solving this logical inconsistency will therefore take careful, coordinated changes across those call sites.

Bonus idea: if the model array is a default NumPy array, then the function should return a default NumPy array, but if the model array is a PennyLane tensor (which inherits from `ndarray`), then the function should return a PennyLane tensor (a sketch of this idea follows the example below).

Example code:

```python
import pennylane as qml
from pennylane import numpy as np

x = np.array([1.0], requires_grad=False)

# flatten the tensor, then restore it to the shape of the model `x`
x_flat = list(qml.utils._flatten(x))
x_new = qml.utils.unflatten(x_flat, x)

# x_new comes back as a vanilla NumPy array; the requires_grad flag is lost
print(type(x_new))
```
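A possible stop-gap illustrating the bonus idea above (only a sketch; `unflatten_like_model` is a hypothetical helper, not part of PennyLane) would be to re-wrap the result with the model's tensor type and `requires_grad` flag:

```python
import pennylane as qml
from pennylane import numpy as np


def unflatten_like_model(flat, model):
    """Hypothetical wrapper: unflatten `flat` into the shape of `model`,
    then restore the model's tensor type and requires_grad flag."""
    result = qml.utils.unflatten(flat, model)
    if isinstance(model, np.tensor):
        # re-wrap as a PennyLane tensor, copying trainability from the model
        return np.tensor(result, requires_grad=model.requires_grad)
    return result


x = np.array([1.0], requires_grad=False)
x_flat = list(qml.utils._flatten(x))

x_new = unflatten_like_model(x_flat, x)
print(type(x_new), x_new.requires_grad)  # PennyLane tensor, False
```
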
josh146 commented 3 years ago

Thanks for catching this @albi3ro

> If the model array is a default NumPy array, then the function should return a default NumPy array, but if the model array is a PennyLane tensor inherited from ndarray, then the function should return a PennyLane array.

:+1:

kessler-frost commented 3 years ago

Hi, I saw that no one is working on this, so is it fine if I take a stab at this issue?

josh146 commented 3 years ago

Hi @kessler-frost! You're welcome to give it a shot (be warned, however, that I don't think this is a very straightforward or easy issue to fix).

If you're instead looking for a good first issue, have a look for any issue with a good-first-issue tag.

kessler-frost commented 3 years ago

I must've missed that tag. Thanks! I will try one of those and see if I can solve it.

mariaschuld commented 3 years ago

@albi3ro I was looking at the issue and how much time it would take to fix, but then I thought it would be much more elegant to move this functionality to the new `qml.math` module and implement it for all interfaces. This would be consistent, since it is classical processing on a tensor. What do you think?
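As a rough illustration of that direction (a minimal sketch only, handling just a single array-shaped model; it assumes `qml.math.convert_like` casts a plain NumPy array to the same tensor type as a reference tensor, and `unflatten_math` is a hypothetical name, not an agreed API):

```python
import numpy as onp  # vanilla NumPy, used only for the intermediate reconstruction
import pennylane as qml


def unflatten_math(flat, model):
    """Sketch of an interface-agnostic unflatten for a single array-shaped model:
    rebuild the values with vanilla NumPy, then hand them back in whatever
    interface (autograd / Torch / TF / JAX) the model tensor uses."""
    rebuilt = onp.reshape(onp.asarray(list(flat)), onp.shape(model))
    # convert_like casts the plain NumPy result to the same tensor type as `model`
    return qml.math.convert_like(rebuilt, model)
```
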

How high is the priority of this?

josh146 commented 3 years ago

I would say that this is not that high a priority, since it only affects the optimizers, and they already have a hotfix.

albi3ro commented 1 year ago

Mentioning here that we should just try to get rid of `_flatten` and `unflatten` somehow. I think it would be doable. They are only being used in the QNG, RMSProp, and Rotoselect optimizers.
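For context, a flatten-free update is straightforward when the parameters stay a list of arrays; this is only an illustration (not the actual optimizer code), applying a gradient step array by array so no round trip through vanilla NumPy is needed:

```python
from pennylane import numpy as np


def apply_grad_per_array(params, grads, stepsize=0.1):
    """Illustration of a flatten-free update: step each parameter array directly
    instead of flattening everything into one vector and unflattening after."""
    return [p - stepsize * g for p, g in zip(params, grads)]


params = [np.array([0.1, 0.2], requires_grad=True), np.array([0.3], requires_grad=False)]
grads = [np.array([1.0, 1.0]), np.array([0.0])]

new_params = apply_grad_per_array(params, grads)
print([type(p).__name__ for p in new_params])  # each entry is still a PennyLane tensor
```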