Closed: neak11 closed this issue 1 year ago
Hi @neak11, thank you for opening this issue!
Hi @neak11, thanks for reporting this! I actually noticed something very similar a while back in #4460, but this bug report highlights a key component of the issue. I'll open a bug fix to internally convert an axis value of -1 to the actual, non-negative dimension index, and that should fix some things.
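As a rough illustration of that kind of fix, here is a minimal sketch, assuming a hypothetical helper (take_with_axis is not the actual PennyLane patch) that normalizes a negative axis before indexing a torch tensor:

import torch

def take_with_axis(tensor, indices, axis):
    # Hypothetical helper, not the actual PennyLane fix: map a negative axis
    # such as -1 to the equivalent non-negative dimension index.
    if axis < 0:
        axis = axis + tensor.ndim
    # index_select gathers entries along the given dimension
    return torch.index_select(tensor, axis, torch.as_tensor(indices))

x = torch.arange(6).reshape(2, 3)
print(take_with_axis(x, [0], axis=-1))  # first column, matching numpy.take with axis=-1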
Unfortunately, there will still be more work needed to close #4460, as broadcasting support for MottonenStatePreparation is not trivial. For example, with my torch fix patched in, I get the following decompositions for $|10\rangle$ and $|11\rangle$ without broadcasting, respectively:
>>> data = [[0, 0, 1, 0], [0, 0, 0, 1]]
>>> for d in data:
...     print(qml.MottonenStatePreparation.compute_decomposition(d, wires=[0, 1]))
[RY(array(3.14159265), wires=[0]), CNOT(wires=[0, 1]), CNOT(wires=[0, 1])]
[RY(array(3.14159265), wires=[0]), RY(array(1.57079633), wires=[1]), CNOT(wires=[0, 1]), RY(array(-1.57079633), wires=[1]), CNOT(wires=[0, 1])]
As you can see, they are not the same set of operations. This makes broadcasting more complicated, so we get the wrong result:
>>> qml.MottonenStatePreparation.compute_decomposition(data, wires=[0, 1])
[RY(array([3.14159265, 3.14159265]), wires=[0]),
CNOT(wires=[0, 1]),
CNOT(wires=[0, 1])]
Not all PennyLane operators have broadcasting support yet, and unfortunately qml.MottonenStatePreparation is one of them. That said, when you provide a qml.StatePrep operator as the first operation in a circuit on default.qubit, because it's a simulator it won't compute a decomposition; it will just set the state to the desired input! You can replace qml.MottonenStatePreparation with qml.StatePrep in your circuit while we work on adding broadcasting support to MottonenStatePreparation. Let me know if that works for you!
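For concreteness, here is a minimal sketch of that workaround, assuming a small two-wire circuit and placeholder data (none of this is taken from the original model):

import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev, interface="torch")
def circuit(state):
    # Set the state directly instead of decomposing it with MottonenStatePreparation
    qml.StatePrep(state, wires=[0, 1])
    return qml.probs(wires=[0, 1])

# Broadcasted batch of the basis states |10> and |11>
batch = torch.tensor([[0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]])
print(circuit(batch))  # one probability vector per batch entry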
Using qml.StatePrep instead does the job. Thanks for the tip, @timmysilv!
Fantastic, glad to hear it 🥳 I'm going to close this issue, and we'll track the addition of broadcasting support for MottonenStatePreparation in the issue linked above.
Expected behavior
The created PyTorch model uses a quantum layer with MottonenStatePreparation that is supposed to be optimized with the provided data.
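A minimal sketch of this kind of setup, assuming hypothetical layer sizes and hyperparameters (not the reporter's actual code), might look like:

import pennylane as qml
import torch

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qnode(inputs, weights):
    # Embed each (normalized) input vector as a quantum state
    qml.MottonenStatePreparation(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# Wrap the QNode as a trainable PyTorch layer
weight_shapes = {"weights": (3, n_qubits)}
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
model = torch.nn.Sequential(qlayer, torch.nn.Linear(n_qubits, 1))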
Actual behavior
The optimization fails. Apparently, the _get_alpha_y and _get_alpha_z methods from MottonenStatePreparation use qml.math.take with axis=-1, while the torch implementation of qml.math.take appears to handle only non-negative axis indices; negative indices produce the same result as axis=0.
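A small sketch of the described mismatch, assuming the behaviour reported at the time (exact results may differ across PennyLane versions):

import numpy as np
import torch
import pennylane as qml

x_np = np.arange(6).reshape(2, 3)
x_torch = torch.arange(6).reshape(2, 3)

# NumPy: axis=-1 takes along the last axis (columns)
print(qml.math.take(x_np, [0], axis=-1))
# Torch: as described above, axis=-1 was reportedly treated like axis=0 (rows)
print(qml.math.take(x_torch, [0], axis=-1))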
Additional information
No response
Source code
Tracebacks
System information
Existing GitHub issues