PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

Add broadcasting support for MottonenStatePreparation #4460

Open timmysilv opened 1 year ago

timmysilv commented 1 year ago

Feature details

As the title says. This will probably not be trivial, since the decomposition differs for each state in a batch, and the resulting operations may not all fit on one tape (i.e. not true broadcasting).

Implementation

No response

How important would you say this feature is?

2: Somewhat important. Needed this quarter.

Additional information

I think this should be added because we're pushing to allow StatePrepBase ops (e.g. StatePrep) mid-circuit, with the caveat that they are decomposed when found mid-circuit. This is usually fine because they have decompositions defined: for example, StatePrep (and AmplitudeEmbedding, which decomposes to a StatePrep) decomposes to MottonenStatePreparation. Those operators claim to support batching, but MottonenStatePreparation does not, so in practice they only support batching at the beginning of a circuit. This can cause unexpected behaviour.

E.g.

>>> op = qml.MottonenStatePreparation(np.array([[1, 0], [0, 1]]), wires=[0])
>>> op.decomposition()
[]

If an AmplitudeEmbedding is placed mid-circuit with a batch dimension, it will silently have no effect.
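For reference, the batched state above should reduce to one RY angle per batch entry. A minimal numpy sketch (assuming real, normalized single-qubit amplitudes; this is an illustration, not PennyLane code) shows what a broadcast-aware decomposition ought to produce:

```python
import numpy as np

# Batched single-qubit states, as in the example above.
states = np.array([[1.0, 0.0], [0.0, 1.0]])

# For a real state [a, b], |psi> = RY(theta)|0> with theta = 2*arctan2(b, a),
# so a broadcast-aware decomposition should emit RY([0, pi]) rather than [].
theta = 2 * np.arctan2(states[:, 1], states[:, 0])
print(theta)  # [0.         3.14159265]
```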

daehiff commented 9 months ago

I stumbled upon this issue in a private project. It seems the main problem was that in numerator = qml.math.take(a, indices=indices_numerator, axis=-1) the axis was incorrectly specified for the PyTorch interface.

The second piece of undefined behaviour (mentioned in this issue) is:

>>> op = qml.MottonenStatePreparation(np.array([[1, 0], [0, 1]]), wires=[0])
>>> op.decomposition()
[]

This could be resolved by applying the gate in _apply_uniform_rotation_dagger if any element of the current batch is nonzero (see: https://github.com/daehiff/pennylane/blob/master/pennylane/templates/state_preparations/mottonen.py#L126).

In the current version, the relevant section of this method is:

    if gray_code_rank == 0:
        if qml.math.is_abstract(theta) or qml.math.all(theta[..., 0] != 0.0):
            op_list.append(gate(theta[..., 0], wires=[target_wire]))
        return op_list

I changed it to:

    if gray_code_rank == 0:
        if qml.math.is_abstract(theta) or qml.math.any(theta[..., 0] != 0.0):
            op_list.append(gate(theta[..., 0], wires=[target_wire]))
        return op_list

This way the example above would result in:

>>> op = qml.MottonenStatePreparation(np.array([[1, 0], [0, 1]]), wires=[0])
>>> op.decomposition()
[RY([0.        , 3.14159265], wires=[0])]

which is what I would expect.
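The all-vs-any difference matters exactly when a batch mixes zero and nonzero angles. A toy numpy illustration, using the angles that arise from the batched state [[1, 0], [0, 1]]:

```python
import numpy as np

# Broadcasted angles: the first batch entry needs RY(0), the second RY(pi).
theta = np.array([[0.0], [np.pi]])  # shape (batch_size, 1)

all_nonzero = np.all(theta[..., 0] != 0.0)  # original check
any_nonzero = np.any(theta[..., 0] != 0.0)  # proposed check
print(all_nonzero)  # False -> the gate would be silently dropped
print(any_nonzero)  # True  -> RY([0, pi]) is emitted
```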

Are you interested in a PR, or do you consider this resolved since you don't expect batched input for MottonenStatePreparation?

timmysilv commented 9 months ago

Hi @daehiff, thanks for taking a look at this! I think I left out some key context in the issue description (originally noted here) about what makes this a tricky task to complete. Quick recap of the issue: when broadcasting, the decomposition is hard to make work without significant changes, because different states need decompositions of varying lengths and operator types. For example:

>>> qml.MottonenStatePreparation(np.array([0, 0, 0, 1]), wires=[0, 1]).decomposition()
[RY(array(3.14159265), wires=[0]), RY(array(1.57079633), wires=[1]), CNOT(wires=[0, 1]), RY(array(-1.57079633), wires=[1]), CNOT(wires=[0, 1])]
>>> qml.MottonenStatePreparation(np.array([0, 0, 1, 0]), wires=[0, 1]).decomposition()
[RY(array(3.14159265), wires=[0]), CNOT(wires=[0, 1]), CNOT(wires=[0, 1])]

On inspection, you can see that this could be reduced to something broadcastable, but making it work in general will need more involved changes. Would you like to take on this issue?

daehiff commented 9 months ago

Hi @timmysilv !

Thanks for your reply and ah yes, I see the problem. I will attempt to solve it!

Would a viable solution be to insert "identity gates" (i.e. RY(0) rotations)?

timmysilv commented 9 months ago

It could work, but I'm honestly not sure - I'll leave some of the investigating up to you! Here's where I'd start: MottonenStatePreparation.compute_decomposition calls _apply_uniform_rotation_dagger, and it only seems to create variable-length decompositions because of this if-condition. I suspect that deleting this if-condition will suffice, but it would take some testing to prove the correctness. Hope that helps!
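The padding idea is sound because RY(0) is exactly the identity, so unconditionally emitting the rotation (with zero angles for batch entries that don't need it) leaves those branches unchanged. A quick sanity check using the standard RY matrix:

```python
import numpy as np

def ry(theta):
    """Standard single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# RY(0) is the identity, so padding a decomposition with it is harmless.
print(np.allclose(ry(0.0), np.eye(2)))  # True
```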

dwierichs commented 4 months ago

@daehiff I agree with your solution; I believe the qml.math.all is essentially an undiscovered bug, kept in place because we chose not to support broadcasted decompositions until now. There is an additional complication, though: the decomposition in general includes a GlobalPhase, which does not support broadcasting yet :grimacing: So a prerequisite would be to implement broadcasting for GlobalPhase.
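For context on what broadcasting a global phase means numerically: each batch entry of the state just picks up its own scalar factor, which numpy broadcasting handles directly. A sketch assuming the exp(-i*phi) convention (an illustration, not PennyLane code):

```python
import numpy as np

# Batched phase parameters and batched 2-qubit statevectors.
phis = np.array([0.0, np.pi / 2])
states = np.array([[1, 0, 0, 0], [0, 0, 0, 1]], dtype=complex)

# A broadcasted global phase multiplies each batch entry by exp(-1j * phi).
out = np.exp(-1j * phis)[:, None] * states
print(out[1])  # second entry scaled by exp(-i*pi/2) = -1j
```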