PennyLaneAI / pennylane

PennyLane is a cross-platform Python library for quantum computing, quantum machine learning, and quantum chemistry. Train a quantum computer the same way as a neural network.
https://pennylane.ai
Apache License 2.0

Ability to switch off split_non_commuting if user knows that all observables commute #4664

Open cvjjm opened 11 months ago

cvjjm commented 11 months ago

Feature details

When measuring many observables that I, as the user, know are jointly measurable (such as a whole set of tensor products of PauliZ operators), PL still tries to split and group those observables. This can easily take an order of magnitude more time than the actual simulation of the circuit on the device, as the following instrument trace shows.

Please add a kwarg to qml.qnode() that lets me switch off the automatic splitting and grouping. If a user specifies this but provides incompatible observables, PL should simply raise an exception.

   │  │        │                 ├─ 24.047 QNode.__call__  pennylane/qnode.py:589
   │  │        │                 │  └─ 23.797 execute  pennylane/interfaces/execution.py:222
   │  │        │                 │     ├─ 21.543 map_batch_transform  pennylane/transforms/batch_transform.py:422
   │  │        │                 │     │  └─ 21.543 CovestroQubit.batch_transform  pennylane/_device.py:677
   │  │        │                 │     │     └─ 21.520 batch_transform.__call__  pennylane/transforms/batch_transform.py:317
   │  │        │                 │     │        └─ 21.520 <lambda>  pennylane/transforms/batch_transform.py:419
   │  │        │                 │     │           └─ 21.520 batch_transform.construct  pennylane/transforms/batch_transform.py:386
   │  │        │                 │     │              └─ 21.517 split_non_commuting  pennylane/transforms/split_non_commuting.py:24
   │  │        │                 │     │                 └─ 21.512 group_observables  pennylane/grouping/group_observables.py:180
   │  │        │                 │     │                    └─ 21.148 PauliGroupingStrategy.colour_pauli_graph  pennylane/grouping/group_observables.py:158
   │  │        │                 │     │                       ├─ 18.746 PauliGroupingStrategy.complement_adj_matrix_for_operator  pennylane/grouping/group_observables.py:118
   │  │        │                 │     │                       │  └─ 18.591 qwc_complement_adj_matrix  pennylane/grouping/utils.py:742
   │  │        │                 │     │                       │     ├─ 17.490 is_qwc  pennylane/grouping/utils.py:585
   │  │        │                 │     │                       │     │  ├─ 8.391 array_equal  <__array_function__ internals>:2
   │  │        │                 │     │                       │     │  │     [16 frames hidden]  <__array_function__ internals>, numpy...
   │  │        │                 │     │                       │     │  ├─ 7.739 [self]  None
   │  │        │                 │     │                       │     │  └─ 0.917 ndarray.astype  None
   │  │        │                 │     │                       │     │        [2 frames hidden]  <built-in>
   │  │        │                 │     │                       │     └─ 1.096 [self]  None
   │  │        │                 │     │                       ├─ 1.698 recursive_largest_first  pennylane/grouping/graph_colouring.py:83
   │  │        │                 │     │                       │  └─ 1.545 n_0  pennylane/grouping/graph_colouring.py:110
   │  │        │                 │     │                       │     └─ 1.238 [self]  None
   │  │        │                 │     │                       └─ 0.703 <listcomp>  pennylane/grouping/group_observables.py:172
   │  │        │                 │     │                          └─ 0.703 <listcomp>  pennylane/grouping/group_observables.py:173
   │  │        │                 │     │                             └─ 0.702 binary_to_pauli  pennylane/grouping/utils.py:261
   │  │        │                 │     │                                └─ 0.494 PauliZ.__init__  pennylane/operation.py:1518
   │  │        │                 │     │                                   └─ 0.404 [self]  None
   │  │        │                 │     └─ 2.253 execute  pennylane/interfaces/autograd.py:26
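The grouping cost visible in the trace comes from pairwise qubit-wise-commutativity checks. The sketch below, a plain-Python illustration (not PennyLane's internal representation, which uses binary symplectic vectors), shows the property the user already knows holds for their observables: Pauli words made only of PauliZ factors are always pairwise qubit-wise commuting, so grouping is a no-op for them.

```python
# Sketch: qubit-wise commutativity (QWC) check for Pauli words.
# Two Pauli words are QWC when, on every shared wire, the operators agree
# (wires acted on by only one word are trivially compatible). Observables
# that are all pairwise QWC, e.g. tensor products of PauliZ only, can be
# measured in a single circuit, so splitting/grouping is unnecessary.
# The dict representation {wire: "X"/"Y"/"Z"} is illustrative only.

def is_qwc(word_a: dict, word_b: dict) -> bool:
    """Return True if two Pauli words commute qubit-wise."""
    return all(word_a[w] == word_b[w] for w in word_a.keys() & word_b.keys())

def all_jointly_measurable(words) -> bool:
    """True when every pair of observables is qubit-wise commuting."""
    return all(
        is_qwc(words[i], words[j])
        for i in range(len(words))
        for j in range(i + 1, len(words))
    )

# A set of PauliZ tensor products: always jointly measurable.
z_words = [{0: "Z"}, {0: "Z", 1: "Z"}, {1: "Z", 2: "Z"}]
print(all_jointly_measurable(z_words))              # True

# PauliX on wire 0 clashes with PauliZ on wire 0.
print(all_jointly_measurable(z_words + [{0: "X"}])) # False
```

Note that the check itself is cheap per pair; the cost the trace shows is quadratic in the number of observables, which is exactly the work the requested kwarg would skip.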

Implementation

No response

How important would you say this feature is?

3: Very important! Blocking work.

Additional information

No response

albi3ro commented 11 months ago

Thanks for opening the issue @cvjjm. Which device are you using?

Calling split_non_commuting is the responsibility of the device.

Our new default qubit, accessible from master as qml.device('default.qubit') and accessible on the latest release as qml.devices.experimental.DefaultQubit(), can simultaneously handle any number of commuting and non-commuting measurements and does not call split_non_commuting.

If you are using your own device, you can override Device.batch_transform so that it no longer calls split_non_commuting. If you are using someone else's device, you may be able to inherit from it and override the batch_transform method in the subclass:

class ModifiedDevice(OtherDevice):

    def batch_transform(self, circuit: QuantumTape):
        # Return the circuit unchanged with an identity postprocessing
        # function, skipping split_non_commuting entirely.
        def null_postprocessing(results):
            return results[0]
        return (circuit,), null_postprocessing
cvjjm commented 11 months ago

Oops... due to a glitch in my environment setup, the above trace was taken with a very old PL 0.25.1. The timings look better with 0.29.1 (the highest version currently compatible with my code), but I still think it would be very useful to expose the device_batch_transform kwarg that execute() already has via the QNode interface, in a similar way to how max_expansion is already accessible at the QNode level.

albi3ro commented 11 months ago

If solving the problem with inheritance doesn't work, I can also recommend using composition and a device wrapper:

class SkipBatchTransformDevice:

    def __new__(cls, device):
        cls_type = type("SkipBatchTransformDevice", (SkipBatchTransformDevice, device.__class__, ), device.__dict__)
        return object.__new__(cls_type)

    def __init__(self, device):
        self._device = device

    def __getattr__(self, name):
        return getattr(self._device, name)

    def batch_transform(self, circuit):
        def null_postprocessing(results):
            return results[0]
        return (circuit,), null_postprocessing

no_batch_transform_device = SkipBatchTransformDevice(original_device)
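To make the delegation idea above concrete without a real PennyLane device, here is a minimal, self-contained sketch using a stub device class (the stub and its attribute names are illustrative, not part of PennyLane's API). Only batch_transform is intercepted; every other attribute falls through to the wrapped device via __getattr__.

```python
# Stand-in for a real device whose batch_transform we want to bypass.
class StubDevice:
    name = "stub.device"

    def batch_transform(self, circuit):
        raise RuntimeError("expensive splitting/grouping we want to skip")

class SkipBatchTransformDevice:
    """Composition-based wrapper that replaces batch_transform with a no-op."""

    def __init__(self, device):
        self._device = device

    def __getattr__(self, name):
        # __getattr__ is only consulted for attributes not found on the
        # wrapper itself, so batch_transform below shadows the device's.
        return getattr(self._device, name)

    def batch_transform(self, circuit):
        def null_postprocessing(results):
            return results[0]
        return (circuit,), null_postprocessing

wrapped = SkipBatchTransformDevice(StubDevice())
tapes, fn = wrapped.batch_transform("circuit")
print(wrapped.name)    # "stub.device" (delegated to the wrapped device)
print(fn(["result"]))  # "result" (identity postprocessing)
```

This simplified version skips the dynamic type() construction used above; that step matters only if downstream code does isinstance checks against the original device class.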
trbromley commented 11 months ago

@cvjjm are you hoping to do this for a specific device?

cvjjm commented 11 months ago

The workaround proposed by @albi3ro should do the trick, but I still think it would be really useful (not just for me) to be able to switch off the device's batch_transform at the QNode level, in the same way one can already control other aspects of "compilation" such as the expansion_strategy. The device_batch_transform kwarg of execute() already exists, so it should be essentially a one-line fix to expose it via qml.qnode() and simply pass the value on when the QNode calls execute().

trbromley commented 11 months ago

It should be essentially a one-line fix to expose this feature via qml.qnode() and simply pass the value on when the QNode calls execute().

Yes, agreed. Our worry is that odd behaviour might occur if users are allowed to disable the device's batch transform. But perhaps this is OK for power users who know what they are doing!