yardstiq / quantum-benchmarks

benchmarking quantum circuit emulators for your daily research usage

Update PennyLane benchmarks #46

Closed antalszava closed 2 years ago

antalszava commented 2 years ago

Hi @Roger-luo, I've created a PR that updates how the PennyLane benchmarks are performed.

It changes two major parts:

1. How the benchmarks are performed

As discussed in https://github.com/yardstiq/quantum-benchmarks/pull/7, the way PennyLane works differs from that of other quantum computing frameworks.

In PennyLane, the most common user interface involves both:

a) executing a quantum circuit on a device, and
b) computing an expectation value (or some other quantity).
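As a rough, framework-agnostic sketch of these two steps (plain Python on a single-qubit statevector, not PennyLane's actual API), applying a gate and then reading off an expectation value might look like:

```python
import math

# Illustrative toy version of the two steps above; this is NOT
# PennyLane's API, just plain Python on a 1-qubit statevector [a, b].

def apply_hadamard(state):
    # Step a) "device execution": apply the Hadamard gate,
    # H[a, b] = [(a + b)/sqrt(2), (a - b)/sqrt(2)]
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def expval_z(state):
    # Step b) "expectation value": <Z> = |a|^2 - |b|^2
    a, b = state
    return abs(a) ** 2 - abs(b) ** 2

state = apply_hadamard([1.0, 0.0])  # step a) alone is what #7 tried to isolate
print(state)                        # roughly [0.7071, 0.7071]
print(expval_z(state))              # 0.0 for the resulting |+> state
```

Benchmarking only the first function corresponds to step a), while the common PennyLane user pipeline includes both steps.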

To make a fair comparison with other quantum computing frameworks, an error was added in #7 in an attempt to restrict the PennyLane execution pipeline to the device execution step. This approach, however, uses an outdated device API (the pre_measure method) and affects the benchmarking results.

The correct, up-to-date approach is to call the apply method of the device: this method applies the quantum operations defined in the circuit using the statevector of the device.

import pennylane as qml

# Create the device; its statevector starts in |0>
dev = qml.device('lightning.qubit', wires=1)

print("Statevector before applying any operations: ", dev.state)

# Apply the circuit's operations directly on the device
operations = [qml.Hadamard(0)]
dev.apply(operations)

print("Statevector after applying the Hadamard gate: ", dev.state)

Output:

Statevector before applying any operations:  [1.+0.j 0.+0.j]
Statevector after applying the Hadamard gate:  [0.70710678+0.j 0.70710678+0.j]

2. The device used

A more performant device, lightning.qubit, has been added to the PennyLane ecosystem and is available to users installing PennyLane >v0.18.0. The benchmarks were updated to use this device.

By issuing pip install pennylane, the device is available via qml.device("lightning.qubit", wires=number_of_qubits).
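As a back-of-the-envelope note on why the device choice matters (simple arithmetic, not a measured benchmark; the byte count assumes complex128 amplitudes): a dense n-qubit statevector holds 2**n amplitudes, which is where a compiled backend such as lightning.qubit pays off over a pure-Python one.

```python
# Rough sizing sketch: a dense statevector stores 2**n complex amplitudes.
# Assumes 16 bytes per amplitude (complex128); illustrative only.

def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 20, 30):
    print(f"{n} qubits: {statevector_bytes(n) / 2**20:.3f} MiB")
```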


Let me know if any further changes or details would be required to have these changes in. Thank you!

Roger-luo commented 2 years ago

Thanks for the PR! And sorry for the late reply - I'm travelling until the 25th; after that I'll be able to run this PR on the machine to test it and update the results too.

antalszava commented 2 years ago

Sure thing! Thank you :slightly_smiling_face:

antalszava commented 2 years ago

Hi @Roger-luo :slightly_smiling_face: Just checking in to see if you've had the chance to update the results based on this PR. Thank you!

Roger-luo commented 2 years ago

Sorry for the late reply. I tried to run the script today, but our plot script seems to be out of date. I'll merge this PR first, then have a look at the runners.

antalszava commented 2 years ago

Sounds good, thank you!

antalszava commented 2 years ago

Hi @Roger-luo, happy New Year! :slightly_smiling_face: Just checking in to see if you've had any luck with the plotting script, so that the plots (e.g., in the README.md file) could be updated.

Roger-luo commented 2 years ago

Hi @antalszava, sorry for not having any progress on this. I'm working on a rewrite of the benchmark since the code has not been updated for a while, so I need to upgrade a bunch of things. I also don't understand much about the AWS service and webpage SK setup from before (which is the main issue, as I need to run things reliably in parallel). I think this will take a while given that I need to work on two new open-source software projects at the moment, but I'll try my best to get this benchmark code alive again.

antalszava commented 2 years ago

Hi @Roger-luo, thank you!

In parallel to that, could you perhaps add a disclaimer to the README file indicating that the benchmark results shared there are out of date? This would be informative for those who have just found the repo and are checking the results.

Roger-luo commented 2 years ago

sure @antalszava , done in https://github.com/yardstiq/quantum-benchmarks/commit/7959238588ab590d9277731630a45863364053c1

antalszava commented 2 years ago

Thank you! :tada: