FelixXu35 / hamiltoniq

A benchmarking toolkit designed for evaluating QAOA performance on real quantum hardware.
MIT License


HamilToniQ: An Open-Source Benchmark Toolkit for Quantum Computers

Table of Contents:

  1. Introduction
  2. Quick Start
  3. H-Scores
  4. Architecture
  5. How to cite

Introduction

HamilToniQ is an application-oriented benchmarking toolkit for the comprehensive evaluation of quantum processing units (QPUs).


Quick Start

Installation

Install the HamilToniQ toolkit by running the following commands in a terminal:

cd /path/to/your/directory
git clone https://github.com/FelixXu35/hamiltoniq.git
cd hamiltoniq
pip install -e .

Benchmark a backend

Copy and run the following Python code, filling in the placeholders for your setup:

from hamiltoniq.bechmark import Toniq

toniq = Toniq()
backend = <your_backend>                        # the backend (QPU or simulator) to benchmark
n_qubits = <your_preferred_number_of_qubits>    # size of the benchmark instance
n_layers = <your_preferred_number_of_layers>    # number of QAOA layers
n_cores = <number_of_cores_in_your_PC>          # parallel worker processes

score = toniq.simulator_run(backend=backend, n_qubits=n_qubits, n_layers=n_layers, n_cores=n_cores)

An example is given in this notebook.

H-Scores

The following results were obtained on the built-in Q matrices.

Note that H-Scores are only comparable between runs with the same number of qubits; comparison across different qubit counts is meaningless.

3 qubits

n_qubits=3

4 qubits

n_qubits=4

5 qubits

n_qubits=5

6 qubits

n_qubits=6

Architecture

For more technical details, please see our arXiv paper.

HamilToniQ's benchmarking workflow, shown in the figure below, commences with the characterization of QPUs, where each QPU is classified according to its type, topology, and multi-QPU configuration. This initial step ensures a tailored approach to the benchmarking process, considering the unique attributes of each QPU. Subsequently, the workflow performs quantum circuit compilation, employing a strategy designed to optimize the execution of quantum circuits on the identified QPU. Integral to the workflow is Quantum Error Mitigation (QEM), which addresses the computational noise and errors that could affect the fidelity of the quantum processes. The culmination of this workflow is the benchmarking result, which quantifies the performance of the QPU in terms of reliability (the H-Score) and Execution Time. These metrics provide a quantitative and objective measure of the QPU's performance. Additionally, the H-Score can help manage computational resources in a Quantum-HPC system.
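The stages described above can be sketched as a simple pipeline. This is an illustrative skeleton only, not the toolkit's actual API: all function names, signatures, and the placeholder metric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    h_score: float        # reliability metric in [0, 1]
    execution_time: float # seconds

def characterize(qpu):
    # Stage 1: classify the QPU by type, topology, and multi-QPU configuration.
    return {"type": qpu["type"], "topology": qpu["topology"]}

def compile_circuits(profile):
    # Stage 2: compile QAOA circuits with a strategy suited to this QPU profile.
    return f"circuits-for-{profile['topology']}"

def run_with_qem(circuits):
    # Stage 3: execute with quantum error mitigation; the numbers here are
    # placeholders, not real measurements.
    return BenchmarkResult(h_score=0.9, execution_time=1.2)

def benchmark(qpu):
    # The full workflow: characterize -> compile -> execute -> result.
    profile = characterize(qpu)
    circuits = compile_circuits(profile)
    return run_with_qem(circuits)

result = benchmark({"type": "superconducting", "topology": "heavy-hex"})
```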

scheme

HamilToniQ primarily comprises two components: the reference part, also known as the ground truth, and the scoring part, as depicted in the figure below. The reference part, which is optional, uses a noiseless simulator to derive the scoring curve; users only need it when benchmarking with their own Q matrices. In the scoring part, the Quantum Approximate Optimization Algorithm (QAOA) is executed a number of times, and the scoring curve is used to assign each run a score based on its accuracy. The final H-Score is the average of these individual scores.
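As a minimal sketch of the scoring idea (not the package's actual implementation), the scoring curve can be modeled as an empirical distribution over noiseless-simulator accuracies: each noisy QAOA run is scored by the fraction of noiseless runs it matches or beats, and the H-Score is the average of those per-run scores. The helper names and the toy accuracy values below are hypothetical.

```python
from bisect import bisect_right

def make_scoring_curve(noiseless_accuracies):
    """Build a scoring curve (empirical CDF) from noiseless QAOA accuracies."""
    sorted_acc = sorted(noiseless_accuracies)
    n = len(sorted_acc)

    def curve(accuracy):
        # Fraction of noiseless runs whose accuracy this run meets or beats.
        return bisect_right(sorted_acc, accuracy) / n

    return curve

def h_score(noisy_accuracies, curve):
    """Average the per-run scores to obtain a single H-Score in [0, 1]."""
    scores = [curve(a) for a in noisy_accuracies]
    return sum(scores) / len(scores)

# Toy example with made-up accuracies:
curve = make_scoring_curve([0.2, 0.4, 0.6, 0.8])
score = h_score([0.5, 0.9], curve)  # per-run scores 0.5 and 1.0 -> H-Score 0.75
```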

flow

How to cite

If you use this package or framework in your research, please cite:

@article{xu2024hamiltoniq,
  title={HamilToniQ: An Open-Source Benchmark Toolkit for Quantum Computers},
  author={Xu, Xiaotian and Chen, Kuan-Cheng and Wille, Robert},
  journal={arXiv preprint arXiv:2404.13971},
  year={2024}
}