qiskit-advocate / qamp-fall-22

Qiskit advocate mentorship program (QAMP) fall 22 cohort (Sep - Dec 2022)

tensor networks for QML #28

Closed · MaldoAlberto closed this issue 1 year ago

MaldoAlberto commented 2 years ago

Description

To design in a general way the quantum circuit to reproduce the tensor networks MERA, TTN, MPS for n qubits, and be able to apply to different datasets from Iris dataset to an unbalanced dataset that has missing values.
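As a rough illustration of the kind of construction the project aims at, here is a minimal sketch of a simple qubit-MERA ansatz in Qiskit. The `two_qubit_block` helper, the block contents, and the parameter layout are illustrative assumptions rather than the project's final design (TTN and MPS sketches appear further down the thread):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def two_qubit_block(qc, theta, a, b):
    # minimal parameterized block: one Ry per qubit plus a CX entangler
    qc.ry(theta[0], a)
    qc.ry(theta[1], b)
    qc.cx(a, b)

def mera_ansatz(n_qubits):
    # alternate disentangler and isometry layers, halving the active qubits
    qc = QuantumCircuit(n_qubits)
    active = list(range(n_qubits))
    n_blocks = 0
    while len(active) > 1:
        # disentanglers act across neighbouring pairs, offset by one site
        for a, b in zip(active[1:-1:2], active[2::2]):
            theta = ParameterVector(f"d{n_blocks}", 2)
            two_qubit_block(qc, theta, a, b)
            n_blocks += 1
        # isometries merge pairs; only the second qubit of each pair survives
        for a, b in zip(active[0::2], active[1::2]):
            theta = ParameterVector(f"i{n_blocks}", 2)
            two_qubit_block(qc, theta, a, b)
            n_blocks += 1
        active = active[1::2]
    return qc

print(mera_ansatz(8).draw())
```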

Deliverables

Tutorials and possibly a journal paper.

Mentors details

Number of mentees

1

Type of mentees

bopardikarsoham commented 2 years ago

Hello @MaldoAlberto, this issue fits my area of interest. I would love to chat further about this project.

Gopal-Dahale commented 2 years ago

Will these be added to Qiskit's circuit library under the N-local circuits?

hykavitha commented 2 years ago

@MaldoAlberto @HuangJunye: I would like to be a mentee on this one. I do QML at my day job, so I think this fits my interests too.

MaldoAlberto commented 2 years ago

@Gopal-Dahale, yes, that is the idea: to open an issue about this implementation, like you said :D

MaldoAlberto commented 2 years ago

@hykavitha yes, you can submit your proposal for this project :D

hykavitha commented 2 years ago

@MaldoAlberto: I will drop from this one and focus on #35.

AntonSimen06 commented 2 years ago

@GemmaDawson

Gopal-Dahale commented 2 years ago

Slides for checkpoint 1.

AntonSimen06 commented 2 years ago

Checkpoint 2: Progress

Tutorial

A tutorial is being written on how to use Qiskit to design tensor-network quantum circuits and apply them, as a meta-ansatz, in supervised quantum learning models for multi-class classification. So far, it explains how to represent Matrix Product States (MPS) and Tree Tensor Networks (TTN) as quantum circuits, and work is starting on applying these circuits in a kernel-based quantum classifier on the Iris dataset. The idea is to benchmark, in terms of performance and quantum hardware resources, how tensor-network quantum circuits perform in multi-class classification tasks compared to the variational forms usually applied. A minimal sketch of such a kernel-based classifier follows below.
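The sketch assumes a recent qiskit-machine-learning; the `ZZFeatureMap` is only a stand-in for the slot where the tutorial plugs in a tensor-network circuit:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

X, y = load_iris(return_X_y=True)  # 4 features, 3 classes
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# stand-in feature map; the tutorial substitutes an MPS/TTN circuit here
feature_map = ZZFeatureMap(feature_dimension=4, reps=1)
kernel = FidelityQuantumKernel(feature_map=feature_map)

qsvc = QSVC(quantum_kernel=kernel)  # SVC trained on the quantum kernel matrix
qsvc.fit(X_train, y_train)
print("test accuracy:", qsvc.score(X_test, y_test))
```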

Benchmark

Benchmarked the TTN and MPS ansätze on the binary MNIST dataset (digits 0 and 1). We used three different feature maps: amplitude encoding (our implementation), Qiskit's RawFeatureVector, and angle encoding ($R_x$ gates). COBYLA, SPSA, and L_BFGS_B were used as optimizers. Data preprocessing involved PCA down to 4 or 16 components followed by normalization. All simulations were run on GPU with cuQuantum. We used wandb for experiment management, and the results can be found here.
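A hedged sketch of the preprocessing and training setup described above; dataset loading is omitted, `TwoLocal` stands in for the TTN/MPS ansatz, and the hyperparameter values are illustrative only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit.circuit.library import TwoLocal
from qiskit.algorithms.optimizers import COBYLA
from qiskit_machine_learning.algorithms.classifiers import VQC

def angle_encoding(n_qubits):
    # one Rx rotation per PCA component, as in the angle-encoding feature map
    x = ParameterVector("x", n_qubits)
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        qc.rx(x[i], i)
    return qc

def preprocess(X_raw, n_components=4):
    # PCA down to a few components, then rescale into [0, pi] for the encoding
    X = PCA(n_components=n_components).fit_transform(X_raw)
    return MinMaxScaler(feature_range=(0, np.pi)).fit_transform(X)

# X_raw, y = flattened MNIST images of digits 0 and 1 (loading not shown)
vqc = VQC(
    feature_map=angle_encoding(4),
    ansatz=TwoLocal(4, "ry", "cx", reps=2),  # stand-in for the TTN/MPS ansatz
    optimizer=COBYLA(maxiter=100),
)
# vqc.fit(preprocess(X_raw), y)
```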

     ┌──────────┐            ┌───┐            ┌───┐                             ┌───┐                             ┌───┐                                                                ┌───┐             »
q_0: ┤ Ry(x[0]) ├─────■──────┤ X ├─────■──────┤ X ├─────■────────────────■──────┤ X ├─────■────────────────■──────┤ X ├─────■────────────────■────────────────■─────────────────■──────┤ X ├──────■──────»
     └──────────┘┌────┴─────┐└───┘┌────┴─────┐└───┘     │      ┌───┐     │      ├───┤     │      ┌───┐     │      ├───┤     │                │      ┌───┐     │                 │      ├───┤      │      »
q_1: ────────────┤ Ry(x[1]) ├─────┤ Ry(x[2]) ├──────────■──────┤ X ├─────■──────┤ X ├─────■──────┤ X ├─────■──────┤ X ├─────■────────────────■──────┤ X ├─────■─────────────────■──────┤ X ├──────■──────»
                 └──────────┘     └──────────┘     ┌────┴─────┐└───┘┌────┴─────┐└───┘┌────┴─────┐└───┘┌────┴─────┐└───┘     │      ┌───┐     │      ├───┤     │      ┌───┐      │      ├───┤      │      »
q_2: ──────────────────────────────────────────────┤ Ry(x[3]) ├─────┤ Ry(x[4]) ├─────┤ Ry(x[5]) ├─────┤ Ry(x[6]) ├──────────■──────┤ X ├─────■──────┤ X ├─────■──────┤ X ├──────■──────┤ X ├──────■──────»
                                                   └──────────┘     └──────────┘     └──────────┘     └──────────┘     ┌────┴─────┐└───┘┌────┴─────┐└───┘┌────┴─────┐└───┘┌─────┴─────┐└───┘┌─────┴─────┐»
q_3: ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤ Ry(x[7]) ├─────┤ Ry(x[8]) ├─────┤ Ry(x[9]) ├─────┤ Ry(x[10]) ├─────┤ Ry(x[11]) ├»
                                                                                                                       └──────────┘     └──────────┘     └──────────┘     └───────────┘     └───────────┘»
«                                                              ┌───┐    ┌──────────┐
«q_0: ───────────■─────────────────■─────────────────■─────────┤ X ├────┤ Ry(θ[0]) ├──■────────────────────────────────────
«                │      ┌───┐      │                 │         ├───┤    ├──────────┤┌─┴─┐┌──────────┐
«q_1: ───────────■──────┤ X ├──────■─────────────────■─────────┤ X ├────┤ Ry(θ[1]) ├┤ X ├┤ Ry(θ[2]) ├──■───────────────────
«     ┌───┐      │      ├───┤      │      ┌───┐      │         ├───┤    ├──────────┤└───┘└──────────┘┌─┴─┐┌──────────┐
«q_2: ┤ X ├──────■──────┤ X ├──────■──────┤ X ├──────■─────────┤ X ├────┤ Ry(θ[3]) ├─────────────────┤ X ├┤ Ry(θ[4]) ├──■──
«     └───┘┌─────┴─────┐└───┘┌─────┴─────┐└───┘┌─────┴─────┐┌──┴───┴───┐└──────────┘                 └───┘└──────────┘┌─┴─┐
«q_3: ─────┤ Ry(x[12]) ├─────┤ Ry(x[13]) ├─────┤ Ry(x[14]) ├┤ Ry(θ[5]) ├──────────────────────────────────────────────┤ X ├
«          └───────────┘     └───────────┘     └───────────┘└──────────┘                                              └───┘

The above figure uses amplitude encoding with the MPS ansatz.
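A minimal sketch of an MPS-style ansatz matching the staircase pattern in the drawing (one Ry per qubit, then a CX chain with a rotation after each intermediate target). The parameter ordering in the figure may differ, and this is an illustration rather than the project's code:

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def mps_ansatz(n_qubits):
    # one initial rotation per qubit plus one after each intermediate CX
    theta = ParameterVector("θ", 2 * n_qubits - 2)
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.ry(theta[q], q)
    k = n_qubits
    for q in range(n_qubits - 1):  # CX staircase down the chain
        qc.cx(q, q + 1)
        if q < n_qubits - 2:       # the final target gets no extra rotation
            qc.ry(theta[k], q + 1)
            k += 1
    return qc

print(mps_ansatz(4).draw())  # reproduces the 6-parameter tail shown above
```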

       ┌──────────┐ ┌──────────┐
 q_0: ─┤ Rx(x[0]) ├─┤ Ry(θ[0]) ├──■────────────────────────────────────────────────────────
       ├──────────┤ ├──────────┤┌─┴─┐┌───────────┐
 q_1: ─┤ Rx(x[1]) ├─┤ Ry(θ[1]) ├┤ X ├┤ Ry(θ[16]) ├──■──────────────────────────────────────
       ├──────────┤ ├──────────┤└───┘└───────────┘  │
 q_2: ─┤ Rx(x[2]) ├─┤ Ry(θ[2]) ├──■─────────────────┼──────────────────────────────────────
       ├──────────┤ ├──────────┤┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐
 q_3: ─┤ Rx(x[3]) ├─┤ Ry(θ[3]) ├┤ X ├┤ Ry(θ[17]) ├┤ X ├┤ Ry(θ[24]) ├──■────────────────────
       ├──────────┤ ├──────────┤└───┘└───────────┘└───┘└───────────┘  │
 q_4: ─┤ Rx(x[4]) ├─┤ Ry(θ[4]) ├──■───────────────────────────────────┼────────────────────
       ├──────────┤ ├──────────┤┌─┴─┐┌───────────┐                    │
 q_5: ─┤ Rx(x[5]) ├─┤ Ry(θ[5]) ├┤ X ├┤ Ry(θ[18]) ├──■─────────────────┼────────────────────
       ├──────────┤ ├──────────┤└───┘└───────────┘  │                 │
 q_6: ─┤ Rx(x[6]) ├─┤ Ry(θ[6]) ├──■─────────────────┼─────────────────┼────────────────────
       ├──────────┤ ├──────────┤┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐
 q_7: ─┤ Rx(x[7]) ├─┤ Ry(θ[7]) ├┤ X ├┤ Ry(θ[19]) ├┤ X ├┤ Ry(θ[25]) ├┤ X ├┤ Ry(θ[28]) ├──■──
       ├──────────┤ ├──────────┤└───┘└───────────┘└───┘└───────────┘└───┘└───────────┘  │
 q_8: ─┤ Rx(x[8]) ├─┤ Ry(θ[8]) ├──■─────────────────────────────────────────────────────┼──
       ├──────────┤ ├──────────┤┌─┴─┐┌───────────┐                                      │
 q_9: ─┤ Rx(x[9]) ├─┤ Ry(θ[9]) ├┤ X ├┤ Ry(θ[20]) ├──■───────────────────────────────────┼──
      ┌┴──────────┤┌┴──────────┤└───┘└───────────┘  │                                   │
q_10: ┤ Rx(x[10]) ├┤ Ry(θ[10]) ├──■─────────────────┼───────────────────────────────────┼──
      ├───────────┤├───────────┤┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐                    │
q_11: ┤ Rx(x[11]) ├┤ Ry(θ[11]) ├┤ X ├┤ Ry(θ[21]) ├┤ X ├┤ Ry(θ[26]) ├──■─────────────────┼──
      ├───────────┤├───────────┤└───┘└───────────┘└───┘└───────────┘  │                 │
q_12: ┤ Rx(x[12]) ├┤ Ry(θ[12]) ├──■───────────────────────────────────┼─────────────────┼──
      ├───────────┤├───────────┤┌─┴─┐┌───────────┐                    │                 │
q_13: ┤ Rx(x[13]) ├┤ Ry(θ[13]) ├┤ X ├┤ Ry(θ[22]) ├──■─────────────────┼─────────────────┼──
      ├───────────┤├───────────┤└───┘└───────────┘  │                 │                 │
q_14: ┤ Rx(x[14]) ├┤ Ry(θ[14]) ├──■─────────────────┼─────────────────┼─────────────────┼──
      ├───────────┤├───────────┤┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐┌─┴─┐┌───────────┐┌─┴─┐
q_15: ┤ Rx(x[15]) ├┤ Ry(θ[15]) ├┤ X ├┤ Ry(θ[23]) ├┤ X ├┤ Ry(θ[27]) ├┤ X ├┤ Ry(θ[29]) ├┤ X ├
      └───────────┘└───────────┘└───┘└───────────┘└───┘└───────────┘└───┘└───────────┘└───┘

The above figure uses angle encoding with the TTN ansatz. We achieved a test accuracy of 93.7% with a training accuracy of 95.0% using amplitude encoding and the SPSA optimizer for TTN. Similar results were obtained with MPS. The general trend is that amplitude encoding gives the best train and test scores (90-95%), followed by Qiskit's RawFeatureVector (70-80%) and finally angle encoding (50-70%). Angle-encoding performance can be tuned by changing the encoding gates ($R_y$ or $R_z$). TTN and MPS perform equally well. We did not find any noticeable change in test accuracy across optimizers. COBYLA runtimes were the lowest, followed by SPSA and then L_BFGS_B.
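For reference, a hedged sketch of a TTN-style ansatz following the binary-tree pattern in the figure (assuming the qubit count is a power of two); the parameter ordering is inferred from the drawing, and the code is illustrative only:

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def ttn_ansatz(n_qubits):
    # one initial rotation per qubit, plus one after every non-final merge
    theta = ParameterVector("θ", 2 * n_qubits - 2)
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.ry(theta[q], q)
    k = n_qubits
    stride = 1
    while stride < n_qubits:  # merge pairs layer by layer up the tree
        for q in range(stride - 1, n_qubits - stride, 2 * stride):
            qc.cx(q, q + stride)
            if 2 * stride < n_qubits:  # the root merge gets no rotation
                qc.ry(theta[k], q + stride)
                k += 1
        stride *= 2
    return qc

print(ttn_ansatz(16).draw())  # reproduces the 30-parameter tree shown above
```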

Visual representation

Data

Iris data visualization. [image]

Quantum feature map and TTN quantum circuit

A full quantum circuit with the Iris feature embedding and the TTN quantum circuit. [image]

Quantum feature map and MPS quantum circuit

A full quantum circuit with the Iris feature embedding and the MPS quantum circuit. [image]

Plots from wandb

[image]

These results give us a direction on which settings to use for upcoming training. The next focus will be to try different datasets and hybrid models. The hybrid models we have tested so far are not performing satisfactorily, and we aim to achieve higher accuracy with them.

GemmaDawson commented 2 years ago

@AntonSimen06, please do not forget to add a visual/image

AntonSimen06 commented 2 years ago

We are on it. Thanks!

Gopal-Dahale commented 1 year ago

Slides for checkpoint 3.

GemmaDawson commented 1 year ago

Congratulations on completing all the requirements for QAMP Fall 2022!! 🌟🌟🌟