XanaduAI / QHack2021

Official repo for QHack—the quantum machine learning hackathon
https://qhack.ai

[Power Up] Exploration of the quantum advantage of hybrid quantum-classical neural networks #8

Closed Qming1368 closed 3 years ago

Qming1368 commented 3 years ago

Team Name:

Qming

Project Description:

The goal of this project is to explore and demonstrate the advantage of hybrid quantum-classical neural networks (QCNNs) over classical models (e.g., convolutional neural networks) through three experiments.

The first experiment can be considered a warm-up for building hybrid QCNNs. We create a simple hybrid model by integrating a quantum binary classifier into a LeNet-like CNN and train it on the MNIST dataset of handwritten digits. Since this task is a binary classification problem, we select only two digit classes (0 and 1) from the MNIST dataset.
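As a sketch of how a quantum binary classifier head can plug into a classical network, consider a single-qubit toy model (a hypothetical simplification for illustration, not the project's actual circuit or LeNet feature extractor): for RY rotations on |0⟩, the Pauli-Z expectation value has a closed form, so the quantum layer reduces to a bounded score in [-1, 1] that can be thresholded into the two digit classes.

```python
import math

def quantum_score(feature, theta):
    """Toy single-qubit classifier head: encode the scalar feature with
    RY(feature), apply a trainable RY(theta), and measure <Z>.
    Successive RY rotations compose, so <Z> = cos(feature + theta)."""
    return math.cos(feature + theta)

def classify(feature, theta):
    # Threshold the expectation value in [-1, 1] into class 0 or 1.
    return 0 if quantum_score(feature, theta) >= 0.0 else 1
```

In the actual hybrid model, such a score would sit on top of the LeNet-like feature extractor, with theta trained jointly with the classical weights.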

In the second experiment, we design a quantum activation function using parameterized quantum circuits and implement it in a hybrid QCNN model. The architecture of this model is very similar to that of the first experiment, except that the quantum activation function is integrated into the network and the quantum classifier is replaced with a classical softmax classifier (so we perform multi-class classification in this case). In addition, we build two classical CNN models by simply replacing the quantum activation function in the hybrid QCNN model with the sigmoid and tanh functions, respectively. We then analyze the three activation functions by training their corresponding models on the MNIST dataset and comparing the models' loss and accuracy curves. We observe from the results that parameterized quantum circuits help avoid the vanishing-gradient problem and can be considered a good option for activation functions in deep learning models. We also prove this advantage of parameterized quantum circuits with an analytic calculation in the context of quantum mechanics, which is shown in the draft slideshow.
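To see intuitively why a circuit-based activation can sidestep vanishing gradients, compare derivative magnitudes of the sigmoid against a trigonometric activation of the kind a parameterized rotation produces (a single-qubit stand-in used only for illustration; the project's actual activation circuit is not shown): the sigmoid's derivative decays exponentially in the saturation regime, while the circuit expectation's derivative is periodic and keeps the same amplitude for any input.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)      # at most 0.25, and -> 0 as |x| grows

def quantum_act(x):
    # <Z> after RY(x) on |0> is cos(x): bounded and periodic.
    return math.cos(x)

def d_quantum_act(x):
    return -math.sin(x)       # amplitude stays 1 for any input

for x in (0.0, 5.0, 10.0):
    print(f"x={x}: d_sigmoid={d_sigmoid(x):.2e}, d_quantum={d_quantum_act(x):+.3f}")
```

At x = 10 the sigmoid's derivative is on the order of 1e-5, while the circuit activation's derivative magnitude is still around 0.5, which mirrors the loss-curve behavior reported above.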

The last experiment is inspired by PennyLane's quantum transfer learning research demo, which implements the idea from the corresponding paper. Following that paper's notion of dressed quantum circuits, we build a hybrid model on top of the pre-trained ResNet18 network and compare its performance against the classical ResNet18 model over a set of public imaging datasets. In addition to the MNIST and Hymenoptera datasets used in the original paper, we use further public datasets, including medical imaging datasets. We demonstrate the quantum advantage of hybrid models and investigate where this advantage comes from. We observe that, in light of quantum entanglement, hybrid quantum models can obtain up to a 6% increase in recognition accuracy over classical CNN models.
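The dressed-quantum-circuit idea can be sketched end to end in a few lines: a classical layer maps extracted features to rotation angles, a small entangling circuit turns the angles into expectation values, and a classical layer maps those back to class scores. The hand-rolled two-qubit simulator below is an illustration under simplified assumptions (the actual circuit uses more qubits and layers, and the features would come from the pre-trained ResNet18):

```python
import math

def dressed_quantum_layer(features, w_in, w_out):
    """2-qubit dressed-circuit sketch: classical pre-layer -> RY angle
    encoding + CNOT -> <Z> measurements -> classical post-layer."""
    # Classical pre-layer: two rotation angles from the input features.
    a = sum(w * f for w, f in zip(w_in[0], features))
    b = sum(w * f for w, f in zip(w_in[1], features))
    # Quantum core on |00>: RY(a) on wire 0, RY(b) on wire 1, CNOT(0, 1).
    c, s = math.cos(a / 2), math.sin(a / 2)
    C, S = math.cos(b / 2), math.sin(b / 2)
    # Amplitudes of |00>, |01>, |10>, |11> after the CNOT.
    amps = [c * C, c * S, s * S, s * C]
    probs = [x * x for x in amps]
    z0 = (probs[0] + probs[1]) - (probs[2] + probs[3])  # <Z> on wire 0
    z1 = (probs[0] + probs[2]) - (probs[1] + probs[3])  # <Z> on wire 1
    # Classical post-layer: linear map from expectations to outputs.
    return [sum(w * z for w, z in zip(row, (z0, z1))) for row in w_out]
```

The CNOT entangles the wires, so ⟨Z⟩ on wire 1 depends on both angles (it works out to cos a · cos b here); this coupling between wires is the kind of entanglement effect the accuracy comparison probes.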

Source code:

draft_source_code

Note: This is a draft code for the initial entry for the AWS Power-up and will be modified and submitted by the final deadline.

Resource Estimate:

If we won the power-up prize, we would further investigate the performance of hybrid quantum-classical neural networks. We could leverage Amazon Braket's fully managed simulators for faster training, fine-tuning and testing of hybrid models, and we could also run experiments on various types of quantum computers from different hardware providers.

We have completed the first and second experiments described in the project description. For the last experiment, we have finished the model performance comparisons on the MNIST, Hymenoptera and Brain Tumor datasets and observed the quantum advantage of hybrid models in light of quantum entanglement. We would like to extend this model evaluation to datasets from more industrial scenarios (e.g., manufacturing, retail, finance). If we could still obtain promising results with hybrid models on those additional datasets by leveraging the Amazon Braket service, it would be more convincing to conclude that hybrid quantum-classical algorithms generally improve the performance of classical machine learning models on image classification tasks. We would then summarize our work in a research paper. In particular, we would emphasize that our work is built on PennyLane and Amazon Braket and encourage more people to use them for exploring and building quantum algorithms.

The resource estimate is given below:

SV1 simulator:

Task1: simulation charge: $1.125 = $0.075 / minute x 15 minutes

Task2: simulation charge: $1.5 = $0.075 / minute x 20 minutes

Aspen-8:

Task3: Three-qubit quantum circuit with three parameters

Circuit evaluations per iteration (one evaluation for the forward pass and 2p evaluations for gradients, where p is the number of parameters): 2 x 2 + 1 = 5

Shot charges per iteration: 5 x 1,000 shots x $0.00035 / shot = $1.75

Shot charges per epoch: $1.75 x 112 iterations = $196

Total shot charge: $196 / epoch x 20 epochs = $3920

Task charges: 1 task x $0.30 / task = $0.30
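The 2p-evaluations figure comes from the parameter-shift rule: each parameter's gradient requires two extra circuit runs at shifted parameter values. A minimal analytic sketch, using the closed-form ⟨Z⟩ = cos θ of a single RY rotation as a stand-in for the real circuit:

```python
import math

def expval(theta):
    # Stand-in "circuit": <Z> after RY(theta) on |0> is cos(theta).
    return math.cos(theta)

def parameter_shift_grad(theta, shift=math.pi / 2):
    # Two circuit evaluations per parameter, at theta +/- shift;
    # for this circuit the result equals the exact derivative -sin(theta).
    return (expval(theta + shift) - expval(theta - shift)) / 2.0
```

For p parameters this costs 2p evaluations on top of the single forward pass, which is where the per-iteration evaluation count above comes from.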

Total charges: $1.125 + $1.5 + $3920 + $0.30 = $3922.925
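The arithmetic above can be reproduced with a small helper (prices as quoted in this estimate; current Braket pricing may differ):

```python
def shot_charge(evals_per_iter, shots, price_per_shot, iters_per_epoch, epochs):
    # Total QPU shot cost across all training iterations.
    return evals_per_iter * shots * price_per_shot * iters_per_epoch * epochs

sv1_charge = 0.075 * 15 + 0.075 * 20                  # SV1: $1.125 + $1.50
aspen_shots = shot_charge(5, 1000, 0.00035, 112, 20)  # Aspen-8 shots: $3,920
task_charge = 1 * 0.30                                # per-task fee
total = sv1_charge + aspen_shots + task_charge        # $3,922.925
```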

co9olguy commented 3 years ago

Thanks for the submission @Qming1368!

Please remember to update the "Source code" and "Resource Estimate" sections by Wed Feb 24 at 12pm EST in order to be considered for the Power Ups :muscle:

Qming1368 commented 3 years ago

@co9olguy Thanks for the reminder! :smiley: I'm working on the "Source code" and "Resource Estimate" sections and will update them by the deadline.

co9olguy commented 3 years ago

Thanks for your Power Up Submission @Qming1368!

To help us keep track of final submissions, we will be closing all of the [Power Up] issues. We ask you to open a new issue for your final submission. Please use this pre-formatted [Entry] Issue template. Note that for the final submission, the Resource Estimate requirement is replaced by a Presentation item.