XanaduAI / QHack2021

Official repo for QHack—the quantum machine learning hackathon
https://qhack.ai

[ENTRY][Floq] Performance Evaluation of Hybrid Quantum-Classical Object Detection Network #80

Open RKHashmani opened 3 years ago

RKHashmani commented 3 years ago

Team Name:

QuantumTunnelers

Project Description:

Our project aims to create hybrid quantum-classical versions of popular object detection networks. Primarily, we are focusing on RetinaNet with a MobileNet (and possibly ResNet-18) feature extraction backbone. Our goal is to introduce quantum layers and measure performance statistics such as mean Average Precision (mAP) and the number of epochs needed to reach a comparable loss value.

The main layer we are focusing on is the convolutional layer. Building on both the original quanvolutional layer introduced in Henderson et al. (2019) and the demo found on PennyLane, we custom-built a quantum convolutional layer that takes any kernel size and output depth as parameters, automatically determines the number of qubits needed, and outputs the appropriate feature map using a quantum circuit as its base.
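For illustration, a minimal PennyLane + PyTorch sketch of such a layer is shown below. The class name `QuanvLayer`, the angle embedding, and the entangler template are illustrative assumptions and may differ from the code in our repository; the sketch also assumes single-channel input and an output depth no larger than the number of qubits.

```python
import torch.nn as nn
import pennylane as qml


class QuanvLayer(nn.Module):
    """Patch-wise quantum convolution: one qubit per pixel in the kernel window."""

    def __init__(self, kernel_size=2, out_channels=4, n_layers=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.out_channels = out_channels           # assumed <= n_qubits in this sketch
        self.n_qubits = kernel_size * kernel_size  # qubit count derived from kernel size
        dev = qml.device("default.qubit", wires=self.n_qubits)
        weight_shapes = {"weights": (n_layers, self.n_qubits)}

        @qml.qnode(dev, interface="torch")
        def circuit(inputs, weights):
            # Encode each pixel of the patch as a rotation angle.
            qml.AngleEmbedding(inputs, wires=range(self.n_qubits))
            # Trainable entangling layers act as the "filter".
            qml.BasicEntanglerLayers(weights, wires=range(self.n_qubits))
            return [qml.expval(qml.PauliZ(w)) for w in range(out_channels)]

        self.qlayer = qml.qnn.TorchLayer(circuit, weight_shapes)

    def forward(self, x):
        # x: (batch, 1, H, W); apply the circuit to non-overlapping patches.
        k = self.kernel_size
        patches = x.unfold(2, k, k).unfold(3, k, k)   # (b, 1, H/k, W/k, k, k)
        b, _, h, w, _, _ = patches.shape
        flat = patches.reshape(b * h * w, self.n_qubits)
        out = self.qlayer(flat)                       # (b*h*w, out_channels)
        return out.reshape(b, h, w, self.out_channels).permute(0, 3, 1, 2)
```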

We plan to replace key convolutional layers within RetinaNet with our custom quanvolutional layer and measure the aforementioned performance statistics. We hope to see improvement within the statistics and hope to extend this project to other popular networks after this Hackathon.
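As a rough illustration of how such a swap could be wired up, the snippet below replaces a named convolution inside a torchvision backbone with the `QuanvLayer` sketched above. The module path `"features.0.0"` and the assumption that the quantum layer is a drop-in replacement for that convolution are hypothetical; the actual surgery on RetinaNet is more involved.

```python
from torchvision.models import mobilenet_v2


def replace_module(model, dotted_name, new_module):
    """Replace the sub-module at `dotted_name` (e.g. 'features.0.0') in place."""
    parent = model
    *path, leaf = dotted_name.split(".")
    for part in path:
        parent = getattr(parent, part)
    setattr(parent, leaf, new_module)


backbone = mobilenet_v2(weights=None)
# Swap the first convolution for the quanvolutional layer sketched above.
replace_module(backbone, "features.0.0", QuanvLayer(kernel_size=2, out_channels=4))
```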

Updates

Our initial plan was to modify MobileNetV2 to strike a balance between accuracy and the number of qubits required, using Floq to speed up the quanvolution whenever the qubit count fell within Floq's allowed range. This proved more difficult than anticipated to accomplish within two days, since a number of hyperparameters (kernel sizes, feature-map input and output depths for the convolutions, etc.) had to be tuned to find a good balance. Unfortunately, due to sudden personal delays, we were unable to devote most of the last two days to this project.

However, we managed to create a working MobileNetV2-based hybrid quantum-classical feature extraction backbone with easy-to-use support for Floq. The limited time left after those delays prevented us from training and evaluating this hybrid model. Instead, we focused on our own MNIST-oriented feature extractor (QuanvNet) and added a classification head on top (two fully connected layers).
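For context, a minimal sketch of what an MNIST classifier with a quanvolutional front end and a two-layer fully connected head could look like (reusing the `QuanvLayer` sketch above; the exact architecture of QuanvNet in our repository may differ):

```python
import torch.nn as nn


class QuanvNet(nn.Module):
    """Quanvolutional feature extractor followed by a small classification head."""

    def __init__(self, n_classes=10):
        super().__init__()
        # 28x28 MNIST images -> 14x14 quantum feature maps with 4 channels.
        self.quanv = QuanvLayer(kernel_size=2, out_channels=4)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(4 * 14 * 14, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.head(self.quanv(x))
```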

We tweaked our quanvolutional layer code to fix a bug that caused quanvolution layers using a single-layer quantum circuit (dubbed quantum-1) to outperform similar layers using a two-layer quantum circuit (dubbed quantum-2). In addition, we added support for automatically using Floq whenever an API key is passed as an argument.
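A hedged sketch of how that Floq fallback could look follows the pattern from the QHack Floq instructions (the `remote_cirq` client and the exact device arguments should be treated as assumptions here, not a description of our implementation):

```python
import pennylane as qml


def make_device(n_qubits, floq_api_key=None):
    """Use Floq's remote simulator when an API key is supplied, else simulate locally."""
    if floq_api_key is not None:
        import remote_cirq  # client distributed for Floq access during QHack
        sim = remote_cirq.RemoteSimulator(floq_api_key)
        return qml.device("cirq.simulator", wires=n_qubits,
                          simulator=sim, analytic=False)
    return qml.device("default.qubit", wires=n_qubits)
```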

Future

We fully intend to continue working on this project and hope to build a hybrid version of an established backbone so that we can compare against the literature. We also intend to keep modifying our version of the quanvolutional layer to better understand what type of quantum circuit leads to the largest improvement over the classical version. If you are interested in working with us, please do not hesitate to reach out; for more information, please visit our GitHub repository.

Thank you very much!

Presentation:

Please visit our GitHub repository for more details about our project, validation results, and instructions on how to run our code.

Source code:

Our GitHub Repository: QuobileNet

co9olguy commented 3 years ago

Thanks for the submission! We hope you have enjoyed participating in QHack :smiley:

We will be assessing the entries and contacting the winners separately. Winners will be publicly announced sometime in the next month.

We will also be freezing the GitHub repo as we sort through the submitted projects, so you will not be able to update this submission.