XanaduAI / QHack2021

Official repo for QHack—the quantum machine learning hackathon
https://qhack.ai

[Power Up] Quantum enhanced convolutional filter #42

Closed: RicardoGaGu closed this issue 3 years ago

RicardoGaGu commented 3 years ago

Team Name:

CCH

Project Description:

The emerging field of hybrid quantum-classical algorithms joins CPUs and QPUs to speed up or improve specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and that run well on today's devices. This is why we intend to explore the performance of a hybrid convolutional neural network model that incorporates a trainable quantum layer, effectively replacing a convolutional filter, on both quantum simulators and QPUs.

Our team proposes to design a trainable quantum convolutional filter within a quantum-classical hybrid neural network, well suited to the NISQ era. The design is inspired by the papers Hybrid quantum-classical Convolutional Neural Networks [1] and Quanvolutional Neural Networks [2], but generalizes these previous works to run on cloud-based QPUs.
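For concreteness, here is a minimal sketch of the kind of filter we have in mind, written in PennyLane in the spirit of [2]. It is illustrative only: the names `quantum_filter` and `quanv_layer`, the 2x2 kernel, and all hyperparameters are placeholder assumptions rather than our final design (see the linked notebook for the actual code).

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # a 2x2 kernel window -> one qubit per pixel
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_filter(pixels, weights):
    # Encode the 2x2 patch of pixel values (assumed in [0, 1]) as rotation angles.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # Trainable entangling layers act as the learnable filter.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One expectation value per qubit -> n_qubits output feature-map channels.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quanv_layer(image, weights, stride=2):
    """Slide the quantum filter over an (H, W) image, producing an
    (H // stride, W // stride, n_qubits) feature map."""
    h, w = image.shape
    out = np.zeros((h // stride, w // stride, n_qubits))
    for r in range(0, h - 1, stride):
        for c in range(0, w - 1, stride):
            patch = [image[r, c], image[r, c + 1],
                     image[r + 1, c], image[r + 1, c + 1]]
            out[r // stride, c // stride] = np.stack(quantum_filter(patch, weights))
    return out

# Apply a randomly initialised filter to a toy 4x4 image.
weights = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits), requires_grad=True)
image = np.random.rand(4, 4)
print(quanv_layer(image, weights).shape)  # (2, 2, 4)
```

The classical layers of the network then consume the resulting feature map exactly as they would the output of a classical convolution.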

Here is a list of the expected outcomes and questions this project aims to address:

Source code:

https://github.com/KetpuntoG/QFilters/blob/main/Qfilter4_enhanced%20(1).ipynb

Resource Estimate:

There are a few bottlenecks to explore in quantum-classical hybrid models: the number of learnable parameters in the ansatz, which is tied to the depth of the quantum circuits, and the number of convolutions, which grows with image size. The quantum filters will need qubit registers of roughly 9 to 30 qubits, corresponding to an NxN kernel window; 3x3 and 5x5 are typical kernel sizes in CNNs. The circuits themselves will be shallow and executed on both simulator and hardware backends (LocalSimulator and the Rigetti QPU should be sufficient) with a reasonable number of shots, although many circuit executions will be needed during training if the number of epochs and the dataset size are large. To handle the many translations of the kernel across the image, we expect to parallelize this workload on Amazon Braket during the training phase to speed it up. Another consideration is to keep the classical layers shallow enough to allow for efficient classical training. We also aim to run several benchmarks: the trade-off between the number of epochs and accuracy, the complexity/expressive power of the ansatz versus accuracy, the number of quantum versus classical parameters, and the time complexity of the hybrid training loop.
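As a back-of-the-envelope illustration of why the number of circuit executions dominates this budget, here is a small sketch; all the concrete numbers (image size, dataset size, epochs, shots) are placeholder assumptions, not figures from our benchmarks.

```python
def circuit_executions(image_size, kernel, stride, dataset_size, epochs, shots):
    """Total cost of training one quanvolutional layer:
    (kernel translations per image) x (images) x (epochs), times shots."""
    translations = ((image_size - kernel) // stride + 1) ** 2
    circuits = translations * dataset_size * epochs
    return circuits, circuits * shots

# e.g. 28x28 images, a 3x3 kernel (9 qubits), stride 2,
# 500 training images, 10 epochs, 1000 shots per circuit:
circuits, total_shots = circuit_executions(28, 3, 2, 500, 10, 1000)
print(f"{circuits:,} circuit evaluations, {total_shots:,} shots")
# 845,000 circuit evaluations -> why batched/parallel execution on Braket matters.
```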

References

[1] Hybrid quantum-classical Convolutional Neural Networks, https://arxiv.org/abs/1911.02998
[2] Quanvolutional Neural Networks, https://arxiv.org/abs/1904.04767

glassnotes commented 3 years ago

Hi @RicardoGaGu , thanks for the submission!

co9olguy commented 3 years ago

Hi @RicardoGaGu, can you confirm that your team name is "CCH" as listed on the QML Challenges scoreboard? I see there is also a team there named "|CCH>". We want to make sure we associate the correct email address with this submission.

RicardoGaGu commented 3 years ago

Yes, I confirm it! We are CCH. We are actually two different teams, although we know each other; we come from the same Spanish quantum community.

co9olguy commented 3 years ago

Thanks for your Power Up Submission, @RicardoGaGu!

To help us keep track of final submissions, we will be closing all of the [Power Up] issues. We ask you to open a new issue for your final submission. Please use this pre-formatted [Entry] Issue template. Note that for the final submission, the Resource Estimate requirement is replaced by a Presentation item.