@therooler thank you for your submission!
Thanks for your Power Up Submission @therooler !
To help us keep track of final submissions, we will be closing all of the [Power Up] issues. We ask you to open a new issue for your final submission. Please use this pre-formatted [Entry] Issue template. Note that for the final submission, the Resource Estimate requirement is replaced by a Presentation item.
Team Name:
Many body system
Project Description:
Training a quantum classifier on a classical data set is no easy feat, and one of the key issues for getting good performance is choosing a suitable ansatz that fits the problem of interest well. Due to the absence of practical guidelines for circuit design, we aim to leverage the full software stack available to the QML practitioner to find the optimal circuit architecture by a tree-based metaheuristic. Inspired by an analogous approach in classical machine learning [1], we propose Quantum Optimal Subarchitecture Estimation, or QOSE.
By combining the scalability of AWS with the versatility and performance of Xanadu's PennyLane, we can reliably train hundreds of different circuit architectures in parallel. In QOSE, we iteratively add layers to a simple base circuit, increasing the expressiveness as the circuit deepens. This construction process can be described by a directed graph in the form of a tree, where a node at depth d corresponds to a circuit with d layers. For each node in the tree, we briefly train the corresponding circuit on a subset of the actual data to get a glimpse of the expected performance.
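To make the tree construction concrete, here is a minimal pure-Python sketch of how the frontier of candidate architectures grows one layer at a time. The `Node` structure and the layer-template names are illustrative assumptions, not our actual implementation:

```python
from dataclasses import dataclass

# Hypothetical pool of layer templates a child circuit may append.
LAYER_POOL = ("rot_layer", "entangling_layer", "rzz_layer")

@dataclass(frozen=True)
class Node:
    """A node in the architecture tree: a circuit described by its layer sequence."""
    layers: tuple = ()

    @property
    def depth(self):
        return len(self.layers)

    def children(self):
        # Each child extends the current circuit by one layer from the pool.
        return [Node(self.layers + (layer,)) for layer in LAYER_POOL]

root = Node()                # the simple base circuit (depth 0)
frontier = root.children()   # all depth-1 candidate architectures
```

Each node's layer tuple is enough to rebuild the corresponding PennyLane circuit when the node is evaluated, so the tree itself stays lightweight.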
To combat the exponential increase of the search space of possible circuit architectures, we use a tree-pruning algorithm that eliminates nodes based on each circuit's quality. This quality is determined by a metric that combines inference speed, accuracy and model size into a single scalar cost. Our code can be made embarrassingly parallel by evaluating the cost of each depth-d node concurrently.
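A sketch of the pruning step, assuming the scalar cost is a weighted sum of per-circuit measurements; the particular weights, the beam width `k`, and the toy numbers below are illustrative, not values from our runs:

```python
def scalar_cost(inference_time, error, n_params,
                w_time=1.0, w_err=10.0, w_size=0.01):
    """Combine inference speed, accuracy and model size into one scalar.

    Lower is better; the weights would be tuned for the problem at hand."""
    return w_time * inference_time + w_err * error + w_size * n_params

def prune(frontier_costs, k=2):
    """Keep the k cheapest nodes at the current depth, discard the rest."""
    ranked = sorted(frontier_costs, key=lambda nc: nc[1])
    return [node for node, _ in ranked[:k]]

# Toy frontier: node label -> (inference_time, error, n_params)
frontier = {
    "A": (0.10, 0.30, 12),
    "B": (0.12, 0.10, 24),
    "C": (0.50, 0.05, 96),
}
costs = [(name, scalar_cost(*m)) for name, m in frontier.items()]
survivors = prune(costs, k=2)   # only the survivors are expanded further
```

Only the surviving nodes get children in the next round, which keeps the effective branching factor bounded as the tree deepens.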
Source code:
https://github.com/kmz4/QHACK2021/tree/main/src
Resource Estimate:
We plan to deploy our code on AWS in the upcoming days. The circuits we plan to run will be small, so we will only require the local simulator, keeping costs reasonable.
We propose to use mpi4py to handle the parallelization on AWS for evaluating individual circuits. Due to the embarrassingly parallel nature of our problem, we can achieve a large speedup when calculating the W costs used to direct the tree search.
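A minimal sketch of the round-robin work split with mpi4py; it falls back to serial execution when MPI is not available, and `evaluate_cost` is a deterministic stand-in for the short training run that produces a node's W cost:

```python
try:
    from mpi4py import MPI              # run under mpiexec for real parallelism
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:                     # serial fallback when MPI is not installed
    comm, rank, size = None, 0, 1

def evaluate_cost(arch_id):
    """Stand-in for briefly training circuit `arch_id` and returning its W cost."""
    return (arch_id * 37) % 11 / 10.0   # deterministic toy cost

candidates = list(range(8))             # depth-d frontier of architectures
# Round-robin split: each rank evaluates every size-th candidate.
local = {i: evaluate_cost(i) for i in candidates[rank::size]}

if comm is not None:
    gathered = comm.gather(local, root=0)   # collect partial results on rank 0
    costs = ({k: v for part in gathered for k, v in part.items()}
             if rank == 0 else None)
else:
    costs = local

if rank == 0:
    best = min(costs, key=costs.get)        # cheapest architecture at this depth
```

Since each cost evaluation is independent, the only communication is the final gather, which is what makes the search embarrassingly parallel.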
[1] Adrian de Wynter, "An Approximation Algorithm for Optimal Subarchitecture Extraction", 2020. arXiv:2010.08512.