Open oumjunior opened 3 years ago
Thanks for the submission! We hope you have enjoyed participating in QHack :smiley:
We will be assessing the entries and contacting the winners separately. Winners will be publicly announced sometime in the next month.
We will also be freezing the GitHub repo as we sort through the submitted projects, so you will not be able to update this submission.
Team Name:
QUANTIFY
Project Description:
The diagrammatic approach to quantum computation pioneered in [1,2] has been extended to quantum circuit compilation and optimisation [3]. The latter has been successfully applied to QNLP on NISQ machines [4, 7], instead of using Grover-like, QRAM-based approaches [5]. It has been shown that QAOA methods are approximators of universal computations such as the ones expressed in the ZX calculus. The exponentiated ZZ gates and parameterised Ry gates of QAOA resemble the trained circuits from [4] ("language diagrams into quantum circuits with phase-gates and CNOT-gates"). QNLP framed as a closest-vector problem [4, 5] shares some similarities with skip-grams and the word2vec model.
We investigate the applicability of QNLP using QAOA (implemented in PennyLane) to verify the theory from [4]. We take a somewhat reversed approach to the one from [4]: instead of starting from language diagrams, we start from skip-grams and train contexts using windows of two words extracted from sentences.
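As an illustration of the training-data step above, here is a minimal sketch of extracting skip-gram (target, context) pairs with a two-word window. The function name and sentence are our own hypothetical examples, not code from the project repository:

```python
def skipgram_pairs(sentence, window=2):
    """Return (target, context) pairs for every word, looking `window`
    words to the left and right, as in word2vec-style skip-grams."""
    words = sentence.lower().split()
    pairs = []
    for i, target in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:  # skip the target word itself
                pairs.append((target, words[j]))
    return pairs

pairs = skipgram_pairs("she kicks the old truck")
```

These pairs would then serve as the training examples whose contexts the parameterised circuit learns to predict.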
We generate country songs using the trained models. Country songs are good candidates because they repeat somewhat straightforward concepts: the corpus contains a lot of redundancy and many contexts in which the words appear.
Our trained model reflects the original language diagram of the corpus we started from. The semantics is embedded in the trained QAOA weights: the strengths of the ZZ and Ry gates encode the grammatical relations. The feasibility of our QNLP approach is tested, for the moment, using [8]. The model predicts a corpus of 31 words with 65% accuracy after 200 training rounds (10 minutes), using only 28 variables. A corpus of 84 words (61 unique) achieves 45% accuracy after 60 minutes of training. We use Google Colab. Below is a sample song (title: "She kicks") from the latter experiment (we added the punctuation).
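To make the ansatz structure concrete, below is a minimal numpy-only sketch of one QAOA-like layer of the kind described above: an exponentiated ZZ coupling followed by per-qubit parameterised Ry rotations. The two-qubit restriction and all names are our illustrative assumptions, not the project's actual PennyLane code:

```python
import numpy as np

I = np.eye(2)
Z = np.diag([1.0, -1.0])
Y = np.array([[0, -1j], [1j, 0]])

def ry(theta):
    # single-qubit rotation RY(theta) = exp(-i * theta * Y / 2)
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * Y

def zz(gamma):
    # exponentiated coupling exp(-i * gamma * Z(x)Z); since (Z(x)Z)^2 = I,
    # this is cos(gamma) * I - i sin(gamma) * Z(x)Z
    ZZ = np.kron(Z, Z)
    return np.cos(gamma) * np.eye(4) - 1j * np.sin(gamma) * ZZ

def layer(gamma, theta0, theta1):
    # one QAOA-style layer: ZZ entangler, then RY "mixer" on each qubit
    return np.kron(ry(theta0), ry(theta1)) @ zz(gamma)

U = layer(0.3, 0.5, 0.7)
```

In the trained model, the parameters `gamma`, `theta0`, `theta1` are the weights whose learned strengths would encode the grammatical relations.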
Presentation:
The project description section above gives a fairly detailed account; in addition, references are provided there.
Source code:
https://github.com/oumjunior/Qountry-songs