Travis-S-IBM opened 2 years ago
Hi :) I'm interested in this project. I'm trying to use AI to build strong opponents in my games (QPokemon fight, QNim) using Grover and/or QAOA.
I'm always eager to learn more algorithms in order to create more optimal automated processes; I'm very interested in R&D and I want to improve my skills in the research field.
Revision to qamp_checkpoint.pdf
The summary:
Since the first checkpoint, our focus has been on finishing the first part of our workflow and finding a way to run it on Linux on Z:
- Quantum kernel flow CLI - Given a matrix size or a set of data points, generate the circuits, send them to the Runtime, and collect each circuit's result along with telemetry data.
- Catching error messages from the Runtime - To study the Runtime's limits, we sent oversized payloads to exceed what the Runtime accepts, as well as runs that exceed its time limit. We can now catch those issues and record them.
- Telemetry data file - Collecting data from the Runtime such as timing, job ID, Runtime program, payload size, error messages, ... and generating a data file containing all of this along with the parameters we sent.
- Kernel metadata file - Collecting the result of each circuit of a single run and matching that run's parameters by crossing the job ID between the kernel metadata files and the telemetry file.
- Generating data for the telemetry and kernel files - Launching the Quantum kernel flow CLI multiple times to accumulate data (fun fact: we ran out of GitHub Actions minutes because of heavy use).
- Linux on Z - Creating a Podman image so our entire workflow can run on a Linux on Z system, in order to generate more data and run the matrix completion more easily and faster.
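The flow described in the bullet points above can be sketched end to end. The snippet below is an illustrative Python outline only: `submit_to_runtime` is a hypothetical placeholder for the real Qiskit Runtime call, and the field names in the telemetry record are assumptions, not the project's actual schema.

```python
import json
import time
import uuid

def submit_to_runtime(payload):
    # Placeholder: a real implementation would send the kernel circuits to
    # Qiskit Runtime and wait for the job to finish. NOT a real API.
    return {"job_id": str(uuid.uuid4()), "results": [0.5] * payload["n_circuits"]}

def run_kernel_flow(n_points, telemetry_path="telemetry.jsonl"):
    # One circuit per pairwise kernel entry (upper triangle of the matrix).
    payload = {"n_circuits": n_points * (n_points - 1) // 2}
    t0 = time.monotonic()
    reply = submit_to_runtime(payload)
    # Telemetry record: job ID, payload size, elapsed time, parameters sent.
    record = {
        "job_id": reply["job_id"],
        "payload_size": len(json.dumps(payload)),
        "elapsed_s": time.monotonic() - t0,
        "n_circuits": payload["n_circuits"],
    }
    with open(telemetry_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return reply["results"], record
```

Each invocation appends one JSON-lines record, so repeated runs accumulate a telemetry file that can later be joined with the kernel metadata on the job ID.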
More:
- Unit tests - We added more tests to reach a well-tested codebase (>80% coverage).
- Runtime study - We started creating plots from our telemetry file to study the limits of Qiskit Runtime, and we are setting up the notebooks to rerun automatically whenever new data is uploaded (in progress).
- Shared data - To work together without overwriting each other's data, we set up NFS storage and added a CLI command to our program that merges data files, avoiding conflicts, with backups on GitHub.
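A merge step like the one described for shared data might look like the following minimal sketch, assuming JSON-lines telemetry files keyed by a `job_id` field (the file format and field name are assumptions, not necessarily the project's actual ones):

```python
import json

def merge_telemetry(paths, out_path):
    # Deduplicate records on job_id so two contributors' files can be
    # combined without conflicts; later files win on duplicates.
    seen = {}
    for path in paths:
        with open(path) as f:
            for line in f:
                rec = json.loads(line)
                seen[rec["job_id"]] = rec
    with open(out_path, "w") as f:
        for rec in seen.values():
            f.write(json.dumps(rec) + "\n")
    return len(seen)
```

Keying on the job ID makes the merge idempotent, so re-running it after pulling a teammate's file from the NFS share cannot duplicate records.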
Remaining work on our TODO list, in no particular order:
- Matrix completion CLI - not started yet.
- Runtime limitations - continue studying the Runtime's behaviour.
- Documentation - describe how our program works and how to use it; the code is already documented, but the endpoints/CLI are not, and we need end-user documentation, not only developer docs.
PDF of checkpoint 2: qamp_checkpoint.pdf
@mickahell Thanks for sharing the project board. Can you upload the final showcase presentations as well?
PDF of checkpoint 3
@HuangJunye ;)
Description
A recent paper on near-term, quantum-enhanced machine learning (of which I am an author) studied a theoretical bottleneck to using quantum kernels (similarity measures) in practice for ML applications involving the generation of data points. Because new data is being created in the application, there is a need to send not only that new data, but also all of the old data, to a quantum system in order to evaluate the kernel.
The paper showed how classical matrix completion can be used to alleviate that need. It also indicated that the amount of old data which needs to be sent relates to a property of the kernel matrix representing all pairwise similarity measures; namely, its rank.
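To illustrate the rank property, here is a small numpy sketch using a classical feature map as a stand-in for the quantum one; the dimension `d` and all names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a feature map of dimension d: each data point x_i is mapped
# to a feature vector phi(x_i), and K[i, j] = <phi(x_i), phi(x_j)>.
d, n = 3, 50
Phi = rng.normal(size=(n, d))  # rows are the feature vectors phi(x_i)
K = Phi @ Phi.T                # pairwise similarity (kernel) matrix

# The Gram matrix of d-dimensional features has rank at most d, no matter
# how many data points n there are.
rank = np.linalg.matrix_rank(K)
print(rank)
```

Because the rank is bounded by the feature dimension rather than the number of data points, only a small subset of the kernel entries pins down the whole matrix, which is what makes completion attractive.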
While the paper successfully identified and proposed a solution to this bottleneck, it did not study what it would mean to operationalize the solution in practice. That is, it did not consider the relevant latencies and timescales for a workflow involving both quantum computation of some kernel values and classical computation to fill in the rest.
This QAMP project would study exactly this. We would study how, in practice, this quantum-enhanced ML workflow would operate. We would also perform a numerical investigation of the workflow's timescales. In particular, we would be especially interested in the tradeoff between how much classical compute time is needed for the matrix completion versus the total round-trip time for the quantum kernel algorithm.
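As a rough illustration of the classical side of that tradeoff, the sketch below completes a synthetic low-rank kernel matrix from a subset of its entries via iterative rank-r SVD imputation; this is a generic completion method chosen for brevity, not necessarily the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-2 kernel matrix K = F F^T (sizes are illustrative).
F = rng.normal(size=(30, 2))
K = F @ F.T

# Observe a random subset of entries (symmetric mask, diagonal kept).
mask = rng.random(K.shape) < 0.6
mask = mask | mask.T
np.fill_diagonal(mask, True)

def complete(K_obs, mask, rank=2, iters=200):
    # Iterative SVD imputation: fill the missing entries with the current
    # rank-r estimate, re-project to rank r, and repeat.
    X = np.where(mask, K_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r fit
        X = np.where(mask, K_obs, X_low)              # keep observed entries
    return X_low

K_hat = complete(K, mask)
err = np.linalg.norm(K_hat - K) / np.linalg.norm(K)
print(f"relative completion error: {err:.3e}")
```

Timing a loop like this against the round-trip time of the corresponding Runtime jobs is one concrete way to quantify the tradeoff the project targets.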
As part of this project, I hope we would leverage the Qiskit Runtime and any existing programs compatible with it. I do not envision we would write our own Runtime programs.
Paper: Kernel Matrix Completion for Offline Quantum-Enhanced Machine Learning
Deliverables
Mentors details
Number of mentees
2
Type of mentees
Recommended mentee background: