PennyLaneAI / comments


blog/2024/03/benchmarking-near-term-quantum-algorithms-is-an-art/ #19

Open utterances-bot opened 2 weeks ago

utterances-bot commented 2 weeks ago

Benchmarking near-term quantum algorithms is an art | PennyLane Blog

Scientific literature often describes quantum machine learning models outperforming classical models. In our most recent paper, we show how this view is misleading — there is a subtle art to benchmarking near-term quantum algorithms, and quantum doesn't always beat classical.

https://pennylane.ai/blog/2024/03/benchmarking-near-term-quantum-algorithms-is-an-art/

melinda-quigg commented 2 weeks ago

I have tried to reproduce the results from the paper, but the code hangs due to deadlock issues. I was using the latest version on GitHub, although I see references to using version 0.1 for reproducing the paper; I could not figure out which commit that refers to. Any information would be great! Also, apologies if this is not the correct location for this type of comment.

josephbowles commented 2 weeks ago

Hi Melinda, the latest version on main is indeed the one you should be using; I've added a tag as v0.1 now, thanks for pointing that out. Where is the code hanging? Are you able to install the package for example?

melinda-quigg commented 2 weeks ago

Hi Joseph,

Thanks for getting back to me so quickly!

I am trying to reproduce the results from the paper for the IQPKernelClassifier and the ProjectedQuantumKernel. I have followed the instructions in the README file for installing the environment and reproducing the CSV files. I have tried running a few of the datasets on both models, and they appeared to hang for different reasons (one warns about Windows not being supported during initialization, and the other warns about a deadlock issue).

I have attached a Jupyter notebook to show all of the steps and error messages, but if you can't open this file for security reasons, I can paste a screenshot of what has gone wrong.

Note: the CSV file came from one of the files generated by my team member, but he renamed it test.csv, so I will have to find out which file he renamed (it is also attached to this email). However, I tried running these models on some of the CSVs from your paper and had the same issues.

Please note: I am running from a Windows 11 laptop in the PyCharm Professional environment. Other team members were running from qBraid on their laptops and saw the same behaviour, where it appeared to hang in exactly the same way. It is possible that these runs take hours for a single file; if that is the case, perhaps it is not hanging at all and we simply killed it too early, after 45 minutes.

Thank you, Melinda


melinda-quigg commented 2 weeks ago

I replied via email and I don't think my files uploaded, so I've zipped them to this comment. Thanks! benchmarking_files.zip

josephbowles commented 2 weeks ago

The reason the code is hanging is likely that the hyperparameter search takes significantly longer than 45 minutes. From your screenshot we see a single fit takes around 60 seconds ("Time single run 60.26..."). The hyperparameter grid you are searching over has 3 x 4 x 3 x 3 = 108 unique combinations, and each combination involves fitting the model 5 times due to cross-validation. So in total you need 540 fits, which will take roughly 9 CPU hours. Does the code finish for smaller models (say 2 or 3 qubits)?
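The back-of-the-envelope estimate above can be sketched in a few lines. The grid sizes (3 x 4 x 3 x 3) and the 5 cross-validation folds are taken from the comment; the 60-second single-fit time is the figure quoted from the screenshot and will vary by machine.

```python
import math

# Grid sizes per hyperparameter and CV folds, as described in the thread.
grid_sizes = [3, 4, 3, 3]
cv_folds = 5
seconds_per_fit = 60  # approximate, from "Time single run 60.26..."

combinations = math.prod(grid_sizes)          # 108 unique settings
total_fits = combinations * cv_folds          # 540 fits in total
total_hours = total_fits * seconds_per_fit / 3600

print(combinations, total_fits, total_hours)  # 108 540 9.0
```

This is a serial, single-core estimate; parallelising the search (e.g. scikit-learn's `n_jobs`) divides the wall-clock time by the number of workers.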

For the other issues I'm not so sure (I'm using a Mac). Is it JAX that is throwing the error message or some other package, and can you post it here?

melinda-quigg commented 2 weeks ago

Thanks for the detailed response, that totally makes sense. I did try running it again yesterday for over 10 hours and it was still going, using up most of my memory, so I may not have a powerful enough laptop to run such a large dataset. I am trying to find a dataset that would only use 2 or 3 qubits, but have not found one yet. Are there any that you would recommend from the paper? Thanks!

josephbowles commented 1 week ago

I would check out the dataset-generating functions in src/qml_benchmarks/data. If you import the functions there, you can generate datasets with fewer features (and thus fewer qubits). Alternatively, you can run the scripts in paper/benchmarks to generate datasets identical to the ones used in the paper.
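For anyone following along, here is a minimal sketch of producing a tiny 2-feature (so 2-qubit) dataset as a CSV. This is a hypothetical stand-in for the generators in src/qml_benchmarks/data, not the repo's actual API: the real function names and the expected CSV column layout should be checked against that module.

```python
import numpy as np

# Build a small linearly separable toy dataset: 2 features -> 2 qubits.
# (Stand-in for the repo's own generators; check src/qml_benchmarks/data
# for the real functions and CSV conventions.)
rng = np.random.default_rng(42)
n_samples, n_features = 50, 2
X = rng.uniform(-1, 1, size=(n_samples, n_features))
y = np.where(X.sum(axis=1) > 0, 1, -1)  # labels from a simple linear rule

# Save as "feature_0, feature_1, label" rows.
data = np.hstack([X, y.reshape(-1, 1)])
np.savetxt("tiny_dataset.csv", data, delimiter=",")
```

A dataset this size should make a single fit take seconds rather than minutes, which makes it easy to confirm whether the pipeline runs at all before scaling up.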