bagmk / Quantum_Machine_Learning_Express

This project is part of the Qiskit mentorship program and replicates two papers, arXiv:1905.10876 and arXiv:2003.09887, in the Qiskit environment. We evaluate the parameterized quantum circuits, reproducing the expressibility and entangling capability of the 19 circuits as well as their classification accuracy.
MIT License

Problem about PQC Four-Qubit.ipynb #16

Open theblackigod1015 opened 3 years ago

theblackigod1015 commented 3 years ago

[two screenshots attached]

It seems this program is trying to replicate the results of the paper on evaluating parameterized quantum circuits. According to the paper, using the L1 loss, a gradient-descent optimizer, and circuit 2 to learn dataset 1a should give a rather good result, around 95% accuracy. But following your program, the results are sometimes poor and the accuracy is about 50%. I have looked for the reason for days and still don't know why. Different values of a and c for the learning rate seem to give different results. I wonder if there is any code or page that could explain this.
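For context, the a and c mentioned here are presumably the gain constants of an SPSA-style optimizer; if so, they set decaying step and perturbation sizes. A sketch using Spall's standard exponents (an assumption about this notebook, not confirmed from the repo):

```python
def spsa_gains(a, c, k, A=0.0, alpha=0.602, gamma=0.101):
    """Spall's standard SPSA gain sequences: a_k scales the gradient step
    (the effective learning rate), c_k the finite-difference perturbation."""
    a_k = a / (k + 1 + A) ** alpha
    c_k = c / (k + 1) ** gamma
    return a_k, c_k
```

Both sequences shrink with the iteration k, so too large an `a` overshoots early while too small an `a` stalls; either failure mode can show up as the ~50% accuracy described above.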

bagmk commented 3 years ago

Hi thank you for reaching out to me Shufeng.

What I did was change the hyperparameters and optimize the circuits.

Train the circuit on the training data with the code, pick up the parameters from the model, and apply them to the validation data to obtain the accuracy. Repeating this process over different hyperparameter searches, we selected the model with the best validation accuracy.

Those model parameters are then applied to the test data, where we report the accuracy.
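The train/validate/test selection loop described above might be sketched like this (all function and argument names are placeholders, not the repository's actual API):

```python
def select_model(train_fn, accuracy_fn, train_data, val_data, lr_grid):
    """Train once per hyperparameter setting, score the trained parameters
    on the validation data, and keep the best model."""
    best = {"lr": None, "params": None, "val_acc": -1.0}
    for lr in lr_grid:
        params = train_fn(train_data, lr)      # fit circuit parameters
        acc = accuracy_fn(params, val_data)    # validation accuracy
        if acc > best["val_acc"]:
            best = {"lr": lr, "params": params, "val_acc": acc}
    return best
```

The winning parameters would then be evaluated once on the held-out test data to get the reported accuracy.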

Good luck!


-- Saesun Kim, IBM Qiskit Localization Co-Lead & Research Assistant, Center for Quantum Research and Technology, Homer L. Dodge Department of Physics and Astronomy, The University of Oklahoma

theblackigod1015 commented 3 years ago

Thanks for your reply. I understand what you commented before, but I still don't know why the results are bad even though I followed your steps. When I pick circuit 14 for dataset 1a, according to the paper the accuracy should be around 88.8%. I have no idea about the result I get; I have attached it below. The currentparams for the circuit become stable after 2-3 iterations, yet the loss is still high. [two screenshots attached]

bagmk commented 3 years ago

What range of learning rates did you try?


theblackigod1015 commented 3 years ago

I used learning rates in the range 0-4 to find the best one. Part of the results is attached below. The best accuracy occurred at lr=3.61, so that's the lr value I picked. [screenshot attached]
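A sweep over that range could look like the following sketch (the `evaluate` callback, which would stand in for a full train-plus-validate run, is an assumption):

```python
import numpy as np

def sweep_learning_rates(evaluate, low=0.0, high=4.0, n=41):
    """Score validation accuracy over a grid of learning rates in
    [low, high]; return all (lr, acc) pairs plus the best point."""
    lrs = np.linspace(low, high, n)
    results = [(float(lr), float(evaluate(lr))) for lr in lrs]
    best_lr, best_acc = max(results, key=lambda t: t[1])
    return results, best_lr, best_acc
```

Plotting the full (lr, acc) list, rather than keeping only the maximum, makes it easier to spot the zero-accuracy runs discussed below.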

bagmk commented 3 years ago

I assume the value next to the learning rate is the accuracy?

If so, you have many zeros and near-zero values, right?

Somehow it converges to the other side. Because this is binary classification, a program that is not learning anything should converge to 0.5; since yours converges to 0, I think it is doing some classification, but I am not sure why it is going the other way.

Can you plot the result with zero accuracy?
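An accuracy near 0 rather than 0.5 usually means the decision boundary was learned but with the class labels inverted, so flipping the predicted labels recovers the accuracy. A minimal illustration (hypothetical labels, not from the notebook):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true binary labels."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

y_true = np.array([0, 1, 1, 0, 1, 0])
y_pred = 1 - y_true                      # boundary learned, labels swapped
print(accuracy(y_true, y_pred))          # 0.0 -- worse than chance
print(accuracy(y_true, 1 - y_pred))      # 1.0 after flipping predictions
```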


theblackigod1015 commented 3 years ago

I picked the learning rates 1.0, 2.25, and 3.24 that converged to 0 before and ran the program again. Something interesting happened. For lr=1.0 the validation accuracy is about 75%, but for lr=2.25 or 3.24 the accuracy is 49%, which means no classification was done. I'll try more learning rates and send you the results. [screenshot attached]

bagmk commented 3 years ago

I was also having some issues getting good accuracy.

If you go here: https://github.com/bagmk/Quantum_Machine_Learning_Express/tree/main/Machine%20Learning%20PQC/Four_Qubit_pytorch/Learning_Rate_Data

you can see my choices from validating the circuits.
