This example passes 3-qubit circuits to the simulator. GPUs are not good at simulating small-qubit circuits because of GPU overheads. Aer supports batching multiple shots of a small circuit to accelerate it on GPU, but does not currently support batching multiple circuits (this case passes multiple circuits to Aer). I would like to think about how we can speed up this kind of problem on GPU.
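For reference, shot batching on GPU is controlled through the simulator options; here is a minimal sketch (assuming the `batched_shots_gpu` option name, which may differ between Aer releases):

```python
# Sketch: batch many shots of one small circuit on the GPU.
# `batched_shots_gpu` is an AerSimulator option; the exact name and default
# may vary between qiskit-aer releases, so treat this as illustrative.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

backend = AerSimulator(device="GPU", batched_shots_gpu=True)

# Small 3-qubit circuit like the ones this example passes in
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

result = backend.run(transpile(qc, backend), shots=8192).result()
print(result.get_counts())
```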
Let me close this issue since about a month has passed since @doichanj commented. Please create a new issue if you need more clarification.
Informations
What is the current behavior?
Aer does not seem to be using the GPU's full clock speed. It should run at the full 2520 MHz, but it only reaches 300-400 MHz while training a PyTorch neural network, and training is now even slower than on the CPU (13700K). It used to take about 1 minute per epoch, but now it needs around 10 minutes, while the CPU needs 9 minutes.
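For reference, the SM clock can be polled while training runs; this is a minimal sketch, assuming the `pynvml` package is installed (`nvidia-smi` reports the same values):

```python
# Sketch: poll the SM clock and GPU utilization while training runs in
# another process. Assumes the `pynvml` bindings are installed.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # RTX 4080 is device 0 here

for _ in range(10):
    clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)  # MHz
    util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu              # %
    print(f"SM clock: {clock} MHz, utilization: {util} %")
    time.sleep(1)

pynvml.nvmlShutdown()
```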
Steps to reproduce the problem
I can't give exact reproduction steps because I only noticed the problem after retraining a PyTorch hybrid model and seeing the training time go up significantly, so I ran another test to compare GPU and CPU speed, sketched below.
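The comparison was roughly along these lines (a minimal sketch, not the exact script; it times the same small circuit on Aer's CPU and GPU devices):

```python
# Sketch: compare wall-clock time for the same small circuit on Aer's CPU
# and GPU devices. The GPU device requires the qiskit-aer-gpu package.
import time

from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Small 3-qubit circuit, similar in size to what the hybrid model uses
qc = QuantumCircuit(3)
qc.h(0)
qc.cx(0, 1)
qc.cx(1, 2)
qc.measure_all()

for device in ("CPU", "GPU"):
    backend = AerSimulator(device=device)
    tqc = transpile(qc, backend)
    start = time.perf_counter()
    backend.run(tqc, shots=10_000).result()
    print(device, f"{time.perf_counter() - start:.3f} s")
```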
I also ran this:
Output:
What is the expected behavior?
The GPU should give a significant improvement in training time, since I am using an RTX 4080.
Suggested solutions
I'm not sure what happened, but I guess something is wrong under the hood, since I have no such issue with PennyLane and a pre-trained PyTorch model.
Any suggestion will be appreciated!!!