mit-han-lab / torchquantum

A PyTorch-based framework for Quantum Classical Simulation, Quantum Machine Learning, Quantum Neural Networks, and Parameterized Quantum Circuits, with support for easy deployment on real quantum computers.
https://torchquantum.org
MIT License

support more gradient estimation methods #133

Closed Hanrui-Wang closed 1 year ago

Hanrui-Wang commented 1 year ago

We need your help enriching the gradient computation methods in torchquantum to facilitate research on parameterized quantum circuits. Right now torchquantum supports backpropagation to obtain the gradient. How about other methods such as finite differences, parameter shift, the Hadamard test, etc.?

Please help implement other gradient estimation methods in torchquantum.

Hadamard test details: https://en.wikipedia.org/wiki/Hadamard_test_(quantum_computation)
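
For intuition, here is a minimal NumPy sketch (illustrative only, not torchquantum code) of how the Hadamard test recovers Re<psi|U|psi> from the ancilla's <Z> expectation:

```python
import numpy as np

def hadamard_test_real(U, psi):
    """Exactly simulate the Hadamard test and return Re<psi|U|psi>.

    U   : (d, d) unitary matrix acting on the target register
    psi : (d,) normalized state vector of the target register
    """
    d = len(psi)
    # Ancilla qubit starts in |0>, target register in |psi>.
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    I = np.eye(d)
    # Controlled-U: identity when the ancilla is |0>, U when it is |1>.
    CU = np.block([[I, np.zeros((d, d))], [np.zeros((d, d)), U]])

    state = np.kron(H, I) @ state  # H on ancilla
    state = CU @ state             # controlled-U
    state = np.kron(H, I) @ state  # H on ancilla again

    # <Z> on the ancilla: P(0) - P(1) = Re<psi|U|psi>.
    p0 = np.sum(np.abs(state[:d]) ** 2)
    p1 = np.sum(np.abs(state[d:]) ** 2)
    return p0 - p1

# Quick check: for U = RZ(theta) and |psi> = |+>, Re<psi|U|psi> = cos(theta/2).
theta = 0.7
RZ = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(hadamard_test_real(RZ, plus), np.cos(theta / 2))
```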

Please don't hesitate to ask and discuss here for any questions!

moustafa7zada commented 1 year ago

This sounds interesting. So I will use the Hadamard test to estimate the expectation values and obtain the cost, then calculate the gradient and feed it to the classical optimizer, right? It's my first time doing this, so excuse me.
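
In plain PyTorch terms, I imagine the outer loop would look roughly like this sketch, where estimate_gradient stands in for a Hadamard-test-based gradient estimator (a placeholder here, not an existing function):

```python
import torch

def estimate_gradient(params):
    # Placeholder: in the real implementation, run Hadamard-test circuits
    # here to estimate expectation values and assemble d(cost)/d(params).
    with torch.no_grad():
        return torch.sin(params)  # dummy values just to make this runnable

params = torch.tensor([0.1, 0.2], requires_grad=True)
optimizer = torch.optim.Adam([params], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    # Feed the externally estimated gradient to the classical optimizer.
    params.grad = estimate_gradient(params)
    optimizer.step()
```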

Hanrui-Wang commented 1 year ago

Hi moustafa7zada,

Yes, that's right. You may also refer to the Hadamard test support in PennyLane:

https://pennylane.ai/qml/demos/tutorial_vqls.html#id2
https://docs.pennylane.ai/en/stable/code/api/pennylane.gradients.hadamard_grad.html
https://snyk.io/advisor/python/PennyLane/functions/pennylane.Hadamard
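
A minimal usage sketch of PennyLane's hadamard_grad transform (assuming a recent PennyLane version; note that the device needs one spare wire for the auxiliary qubit of the Hadamard test):

```python
import pennylane as qml
from pennylane import numpy as np

# Wire 2 is left unused so hadamard_grad can take it as the auxiliary qubit.
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

params = np.array([0.1, 0.2], requires_grad=True)
grads = qml.gradients.hadamard_grad(circuit)(params)
print(grads)
```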

moustafa7zada commented 1 year ago

Hey @Hanrui-Wang, I have some queries I was hoping you could help me with:

1. As far as I have seen, the Hadamard test is not really a "gradient estimation method" in itself; it is just a way of calculating the expectation value, from which the gradient is then taken (as I saw in PennyLane's repo). So does it even count as a NEW METHOD?
2. The parameter shift rule is indeed a good method to add, but it's already implemented in the repo and it's pretty well written ^_^
3. I recently saw a very efficient way of calculating the gradient, but it only works on classical simulators. Should I work on that?

Hanrui-Wang commented 1 year ago

Hi moustafa7zada,

Thank you for your interest in this issue. For your questions:

  1. Yes, you may provide a function that calculates the expectation value with the Hadamard test method and then takes the gradients, as PennyLane does.
  2. We are looking for a more general interface for the parameter shift rule method: say, a function that takes an arbitrary circuit with specified trainable parameters and uses parameter shift to obtain the gradients (see the sketch below).
  3. Yes, this new method is also good to implement; the basic idea of this issue is to enrich the methods of gradient calculation.
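
As a rough starting point, here is a minimal sketch of such an interface, treating the circuit as a black-box expectation function (expectation_fn is a hypothetical interface, not an existing torchquantum API):

```python
import torch

def param_shift_grad(expectation_fn, params, shift=torch.pi / 2):
    """Estimate d<E>/d(params) with the parameter shift rule.

    expectation_fn : callable mapping a 1-D parameter tensor to a scalar
                     expectation value (hypothetical circuit interface)
    params         : 1-D tensor of trainable circuit parameters
    shift          : the rule is exact at shift = pi/2 for gates generated
                     by a Pauli operator divided by two
    """
    grads = torch.zeros_like(params)
    denom = 2.0 * torch.sin(torch.tensor(shift))
    for i in range(len(params)):
        shifted = params.detach().clone()
        shifted[i] += shift
        plus = expectation_fn(shifted)
        shifted[i] -= 2.0 * shift
        minus = expectation_fn(shifted)
        grads[i] = (plus - minus) / denom
    return grads

# Toy check with E(theta) = <Z> after RY(theta) on |0>, i.e. cos(theta).
expectation = lambda p: torch.cos(p[0])
print(param_shift_grad(expectation, torch.tensor([0.3])))  # ~ -sin(0.3)
```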

Let me know if you have any questions!

Gopal-Dahale commented 1 year ago

Hi @Hanrui-Wang, can you guide me to where backpropagation is implemented, or in which file the gradients are calculated?

Hanrui-Wang commented 1 year ago

Hi Gopal-Dahale,

Backpropagation is actually a built-in functionality of PyTorch via autograd, so as long as the computation is implemented with PyTorch data structures and operations, it can perform gradient backpropagation.
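
A toy illustration (plain PyTorch rather than torchquantum code): build the state and the expectation value out of differentiable torch operations, and autograd provides the gradient for free.

```python
import torch

# RY(theta)|0> = [cos(theta/2), sin(theta/2)], built from torch ops only.
theta = torch.tensor(0.3, requires_grad=True)
state = torch.stack([torch.cos(theta / 2), torch.sin(theta / 2)])

# <Z> = |amp_0|^2 - |amp_1|^2 = cos(theta)
expval_z = state[0] ** 2 - state[1] ** 2

expval_z.backward()
print(theta.grad)  # -sin(0.3), computed by autograd
```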

dtlics commented 1 year ago

Hi Hanrui, this looks interesting!

Micheallscarn commented 1 year ago

Hi @Hanrui-Wang, well, I just noticed that someone was faster than me :) Congrats though!

Hanrui-Wang commented 1 year ago

Congrats @dtlics and @Micheallscarn on getting the pull requests merged! I will close this issue for now since UnitaryHack has come to an end, but feel free to continue refining the parameter shift and Hadamard test gradient estimations. Thank you for your contributions!

natestemen commented 1 year ago

@Hanrui-Wang should this issue be split between @dtlics and @Micheallscarn? I only see one PR attached to the issue.

Hanrui-Wang commented 1 year ago

Hi @natestemen,

Yes, we have two PRs for this issue: #151 from @dtlics and #155 from @Micheallscarn. Thanks!