Hanrui-Wang closed this issue 1 year ago.
This sounds interesting. So I will use the Hadamard test to estimate the expectation values and obtain the cost, then calculate the gradient and feed it to the classical optimizer, right? It's my first time doing this, so excuse me.
Hi moustafa7zada,
Yes, that's right. You may also refer to the Hadamard test supported in PennyLane:
https://pennylane.ai/qml/demos/tutorial_vqls.html#id2
https://docs.pennylane.ai/en/stable/code/api/pennylane.gradients.hadamard_grad.html
https://snyk.io/advisor/python/PennyLane/functions/pennylane.Hadamard
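For readers following along, the core of the Hadamard test is simple: an ancilla qubit is put in superposition, a controlled-U is applied, and the ancilla measurement statistics give P(0) - P(1) = Re⟨ψ|U|ψ⟩. Below is a minimal NumPy sketch of that identity using exact state vectors (no sampling noise); this is an illustration only, not torchquantum's or PennyLane's actual implementation.

```python
import numpy as np

def hadamard_test_real(U, psi):
    """Compute Re<psi|U|psi> via the Hadamard test circuit, simulated exactly.

    Circuit: ancilla |0> -- H -- controlled-U -- H -- measure.
    Then P(ancilla=0) - P(ancilla=1) = Re<psi|U|psi>.
    """
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    dim = U.shape[0]
    # Joint state: ancilla (first tensor factor) in |0>, system in |psi|
    state = np.kron(np.array([1.0, 0.0]), psi).astype(complex)
    # Controlled-U: apply U only on the ancilla=|1> branch
    CU = np.block([[np.eye(dim), np.zeros((dim, dim))],
                   [np.zeros((dim, dim)), U]])
    Hfull = np.kron(H, np.eye(dim))
    state = Hfull @ CU @ Hfull @ state  # rightmost gate acts first
    # Ancilla measurement probabilities
    p0 = np.linalg.norm(state[:dim]) ** 2
    p1 = np.linalg.norm(state[dim:]) ** 2
    return p0 - p1
```

On hardware P(0) and P(1) would be estimated from repeated shots rather than read off the state vector, so the returned value would carry statistical noise.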
Hey @Hanrui-Wang, I have some queries I was hoping you could help me with:
1. As far as I have seen, the Hadamard test is not really a "gradient estimation method" by itself; it is just a way of calculating the expectation value, from which the gradient is then taken (as I saw in PennyLane's repo). So does it even count as a NEW method?
2. The parameter-shift rule is indeed a good method to add, but it's already implemented in the repo and it's pretty well written ^_^
3. I recently saw a very efficient way of calculating the gradient, but it only works on classical simulators. Should I work on that?
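For reference, the parameter-shift rule mentioned in point 2 evaluates the same circuit at two shifted parameter values and combines the results; for gates generated by a Pauli operator this gives the exact gradient. A minimal NumPy sketch on a toy single-qubit circuit (E(θ) = ⟨0|RY(θ)† Z RY(θ)|0⟩ = cos θ) — an illustration of the rule, not torchquantum's API:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expectation(theta):
    """E(theta) = <0| RY(theta)^† Z RY(theta) |0> = cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return float(psi @ Z @ psi)

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Exact gradient via two extra circuit evaluations (Pauli-generated gate)."""
    return (expectation(theta + shift) - expectation(theta - shift)) / 2
```

Unlike finite differences, the two evaluations here use a macroscopic shift (π/2), so the rule is robust to shot noise on real devices.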
Hi moustafa7zada,
Thank you for your interest in this issue. For your questions:
let me know if you have any questions!
Hi @Hanrui-Wang, Can you guide me on where the backpropagation is implemented or in which file gradients are calculated?
Hi Gopal-Dahale,
Backpropagation is actually built-in functionality of PyTorch via autograd, so as long as the computation is implemented with PyTorch data structures and operations, gradients can be propagated backward automatically.
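To make that concrete, here is a minimal sketch of how autograd works, using a scalar stand-in for a circuit expectation value (this is generic PyTorch usage, not torchquantum-specific code):

```python
import torch

# Scalar stand-in for a circuit expectation: E(theta) = cos(theta),
# built entirely from torch ops so autograd can track the computation.
theta = torch.tensor(0.3, requires_grad=True)
energy = torch.cos(theta)  # pretend this is the circuit's expectation value
energy.backward()          # reverse-mode autodiff through the graph
print(theta.grad)          # holds dE/dtheta = -sin(theta)
```

In torchquantum the same mechanism applies: because the simulated state vector and gate operations are torch tensors and torch ops, calling `.backward()` on the final cost populates `.grad` on every trainable parameter.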
Hi Hanrui, this looks interesting
Hi @Hanrui-Wang, well, I just noticed that someone was faster than me :) Congrats though!
Congrats @dtlics and @Micheallscarn on merging the pull requests! I will close this issue for now since UnitaryHack has come to an end, but feel free to continue refining the parameter-shift and Hadamard-test gradient estimations. Thank you for your contributions!
@Hanrui-Wang should this issue be split between @dtlics and @Micheallscarn? I only see one PR attached to the issue.
Hi @natestemen,
Yes, we have two PRs for this issue: #151 from @dtlics and #155 from @Micheallscarn. Thanks!
We need your help enriching the gradient computation methods in torchquantum to facilitate research on parameterized quantum circuits. Right now torchquantum supports backpropagation to obtain gradients. How about other methods such as finite differences, the parameter-shift rule, and the Hadamard test?
Please help implement other gradient estimation methods in torchquantum.
hadamard test details: https://en.wikipedia.org/wiki/Hadamard_test_(quantum_computation)
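Of the methods listed, finite differences is the simplest to prototype: treat the circuit as a black box and approximate the gradient with a central difference. A minimal sketch, using cos(θ) as a stand-in for the circuit's expectation value (not torchquantum code):

```python
import numpy as np

# Black-box stand-in for a circuit expectation value:
# a single RY(theta) rotation measured in Z on |0> gives cos(theta).
def expectation(theta):
    return np.cos(theta)

def finite_diff_grad(theta, eps=1e-5):
    """Central finite difference: O(eps**2) accurate, works with any backend."""
    return (expectation(theta + eps) - expectation(theta - eps)) / (2 * eps)
```

The trade-off versus the parameter-shift rule: finite differences only needs black-box evaluations, but its tiny shift makes it sensitive to shot noise on real hardware, whereas the parameter-shift rule is exact for Pauli-generated gates.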
Please don't hesitate to ask and discuss any questions here!