Closed: yoichiiz2512 closed this issue 2 years ago
Thanks for bringing this to our attention @yoichiiz2512. We're looking into the problem; in the meantime you can use `diff_method="backprop"`, as that seems to show a decreasing loss with the code you provided.
The problem may have something to do with second derivatives, as setting `diff_method="parameter-shift", max_diff=2` also gives a decreasing loss.
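For reference, here is a minimal sketch of the two workarounds. The circuit and parameters below are illustrative placeholders, not the code from the report:

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=1)

# Option 1: backpropagation on a simulator supports derivatives of any order.
@qml.qnode(dev, interface="torch", diff_method="backprop")
def circuit_backprop(x, weights):
    qml.RY(x, wires=0)
    qml.RX(weights[0], wires=0)
    return qml.expval(qml.PauliZ(0))

# Option 2: parameter-shift, with second-order derivatives requested explicitly.
@qml.qnode(dev, interface="torch", diff_method="parameter-shift", max_diff=2)
def circuit_shift(x, weights):
    qml.RY(x, wires=0)
    qml.RX(weights[0], wires=0)
    return qml.expval(qml.PauliZ(0))

x = torch.tensor(0.5, requires_grad=True)
weights = torch.randn(1, requires_grad=True)
print(circuit_backprop(x, weights), circuit_shift(x, weights))
```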
@yoichiiz2512 The problem does indeed seem to be second derivatives. Adjoint differentiation only works for first-order derivatives; for parameter-shift, second-order derivatives have to be requested manually with `max_diff=2`.
Switching `loss` to `normal_lost` instead of `derived_loss`, I once again see a decreasing loss. That's because we are then only using first derivatives, instead of derivatives of derivatives.
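To make the distinction concrete, here is a hedged sketch of the two loss styles. `normal_lost` and `derived_loss` are the names from the reported code; their bodies below are assumptions for illustration only:

```python
import torch

def normal_lost(circuit, x, weights, target):
    # Loss on the circuit output itself: training needs only first derivatives
    # of the QNode, which adjoint differentiation can supply.
    y = circuit(x, weights)
    return (y - target) ** 2

def derived_loss(circuit, x, weights, target):
    # Loss on dy/dx (x must have requires_grad=True): backpropagating through
    # this term differentiates a derivative, so the QNode must support
    # second-order derivatives (backprop, or parameter-shift with max_diff=2).
    y = circuit(x, weights)
    (dy_dx,) = torch.autograd.grad(y, x, create_graph=True)
    return (dy_dx - target) ** 2
```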
Thank you very much.
As you replied, specifying `backprop`, or `parameter-shift` together with `max_diff=2`, works as desired.
It was not a bug, but an error in how I specified the parameters.
Expected behavior
(Output omitted above; it showed an example of a decreasing loss value.)
Actual behavior
(Output omitted above; the values vary from run to run with the random numbers.)
Additional information
I was trying to implement a physics-informed neural network (PINN) and used PyTorch autograd to compute the derivative values. This worked with a classical neural network, but with PennyLane it seemed unable to approximate the differential equation.
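For context, a hedged end-to-end sketch of this kind of setup, under illustrative assumptions (the ansatz, the toy ODE dy/dx + y = 0, and all hyperparameters are placeholders rather than the original code):

```python
import pennylane as qml
import torch

dev = qml.device("default.qubit", wires=1)

# QNode approximating the unknown function y(x).
@qml.qnode(dev, interface="torch", diff_method="parameter-shift", max_diff=2)
def model(x, weights):
    qml.RY(x, wires=0)
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=0)
    return qml.expval(qml.PauliZ(0))

weights = torch.randn(2, requires_grad=True)
opt = torch.optim.Adam([weights], lr=0.1)

for step in range(100):
    opt.zero_grad()
    x = torch.rand((), requires_grad=True)      # random collocation point
    y = model(x, weights)
    (dy_dx,) = torch.autograd.grad(y, x, create_graph=True)
    loss = (dy_dx + y) ** 2                     # residual of dy/dx + y = 0
    loss.backward()                             # needs 2nd-order derivatives
    opt.step()
```

With an adjoint-differentiation QNode, it is the final `loss.backward()` that requires the unsupported second-order derivatives, which is consistent with the explanation above.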
Source code
Tracebacks
System information
Existing GitHub issues