Closed: antalszava closed this 1 day ago
This might be unavoidable if the equations defining the MottonenStatePreparation
have divergences at certain parameter values?
i.e., not sure if it's a bug or working as it should :thinking:
Yes, that could easily be the case! I'll remove the bug
label for now, can be re-added if we determine that this is really a bug
@antalszava could this be due to the derivative of `abs` not being defined at 0? This could be happening either in your cost function or in the Mottonen state prep. If so, then we have two options:
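A minimal NumPy sketch of why this bites (illustrative only, not PennyLane's internals): away from zero the derivative of `abs` is `x/|x|`, i.e. `sign(x)`, but at exactly zero this is `0/0`, which floating point evaluates to `nan` — the same `0/0` a reverse-mode backward pass runs into.

```python
import numpy as np

# Analytic derivative of |x|: sign(x) = x / |x| away from zero.
# At x = 0 this is 0/0, which evaluates to nan in floating point,
# mirroring what happens in the backward pass.
def d_abs(x):
    return x / np.abs(x)

print(d_abs(np.float64(1.0)))  # 1.0
print(d_abs(np.float64(0.0)))  # nan (numpy emits an invalid-value warning)
```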
@DSGuala is working on a PR (https://github.com/PennyLaneAI/pennylane/pull/1144) that tweaks `MottonenStatePreparation` when the input state is real-valued. The tweak removes an unnecessary entangling block at the end, but we may run across this issue there and could potentially look at solving it in that PR.
This issue is due to non-differentiable points of the map from state vectors to rotation angles in `MottonenStatePreparation`, as suggested by @co9olguy. For example, the code above evaluates the derivative of `MottonenStatePreparation` at `state=[0, 0, 1/np.sqrt(2), 1/np.sqrt(2)]`, which is not defined in the current form.
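To make that concrete, here is a hedged sketch of the kind of angle computation such a decomposition performs (`block_angle` is an illustrative name, not PennyLane's actual code): the rotation angle for a two-amplitude block is built from a ratio of magnitudes, and the `[0, 0]` block of the state above makes that ratio `0/0`.

```python
import numpy as np

# Illustrative Mottonen-style rotation angle for a two-amplitude
# block (a0, a1): the ratio |a1| / sqrt(|a0|^2 + |a1|^2) is 0/0
# when both amplitudes vanish, as for the leading [0, 0] block of
# [0, 0, 1/sqrt(2), 1/sqrt(2)].
def block_angle(a0, a1):
    norm = np.sqrt(np.abs(a0) ** 2 + np.abs(a1) ** 2)
    return 2 * np.arcsin(np.abs(a1) / norm)

print(block_angle(1 / np.sqrt(2), 1 / np.sqrt(2)))  # pi/2
print(block_angle(0.0, 0.0))  # nan: the angle is undefined for a zero block
```

Since the angle itself is undefined there, its derivative is as well, and the `nan` then propagates through the rest of the tape.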
We are aware of this in PennyLane and warn users about non-differentiable points in the documentation. Therefore I'm wondering whether it would make sense to close this issue?
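If we close this, it may still be worth pointing users at a workaround in the docs. One hypothetical option (`regularize` is an illustrative helper, not a PennyLane function) is to nudge exactly-zero amplitudes away from the non-differentiable point before optimizing:

```python
import numpy as np

# Hypothetical workaround: replace exact zeros with a small epsilon and
# renormalize, moving the state off the non-differentiable point while
# changing it only negligibly.
def regularize(state, eps=1e-8):
    state = np.where(state == 0, eps, state)
    return state / np.linalg.norm(state)

state = regularize(np.array([0.0, 0.0, 1 / np.sqrt(2), 1 / np.sqrt(2)]))
print(np.linalg.norm(state))  # 1.0 up to float precision
```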
Issue description
When optimizing over certain state vectors inputted to `MottonenStatePreparation`, we run into an error with `nan` values. It seems as if some elements of the gradient were nearing 0 (e.g., at one point the error seems to have arisen when the value of two elements of the gradient went from `1e-16` to `1e-17`). This could be case dependent, hence I'm not completely sure whether it is due to unexpected behaviour of the logic in `MottonenStatePreparation`.

Expected behavior: optimization works alright.

Actual behavior: `ValueError: State vector has to be of length 1.0, got Autograd ArrayBox with value nan` is raised.

Reproduces how often: only for certain state vectors.
System information:
Source code and tracebacks
Additional information
An alternative snippet using `torch` and `autograd.set_detect_anomaly(True)` reveals `RuntimeError: Function 'AbsBackward' returned nan values in its 0th output.`, probably hinting that `abs` is already getting a `nan` value in the backward pass.