Open albi3ro opened 9 months ago
Please let me know if I can help with this in any way.
@AnuravModak Thanks for the offer.
There may be a simpler patch for this problem, but in my mind it is a symptom of a deeper structural issue that would require a deeper structural fix.

Basically, gradients take `QuantumScript.trainable_params` as the source of truth for what is trainable or not, and that property defaults to "everything is trainable" if it hasn't been explicitly set.
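That default can be illustrated with a minimal toy class. This is a sketch of the semantics described above, not PennyLane's implementation, and `ToyScript` is a made-up name:

```python
# Toy sketch (assumed behavior, not PennyLane source) of the default:
# if trainable_params was never set, every parameter index is trainable.
class ToyScript:
    def __init__(self, params, trainable_params=None):
        self.params = params
        self._trainable = trainable_params

    @property
    def trainable_params(self):
        if self._trainable is None:
            # Never set explicitly -> assume everything is trainable.
            return list(range(len(self.params)))
        return self._trainable


print(ToyScript([0.1, 0.2, 0.3]).trainable_params)                   # [0, 1, 2]
print(ToyScript([0.1, 0.2], trainable_params=[0]).trainable_params)  # [0]
```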
The problem is that it's extremely hard to keep track of how `trainable_params` might get updated throughout transforms, so we just don't track how it gets transformed. `compile` might eliminate parameters; `decompose` and expansions might break one parameter into multiple. Tracking the indices through that process would add way too much complexity, so currently we just don't. The `trainable_params` are set in the QNode and right before we hit the ML boundary.
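To see why index tracking is painful, here is a hypothetical helper (not PennyLane internals) showing that when one gate parameter expands into several, every later trainable index has to shift:

```python
# Toy illustration of index bookkeeping through a decomposition.
# `expansion` maps each original parameter index to how many
# parameters it becomes after decomposing.
def remap_after_decomposition(trainable, expansion):
    new_trainable, offset = [], 0
    for idx in sorted(expansion):
        n = expansion[idx]
        if idx in trainable:
            # This parameter survives as n parameters, all trainable.
            new_trainable.extend(range(idx + offset, idx + offset + n))
        offset += n - 1  # later indices shift by the growth so far
    return new_trainable


# Params 0 and 2 were trainable; param 1 decomposes into 3 params.
print(remap_after_decomposition({0, 2}, {0: 1, 1: 3, 2: 1}))  # [0, 4]
```

Every transform in the pipeline would need bookkeeping like this, composed correctly with every other transform, which is the complexity the comment above is referring to.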
We could change the default for trainable params to "check what's trainable", but that would be a breaking change that would require buy-in from a variety of different people.
This works with the new version of PL :)
Huh... Any idea what fixed it?
@isaacdevlugt I think this needs to be reopened, as the other example:
```python
import numpy as np
import pennylane as qml

@qml.gradients.param_shift
@qml.compile
@qml.qnode(qml.device("default.qubit"))
def circuit(x):
    qml.QubitUnitary(np.eye(2), 0)
    qml.RX(x, wires=0)
    return qml.expval(qml.PauliZ(0))

circuit(qml.numpy.array(0.5))
```
Still fails with the same error message.
My bad! I was using `pennylane.numpy`
🤦
Expected behavior
I'd expect to get the same answer as when the `AmplitudeEmbedding` with null behavior is removed.

In an ideal world, a gradient transform would be able to detect that the trainability information is out of date and recalculate it.
Actual behavior
Traceback below.
The device preprocessing has to decompose `AmplitudeEmbedding`, so it eliminates the trainability information set at the QNode level. The default behaviour for `QuantumScript.trainable_params` then assumes the resulting `StatePrep` should be trainable, even though the original `AmplitudeEmbedding` wasn't. Since parameter shift cannot differentiate `StatePrep`, this causes an error.

Additional information
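The chain of events can be sketched with a toy model. These are hypothetical classes, not PennyLane's actual ones, built only to mirror the semantics described in this issue:

```python
# Toy model of the failure mode: preprocessing rebuilds the tape,
# the explicitly-set trainable indices are lost, and the
# "everything is trainable" default kicks back in.
class ToyTape:
    def __init__(self, params):
        self.params = params
        self._trainable = None

    @property
    def trainable_params(self):
        if self._trainable is None:
            # Default: everything trainable when nothing was set.
            return list(range(len(self.params)))
        return self._trainable

    @trainable_params.setter
    def trainable_params(self, value):
        self._trainable = list(value)


def toy_decompose(tape):
    # Builds a brand-new tape (think AmplitudeEmbedding -> StatePrep)
    # without carrying over the explicitly-set trainable indices.
    return ToyTape(params=list(tape.params))


original = ToyTape(params=["amplitude_data", 0.5])  # embedding data, RX angle
original.trainable_params = [1]                     # only the RX angle

decomposed = toy_decompose(original)
print(decomposed.trainable_params)  # [0, 1]: the state-prep data is trainable again
```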
A similar example is:
Source code
Tracebacks
System information
Existing GitHub issues