josh146 opened this issue 2 months ago
Note that if I set `diff_method="parameter-shift"`, I get a compilation error:
```
>>> dcost(x)
dcost:13:3: error: 'func.func' op cloned during the gradient pass is not free of quantum ops:
"func.func"() <{function_type = (tensor<6xi64>, tensor<2xf64>, index) -> tensor<?xf64>, sym_name = "cost.qgrad", sym_visibility = "private"}> ({
^bb0(%arg0: tensor<6xi64>, %arg1: tensor<2xf64>, %arg2: index):
  %0 = "arith.constant"() <{value = sparse<15, -1.5707963267948966> : tensor<16xf64>}> : () -> tensor<16xf64>
...
```
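For context, a minimal sketch of the kind of program that can trigger this; the exact reproducer isn't shown above, so the device, wire count, and gate choices below are illustrative, picked to match the `tensor<6xi64>` and `tensor<2xf64>` arguments in the error:

```python
import pennylane as qml
import jax.numpy as jnp
from catalyst import qjit, grad

dev = qml.device("lightning.qubit", wires=6)

@qml.qnode(dev, diff_method="parameter-shift")
def cost(x):
    # State preparation has no parameter-shift rule
    qml.BasisState(jnp.array([1, 1, 0, 0, 1, 1]), wires=range(6))
    qml.RX(x[0], wires=0)
    qml.RY(x[1], wires=1)
    return qml.expval(qml.PauliZ(0))

@qjit
def dcost(x):
    return grad(cost)(x)

dcost(jnp.array([0.1, 0.2]))  # fails to compile with the error above
```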
@erick-xanadu I think this is related to our interface discussion on your PR: since the new gates aren't implementing one of the quantum gate interfaces, the gradient passes would need to remove them explicitly. Should be an easy fix.
The main problem mentioned here might be difficult to solve quickly; we need a way to track which gate parameters came from differentiated function arguments.
A workaround would be completing the decomposition of gates not supported for differentiation, which we want anyway. I believe that should be fairly quick to implement. While this is inefficient in some cases (unnecessary decompositions), it does match the previous behaviour for StatePrep.
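In PennyLane terms, that idea would look roughly like the hypothetical helper below (a sketch, not existing Catalyst code): keep decomposing any operation whose `grad_method` rules out an analytic recipe, stopping at gates that are either shiftable or have nothing to differentiate.

```python
import pennylane as qml

def expand_nondifferentiable(tape, depth=5):
    """Hypothetical helper: decompose unsupported ops until only
    parameter-shift-compatible or constant gates remain."""
    def stop_at(op):
        # "A": an analytic shift rule exists; None: nothing to differentiate.
        return op.grad_method in {"A", None} or not op.has_decomposition
    return tape.expand(depth=depth, stop_at=stop_at)
```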
> @erick-xanadu I think this is related to our interface discussion: since the new gates aren't implementing the interface, the gradient passes would need to remove them explicitly.
I agree. However, I don't see how we got here, because verification should have caught this error, similar to the one above.
I don't know if verification for parameter-shift has been implemented yet.
> I don't know if verification for parameter-shift has been implemented yet.
A naive verification for parameter shift is implemented, just confirming that `op.grad_method in {"A", None}` for all the operations. The more thorough verification that was discussed didn't make it in yet.
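Roughly, the naive check amounts to something like this (a sketch, not the actual Catalyst source):

```python
import pennylane as qml

def verify_parameter_shift(tape):
    for op in tape.operations:
        # Accept analytic ("A") recipes and ops with nothing to differentiate
        # (None); reject everything else, even when its parameters are not
        # actually being differentiated (the overzealous behaviour reported below).
        if op.grad_method not in {"A", None}:
            raise qml.QuantumFunctionError(
                f"{op.name} does not support the parameter-shift rule"
            )
```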
> @erick-xanadu I think this is related to our interface discussion on your PR: since the new gates aren't implementing one of the quantum gate interfaces, the gradient passes would need to remove them explicitly. Should be an easy fix.
If the parameter-shift bug is an easy fix, should I split this into its own issue separate from the verification discussion?
@dime10 @erick-xanadu https://github.com/PennyLaneAI/catalyst/issues/1072
(we can now treat this as two separate bugs)
The new gradient verification is slightly overzealous: it won't allow circuits to be differentiated even when the operation that is unsupported for differentiation is not itself being differentiated.
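A minimal sketch of the pattern being described (the exact circuit isn't shown here; the VQE-style wire count and gates are illustrative):

```python
import pennylane as qml
import jax.numpy as jnp
from catalyst import qjit, grad

dev = qml.device("lightning.qubit", wires=4)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(weights):
    # BasisState carries no trainable parameters here, yet verification
    # rejects the whole circuit.
    qml.BasisState(jnp.array([1, 1, 0, 0]), wires=range(4))
    qml.DoubleExcitation(weights[0], wires=[0, 1, 2, 3])
    return qml.expval(qml.PauliZ(0))

@qjit
def dcircuit(weights):
    return grad(circuit)(weights)
```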
In this case, it is failing to allow this circuit to pass verification even though `BasisState` is not being differentiated. Previously, this example would work fine, since `BasisState` was always decomposed down to non-parametrizable gates (`qml.X`).

Note that this is currently affecting our VQE + Catalyst demos, and they are no longer executable. A temporary workaround I can do is:
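(The original snippet isn't shown; a plausible sketch of such a workaround, assuming the basis state is applied with explicit `qml.PauliX` gates so that no unsupported operation remains in the circuit:)

```python
import pennylane as qml

dev = qml.device("lightning.qubit", wires=4)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(weights):
    # Manually expand the basis state into non-parametrized X gates,
    # mimicking the decomposition that previously happened automatically.
    for wire in [0, 1]:
        qml.PauliX(wires=wire)
    qml.DoubleExcitation(weights[0], wires=[0, 1, 2, 3])
    return qml.expval(qml.PauliZ(0))
```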
but this is not ideal.