Closed Qottmann closed 1 year ago
I think ParametrizedEvolution
is so far the only operation where this is relevant.
One way to see this is with the following example (imports added for completeness):

```python
import jax.numpy as jnp
import pennylane as qml

def wrapper(T):
    def wrapped(p, t):
        p0, p1 = p
        f0 = qml.pulse.pwc(T)(p0, t)
        f1 = qml.pulse.pwc(T)(p1, t)
        return f0 * f1
    return wrapped

p0 = jnp.ones(10, dtype=float)
p1 = jnp.ones(5, dtype=float)
params = (p0, p1)

print("calling the function works: ", wrapper(T=10.)(params, 0.5))

H_pulse = wrapper(T=10.) * qml.PauliX(0)
print(qml.pulse.ParametrizedEvolution(H_pulse, [params], 10.))
```
as it raises the warning
/anaconda3/envs/pennylane/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3156: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return asarray(a).ndim
Note that the underlying issue was only recently introduced in https://github.com/PennyLaneAI/pennylane/pull/3659
I think that this is surfacing an abstraction inconsistency/gap. We should make a solid decision on what structure operation parameters may have.
This could be solved in the medium term by overriding `ParametrizedEvolution._check_batching`.

We assume that `data` is a list of trainable arrays in the default implementation in `Operator`, but that behaviour can be overridden and customized. We don't have to use default implementations if the behaviour conforms to the stated interface.
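A minimal sketch of what such an override could look like, using a toy stand-in for the `Operator` base class (the class names and the simplified `_check_batching` below are assumptions for illustration, not PennyLane's actual implementation):

```python
import numpy as np

class Operator:
    """Toy stand-in for the Operator base class (simplified assumption,
    not the real PennyLane code)."""
    def __init__(self, *params):
        self.data = list(params)
        self._check_batching(self.data)

    def _check_batching(self, params):
        # The default behaviour inspects every parameter's dimension,
        # which implicitly casts nested tuples/lists to ndarrays and
        # chokes on ragged pytree structures.
        ndims = tuple(np.ndim(p) for p in params)
        self.batch_size = None

class PytreeEvolution(Operator):
    """Override _check_batching to tolerate pytree-structured parameters."""
    def _check_batching(self, params):
        # Skip dimension inspection entirely: these parameters are never
        # batched, so declare that directly instead of probing shapes.
        self.batch_size = None

# A ragged pytree parameter (two arrays of different lengths) that would
# trip up the default dimension inspection:
params = (np.ones(10), np.ones(5))
op = PytreeEvolution(params)
print(op.batch_size)  # -> None
```

The subclass conforms to the interface (it sets `batch_size`) without ever calling `np.ndim` on the nested structure.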
Nice suggestion!

I think when we want gradient transform integration, we might need to demand well-shaped arrays in `data` in order to allow for processing with, say, `tensordot` and the like. But in order to allow the operation itself (without gradients) to work with more flexible parameters, I think this is a great solution :)
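To illustrate why gradient transforms favour well-shaped arrays, here is a hedged sketch in plain NumPy (not PennyLane's actual gradient code) of the kind of contraction that only works when every parameter has a fixed shape:

```python
import numpy as np

# Combining shifted evaluations into a gradient via tensordot requires
# that the per-shift results stack into one well-shaped array.
# (Illustrative values; the coefficients mimic a parameter-shift rule.)
shifted_results = np.stack([np.ones(4), 2 * np.ones(4)])  # shape (2, 4)
coeffs = np.array([0.5, -0.5])

# Contract the shift coefficients against the stacked results.
grad = np.tensordot(coeffs, shifted_results, axes=1)      # shape (4,)
print(grad)  # -> [-0.5 -0.5 -0.5 -0.5]
```

With ragged pytree parameters, `np.stack` (and hence `tensordot`) has nothing well-shaped to operate on, which is why the gradient pipeline is stricter than the operation itself.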
Awesome, thanks @albi3ro!
Closing this, as we have found sufficient workarounds, while also noting that deeper nested structures would also create problems with gradient transforms. So if we want to allow for deeper nesting, it would be a bigger undertaking in PennyLane.
In the `__init__` of `Operator`, the `_check_batching` method runs `qml.math.ndim(p) for p in params` (see pennylane/operation.py#L1025), thereby implicitly casting all elements of `params` to ndarrays. This is a problem for operators with parameters that are general pytrees / nested lists / tuples.

Is there a way to change this behavior without being too invasive? (This goes all the way down to the base class.)
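A self-contained reproduction of the implicit cast, using `np.ndim` in place of `qml.math.ndim` (an assumption to avoid depending on PennyLane; `qml.math.ndim` dispatches to the same kind of array coercion):

```python
import numpy as np

# Well-shaped nested parameters: the tuple is silently cast to a (2, 3)
# ndarray, so ndim reports 2 rather than treating it as a container.
flat = (np.ones(3), np.ones(3))
print(np.ndim(flat))  # -> 2

# A ragged pytree (leaves of different lengths) cannot be cast cleanly:
# older NumPy emits VisibleDeprecationWarning and builds an object array,
# while NumPy >= 1.24 raises ValueError outright.
ragged = (np.ones(10), np.ones(5))
try:
    print(np.ndim(ragged))
except ValueError as err:
    print("ragged pytree cannot be cast:", err)
```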
This is relevant for https://github.com/PennyLaneAI/pennylane/pull/3859
edit: more generally, it seems that everything below the first level of `parameters` is assumed to be `TensorLike`, see pennylane/operation.py#L1007