stroblme opened 1 year ago
The current workaround is to iterate over the batch dimension within the `forward` method, execute the circuit once per sample, and then combine the outputs. This works, but it is an obvious bottleneck, as batches could be executed in parallel.
Furthermore, this feels very hacky and, as far as I know, is actually an issue with the PennyLane framework.
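To make the workaround concrete, here is a minimal numpy-only sketch of the per-sample loop described above. The names `circuit` and `forward` are hypothetical stand-ins: `circuit` is a placeholder for a QNode that maps one input vector to a probability distribution over the `2**n` basis states, not the project's actual circuit.

```python
import numpy as np

N_QUBITS = 2
N = 2 ** N_QUBITS  # output dimension per sample

def circuit(x):
    """Stand-in for a QNode: returns a probability distribution
    over the 2**n computational basis states (softmax placeholder)."""
    logits = np.resize(x, N)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def forward(batch):
    """The workaround: execute the circuit once per sample and stack
    the outputs, instead of passing the whole [B, N] batch at once."""
    return np.stack([circuit(sample) for sample in batch])

batch = np.random.rand(8, 3)  # B = 8 samples
out = forward(batch)
print(out.shape)  # (8, 4): one length-2**n distribution per sample
```

The `for` loop is exactly the serialization bottleneck the issue refers to: each iteration could in principle run in parallel.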
**Describe the bug**

The current implementation runs fully analytically, which is motivated by faster simulation times and by the fact that the effect of shots is not the focus of interest within this project (tbd.). However, when shots are enabled, PennyLane tries to artificially sample from the probabilities returned by the quantum circuit. In a hybrid setup (with batches) this output usually has the dimension

`[B, N]`

where `N = 2^n` and `B` is the batch size. Without the `batch_input` decorator, however, the PennyLane sampling device does not seem to expect this additional batch dimension and therefore complains about `B` not being equal to `2^n`. Adding the decorator, in turn, requires setting `argnum` to the parameter index that contains the non-trainable weights, i.e. the input. In a hybrid setup, though, the input is just the output of the preceding classical module and therefore carries a `requires_grad` flag, i.e. is trainable, which is why adding this decorator fails.
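The shape complaint can be illustrated with a numpy-only sketch (this uses `np.random.choice` as a generic stand-in for the shot sampler, not PennyLane's actual sampling code): a sampler that expects a single length-`2**n` probability vector rejects the batched `[B, N]` array a hybrid model produces.

```python
import numpy as np

B, n = 8, 2
N = 2 ** n

# Per-sample case: one probability vector of length N = 2**n
# can be sampled from directly.
p_single = np.full(N, 1.0 / N)
shots = np.random.choice(N, size=100, p=p_single)  # works

# Batched case: the hybrid model emits a [B, N] array, but a sampler
# expecting a single 1-D distribution rejects the extra batch axis.
p_batch = np.full((B, N), 1.0 / N)
try:
    np.random.choice(N, size=100, p=p_batch)
except ValueError as err:
    print("sampler rejects batched probabilities:", err)
```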