qiskit-community / qiskit-machine-learning

Quantum Machine Learning
https://qiskit-community.github.io/qiskit-machine-learning/
Apache License 2.0

Dense output of CircuitQNN yields invalid gradient #43

Closed rdisipio closed 3 years ago

rdisipio commented 3 years ago

Information

What is the current behavior?

I would like to use a CircuitQNN as an intermediate layer in a larger neural network, but I would like to treat its output as a dense N-dimensional vector instead of the single binary output node presented in the examples. I get the following error:

Traceback (most recent call last):
  File "/Users/disipio/development/qrnn-qiskit-ml/./scripts/train_pos.py", line 125, in <module>
    loss.backward()
  File "/Users/disipio/development/qrnn-qiskit-ml/venv/lib/python3.9/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/Users/disipio/development/qrnn-qiskit-ml/venv/lib/python3.9/site-packages/torch/autograd/__init__.py", line 145, in backward
    Variable._execution_engine.run_backward(
RuntimeError: Function _TorchNNFunctionBackward returned an invalid gradient at index 0 - got [16, 16, 4] but expected shape compatible with [5, 4]
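For context, the shape check that fails here is PyTorch's rule that a backward pass must return a gradient with the same shape as its input: for a batch of 5 samples with 4 input features and 16 output probabilities, the connector's backward has to contract the 3-D input Jacobian (batch, outputs, inputs) with the upstream gradient (batch, outputs) and hand back a (batch, inputs) array, not the Jacobian itself. A minimal numpy sketch of that contraction, with placeholder arrays standing in for the real QNN quantities (the values are made up; only the shapes matter):

```python
import numpy as np

batch, n_inputs, n_outputs = 5, 4, 16  # shapes from the traceback

# Placeholder input Jacobian d(probs)/d(inputs) and upstream gradient.
jac = np.random.default_rng(0).normal(size=(batch, n_outputs, n_inputs))
grad_out = np.ones((batch, n_outputs))

# Chain rule: contract over the output dimension.
grad_in = np.einsum('bo,boi->bi', grad_out, jac)

print(grad_in.shape)  # (5, 4) -- the shape autograd expects for the input
```

Returning a 3-D array (the raw Jacobian) instead of this 2-D contraction is the kind of mismatch that produces the "returned an invalid gradient" error above.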

Steps to reproduce the problem

import torch.nn as nn
from qiskit import Aer, QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit.utils import QuantumInstance
from qiskit_machine_learning.neural_networks import CircuitQNN
from qiskit_machine_learning.connectors import TorchConnector

n_qubits = 4

feature_map = ZZFeatureMap(n_qubits)
ansatz = RealAmplitudes(n_qubits, reps=1, entanglement='circular', insert_barriers=True)

qc = QuantumCircuit(n_qubits)
qc.append(feature_map, range(n_qubits))
qc.append(ansatz, range(n_qubits))

qi = QuantumInstance(Aer.get_backend("statevector_simulator"))

qnn = CircuitQNN(qc,
                 input_params=feature_map.parameters,
                 weight_params=ansatz.parameters,
                 output_shape=2**n_qubits,
                 quantum_instance=qi)

clayer_in = nn.Linear(input_size, n_qubits)
qlayer = TorchConnector(qnn)
clayer_out = nn.Linear(2**n_qubits, hidden_size)

What is the expected behavior?

Setting n_qubits = 4, I have 16 different output states. If I print out the probabilities I see, e.g., this:

# print((rows, *self.output_shape), key, b, v, shots) 
(5, 16) (0, 0) 0000 0.158663744876569 1.0
(5, 16) (0, 1) 0001 0.101695232492148 1.0
(5, 16) (0, 2) 0010 0.109046751173843 1.0
(5, 16) (0, 3) 0011 0.001107903927808 1.0
(5, 16) (0, 4) 0100 0.065128779080939 1.0
(5, 16) (0, 5) 0101 0.023863282865949 1.0
(5, 16) (0, 6) 0110 0.009466232967798 1.0
(5, 16) (0, 7) 0111 0.007067217702229 1.0
(5, 16) (0, 8) 1000 0.016374389863522 1.0
(5, 16) (0, 9) 1001 0.072207907597972 1.0
(5, 16) (0, 10) 1010 0.124966708373814 1.0
(5, 16) (0, 11) 1011 0.088526459268843 1.0
(5, 16) (0, 12) 1100 0.137411385340253 1.0
(5, 16) (0, 13) 1101 0.013800193450224 1.0
(5, 16) (0, 14) 1110 0.064522821617036 1.0
(5, 16) (0, 15) 1111 0.006150989401053 1.0

probs = 
[[1.58663745e-01 1.01695232e-01 1.09046751e-01 1.10790393e-03
  6.51287791e-02 2.38632829e-02 9.46623297e-03 7.06721770e-03
  1.63743899e-02 7.22079076e-02 1.24966708e-01 8.85264593e-02
  1.37411385e-01 1.38001935e-02 6.45228216e-02 6.15098940e-03]
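As a sanity check, the 16 values printed above do form a valid probability distribution over the 4-qubit basis states, so the dense output itself looks correct and the problem is confined to the backward pass. A quick check in plain Python (the numbers are copied verbatim from the first batch row above):

```python
# Probabilities printed above, copied verbatim (first batch row only).
probs = [
    1.58663745e-01, 1.01695232e-01, 1.09046751e-01, 1.10790393e-03,
    6.51287791e-02, 2.38632829e-02, 9.46623297e-03, 7.06721770e-03,
    1.63743899e-02, 7.22079076e-02, 1.24966708e-01, 8.85264593e-02,
    1.37411385e-01, 1.38001935e-02, 6.45228216e-02, 6.15098940e-03,
]

# A dense CircuitQNN output is a distribution over all 2**n_qubits
# basis states, so the entries should sum to 1.
print(round(sum(probs), 6))  # 1.0

# Index k of the dense vector corresponds to the basis state whose
# bitstring is k in binary, e.g. index 3 -> '0011'.
print(format(3, '04b'))
```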

Suggested solutions

Not sure if the code is correct, but perhaps adding an example to the tutorials would clarify the matter.

ElePT commented 3 years ago

Hello! I have tried to reproduce this issue, but I haven't had any problems using CircuitQNN with dense output; it works fine for me. Could you please provide the full code you used to get this error? Thanks!

adekusar-drl commented 3 years ago

@rdisipio Could you please try to reproduce the issue with the recent changes? If it is still present, please provide the additional info @ElePT requested. Thanks.

adekusar-drl commented 3 years ago

@rdisipio Closing this issue as non-reproducible. You can re-open the issue if you still have problems. Thanks.