Closed · jonasbrami closed this issue 3 years ago
Hi @jonasbrami, thank you for catching this! I could reproduce the issue based on your suggestion, and it seems to come from computing the gradient of the

```python
difference = tf.reduce_sum(tf.abs(ket - target_state))
```

line in the cost function. The gradient of `tf.abs` is undefined where its argument is zero, which happens here when amplitudes of `ket` exactly match those of `target_state`. Adding a relatively small constant, i.e. `tf.abs(ket - target_state + 1e-10)`, makes the resulting `nan` values disappear.

We're considering a proper fix for this.
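A minimal sketch of the mechanism described above, assuming the state tensors are complex (for a complex `z`, `tf.abs(z)` is `sqrt(Re(z)**2 + Im(z)**2)`, whose gradient is ill-defined at `z = 0`). The tensors here are illustrative one-element stand-ins for the actual `ket` and `target_state` in the example:

```python
import tensorflow as tf

# Illustrative stand-ins: an amplitude of ket that exactly matches the target,
# so the argument of tf.abs is exactly zero.
ket = tf.Variable([0.0 + 0.0j], dtype=tf.complex64)
target_state = tf.constant([0.0 + 0.0j], dtype=tf.complex64)

with tf.GradientTape(persistent=True) as tape:
    # Original cost: tf.abs evaluated exactly at zero.
    cost_raw = tf.reduce_sum(tf.abs(ket - target_state))
    # Workaround from this thread: shift by a small constant so tf.abs
    # is never differentiated exactly at zero.
    cost_shifted = tf.reduce_sum(tf.abs(ket - target_state + 1e-10))

grad_raw = tape.gradient(cost_raw, ket)
grad_shifted = tape.gradient(cost_shifted, ket)

# With TF 2.2 (as reported in this issue) grad_raw comes out nan;
# grad_shifted is finite either way.
print(grad_raw)
print(grad_shifted)
```

The `1e-10` shift introduces a tiny bias into the cost, but it is far below the precision of the reported cost values, which is why it is an acceptable stopgap here.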
Hi @jonasbrami, we've updated the example; it should now work for multiple modes. Let us know if anything else comes up :slightly_smiling_face:
Issue description
The `quantum_neural_network.py` example outputs nan when using more than one mode (it works as expected for a single mode).

Expected behavior: finite (non-nan) cost, fidelity, and trace values.

Actual behavior:
```
Beginning optimization
Rep: 0 Cost: 7.0005 Fidelity: 0.0000 Trace: 1.0000
Rep: 1 Cost: nan Fidelity: nan Trace: nan
Rep: 2 Cost: nan Fidelity: nan Trace: nan
Rep: 3 Cost: nan Fidelity: nan Trace: nan
```
Reproduces how often: 100%
System information:
```
Python version: 3.8.5
Platform info: Linux-5.8.0-53-generic-x86_64-with-glibc2.10
Installation path: /home/jonas/anaconda3/envs/strawberry/lib/python3.8/site-packages/strawberryfields
Strawberry Fields version: 0.17.0
Numpy version: 1.19.2
Scipy version: 1.4.1
SymPy version: 1.7.1
NetworkX version: 2.5
The Walrus version: 0.14.0
Blackbird version: 0.3.1-dev
TensorFlow version: 2.2.0
```