@Zoufalc, @ebermot: if the plan is to fully deprecate this class in favor of a PyTorch implementation, maybe this issue should just be kept in mind so that it does not reappear in the new implementation, rather than spending too much time on it (though it is still interesting).
Hi @ElePT, I'm currently working on it and I'd like to be assigned to this issue.
Closed in #405
Environment
What is happening?
When running the qGAN tutorial code with `backend=qasm_simulator` and fixed seeds (simulator and transpiler), the generator parameters do not change between iterations, nor does the relative entropy (as seen in the plot), and the final distribution does not match the results from `backend=statevector_simulator`. This goes unnoticed by the unit tests because of the bug reported in #392.

How can we reproduce the issue?
To reproduce the issue, run the notebook 04_qgans_for_loading_random_distributions.ipynb with:
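Roughly, a quantum instance along these lines (a minimal sketch assuming the tutorial's `QuantumInstance` setup with the Aer provider; the seed and shot values here are arbitrary placeholders, not the exact tutorial cell):

```python
# Sketch only: backend, shots and seed are illustrative placeholders.
from qiskit import Aer
from qiskit.utils import QuantumInstance, algorithm_globals

seed = 71  # hypothetical seed value
algorithm_globals.random_seed = seed

quantum_instance = QuantumInstance(
    backend=Aer.get_backend("qasm_simulator"),
    shots=1024,                # shot-based sampling backend
    seed_simulator=seed,
    seed_transpiler=seed,
)
```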
instead of:
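That is, instead of the statevector configuration, which looks roughly like this (same imports and seed as in the sketch above):

```python
# The tutorial's statevector configuration, approximately.
quantum_instance = QuantumInstance(
    backend=Aer.get_backend("statevector_simulator"),
    seed_simulator=seed,
    seed_transpiler=seed,
)
```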
And check the plots for the relative entropy and final distribution to see that they don't match the statevector plots.
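After the notebook's training cell has run, a quick way to inspect the training curve is, for example (assuming the per-epoch relative entropies are stored in `qgan.rel_entr`, as used in the tutorial's plotting cell):

```python
# Plot the relative entropy per epoch; on qasm_simulator with fixed seeds the
# curve stays flat, unlike the statevector run.
import matplotlib.pyplot as plt

plt.plot(qgan.rel_entr)
plt.xlabel("epoch")
plt.ylabel("relative entropy")
plt.show()
```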
You can also note that if you do not set the seeds of the quantum instance, i.e.:
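For example (again a sketch rather than the exact cell):

```python
# No seed_simulator / seed_transpiler arguments, so every run draws fresh randomness.
quantum_instance = QuantumInstance(
    backend=Aer.get_backend("qasm_simulator"),
    shots=1024,
)
```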
The relative entropy is not constant anymore, but the learned distribution is still not correct.
What should happen?
The results from training with different backends should match to a certain extent, and the relative entropy should not be constant.
Any suggestions?
This issue has been discussed with @Zoufalc and @ebermot, and the solution is not clear. So far, we have found that increasing the batch size (i.e. setting `N = 10000`, `batch_size = 5800`) and tuning the gradient optimizer with some "crazy" hyperparameter values (a rough, illustrative sketch is given at the end of this section) helps the relative entropy evolve in some way, but the distribution is still not learned properly. Other optimizers, such as `COBYLA` or `SPSA`, do not improve this result.

Another hint from @ebermot points out that the gradient rule (specifically, using parameter shift instead of finite differences) could be an important factor to take into account. However, this couldn't be tested yet because of another reported bug (#394) when trying to set a custom gradient in the `QGAN` generator (and if it is a gradient issue, why does it not train properly when using `COBYLA`???).
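For reference, a rough sketch of the kind of setup described above. The data, ansatz and hyperparameter values are placeholders (the values actually tried are not recorded in this issue), and the `generator_optimizer`/`generator_gradient` keyword arguments of `QGAN.set_generator` are assumed from the qiskit-machine-learning API at the time; setting the gradient is the step that currently runs into #394.

```python
# Illustrative sketch only: data, ansatz and hyperparameters are placeholders,
# not the exact values used in the experiments described above.
import numpy as np
from qiskit import Aer
from qiskit.circuit.library import TwoLocal
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit.algorithms.optimizers import ADAM
from qiskit.opflow import Gradient
from qiskit_machine_learning.algorithms import QGAN

seed = 71
algorithm_globals.random_seed = seed

# Training data and generator ansatz, roughly as in the tutorial.
N = 10000                                       # larger sample size, as tried above
real_data = np.random.lognormal(mean=1, sigma=1, size=N)
bounds = np.array([0.0, 3.0])
num_qubits = [2]
g_circuit = TwoLocal(sum(num_qubits), "ry", "cz", reps=1)

qgan = QGAN(
    data=real_data,
    bounds=bounds,
    num_qubits=num_qubits,
    batch_size=5800,                            # large batch size, as tried above
    num_epochs=300,                             # placeholder
)

# Aggressively tuned ADAM for the quantum generator (placeholder values), plus a
# parameter-shift gradient instead of finite differences. Passing the gradient is
# what currently triggers the bug reported in #394.
qgan.set_generator(
    generator_circuit=g_circuit,
    generator_optimizer=ADAM(maxiter=1, lr=1e-1, beta_1=0.7, beta_2=0.99),
    generator_gradient=Gradient("param_shift"),
)

quantum_instance = QuantumInstance(
    backend=Aer.get_backend("qasm_simulator"),
    shots=1024,
    seed_simulator=seed,
    seed_transpiler=seed,
)
result = qgan.run(quantum_instance)
```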