qiskit-community / qiskit-machine-learning

Quantum Machine Learning
https://qiskit-community.github.io/qiskit-machine-learning/
Apache License 2.0

`QGAN` generator training issue with `qasm_simulator` backend #393

Closed: ElePT closed this issue 2 years ago

ElePT commented 2 years ago

Environment

What is happening?

When running the qGAN tutorial code with backend=qasm_simulator and fixed seeds (simulator/transpiler), the generator parameters do not change between iterations, nor does the relative entropy (as seen in the plot), and the final distribution does not match the results obtained with backend=statevector_simulator. This is missed by the unit tests due to the bug reported in #392.
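A quick way to confirm the stagnation (a sketch, assuming the tutorial's qgan object after qgan.run(quantum_instance); qgan.rel_entr is the per-epoch relative entropy that the tutorial already plots):

import numpy as np

# On the buggy qasm_simulator run, every recorded relative-entropy value
# is numerically identical, i.e. the generator never moves.
rel_entr = np.asarray(qgan.rel_entr)
print("relative entropy per epoch:", rel_entr)
print("constant across epochs?", np.allclose(rel_entr, rel_entr[0]))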

How can we reproduce the issue?

To reproduce the issue, run the notebook 04_qgans_for_loading_random_distributions.ipynb with:

# Set quantum instance to run the quantum generator
quantum_instance = QuantumInstance(
    backend=BasicAer.get_backend("qasm_simulator"),
    seed_transpiler=seed,
    seed_simulator=seed,
)

instead of:

# Set quantum instance to run the quantum generator
quantum_instance = QuantumInstance(
    backend=BasicAer.get_backend("statevector_simulator"),
    seed_transpiler=seed,
    seed_simulator=seed,
)

Then check the plots of the relative entropy and the final distribution: they do not match the statevector plots.
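For a side-by-side check, something like the following sketch can be used (qgan_sv and qgan_qasm are assumed to be two finished tutorial runs, one per backend; generator.get_output is the same call the tutorial uses to sample the learned distribution):

# Compare the final learned distributions of the two runs.
samples_sv, prob_sv = qgan_sv.generator.get_output(qgan_sv.quantum_instance, shots=10000)
samples_qasm, prob_qasm = qgan_qasm.generator.get_output(qgan_qasm.quantum_instance, shots=10000)
print("statevector probabilities:", prob_sv)
print("qasm probabilities:       ", prob_qasm)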

You can also note that if you do not set the seeds of the quantum instance, i.e.:

# Set quantum instance to run the quantum generator
quantum_instance = QuantumInstance(
    backend=BasicAer.get_backend("statevector_simulator")
)

then the relative entropy is no longer constant, but the learned distribution is still incorrect.

What should happen?

The results from training with different backends should agree up to sampling noise, and the relative entropy should not stay constant.
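To make "agree up to sampling noise" concrete, one could compare the two learned distributions with a KL divergence (a sketch using SciPy, reusing prob_sv and prob_qasm from the comparison snippet above):

from scipy.stats import entropy

# KL divergence between the two learned distributions; a value close to
# zero would indicate the backends agree up to sampling noise.
print("KL(qasm || statevector):", entropy(prob_qasm, prob_sv))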

Any suggestions?

This issue has been discussed with @Zoufalc and @ebermot; the solution is not yet clear. So far, we have found that increasing the batch size (i.e., setting N=10000, batch_size=5800) and tuning the gradient optimizer with some "crazy" values, such as:

optimizer = ADAM(
    maxiter=1,
    tol=1e-6,
    lr=1e-2,
    beta_1=0.7,
    beta_2=0.99,
    noise_factor=1e-5,
    eps=1,  # not a realistic value
    amsgrad=True,
)

helps the relative entropy evolve to some degree, but the distribution is still not learned properly. Other optimizers such as COBYLA or SPSA do not improve this result.
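For completeness, the tuned optimizer above would be attached to the generator roughly as follows (a sketch assuming the tutorial's var_form and init_params, and that QGAN.set_generator accepts a generator_optimizer keyword):

# Sketch: plug the tuned ADAM instance into the quantum generator.
# var_form and init_params come from the tutorial notebook.
qgan.set_generator(
    generator_circuit=var_form,
    generator_init_params=init_params,
    generator_optimizer=optimizer,
)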

Another hint from @ebermot is that the gradient rule (specifically, using the parameter-shift rule instead of finite differences) could be an important factor to take into account. However, this could not be tested yet because of another reported bug (#394) triggered when setting a custom gradient in the QGAN generator (and if it is a gradient issue, why does training also fail with the gradient-free COBYLA?).
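Once #394 is fixed, the parameter-shift experiment would look roughly like this (a sketch; Gradient comes from qiskit.opflow, and generator_gradient is the keyword whose use currently triggers #394):

from qiskit.opflow import Gradient

# Sketch: request parameter-shift gradients for the generator instead of
# the default finite differences. Currently blocked by #394.
qgan.set_generator(
    generator_circuit=var_form,
    generator_init_params=init_params,
    generator_gradient=Gradient(grad_method="param_shift"),
)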

ElePT commented 2 years ago

@Zoufalc, @ebermot: if the plan is to fully deprecate this class in favor of a PyTorch implementation, maybe this issue should just be kept in mind to make sure it does not reappear in the new implementation, rather than spending too long on it (though it is still interesting).

ebermot commented 2 years ago

Hi @ElePT , I'm currently working on it and I'd like to be assigned to this issue.

adekusar-drl commented 2 years ago

Closed in #405