tensorflow / quantum

An open-source Python framework for hybrid quantum-classical machine learning.
https://www.tensorflow.org/quantum
Apache License 2.0

Parametrized Quantum Circuits for Reinforcement Learning stuck if using qsim backend #766

Open jccalvojackson opened 1 year ago

jccalvojackson commented 1 year ago

I've run this notebook with no problem. However, if I try to use qsim as a backend by explicitly passing backend = qsimcirq.QSimSimulator() to ReUploadingPQC, then I get the following message

2023-04-26 09:20:03.862016: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: cond/branch_executed/_12

and it gets stuck there indefinitely.

Any idea of what might be going on?

To reproduce:

replace

def generate_model_policy(
    qubits: List[cirq.GridQubit],
    n_layers: int,
    n_actions: int,
    beta: float,
    observables: List[cirq.PauliString],
) -> tf.keras.Model:
    """Generates a Keras model for a data re-uploading PQC policy."""

    input_tensor = tf.keras.Input(shape=(len(qubits),), dtype=tf.dtypes.float32, name="input")
    re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables)([input_tensor])
    process = tf.keras.Sequential(
        [Alternating(n_actions), tf.keras.layers.Lambda(lambda x: x * beta), tf.keras.layers.Softmax()],
        name="observables-policy",
    )
    policy = process(re_uploading_pqc)
    model = tf.keras.Model(inputs=[input_tensor], outputs=policy)

    return model

with

def generate_model_policy(
    qubits: List[cirq.GridQubit],
    n_layers: int,
    n_actions: int,
    beta: float,
    observables: List[cirq.PauliString],
    backend="noiseless",
) -> tf.keras.Model:
    """Generates a Keras model for a data re-uploading PQC policy."""

    input_tensor = tf.keras.Input(shape=(len(qubits),), dtype=tf.dtypes.float32, name="input")
    re_uploading_pqc = ReUploadingPQC(qubits, n_layers, observables, backend=backend)([input_tensor])
    process = tf.keras.Sequential(
        [Alternating(n_actions), tf.keras.layers.Lambda(lambda x: x * beta), tf.keras.layers.Softmax()],
        name="observables-policy",
    )
    policy = process(re_uploading_pqc)
    model = tf.keras.Model(inputs=[input_tensor], outputs=policy)

    return model

and then replace

model = generate_model_policy(qubits, n_layers, n_actions, 1.0, observables)

with

backend = qsimcirq.QSimSimulator()
model = generate_model_policy(qubits, n_layers, n_actions, 1.0, observables, backend=backend)

versions:

gym = "==0.22.0"
tensorflow = "2.7.0"
tensorflow-quantum = "0.7.2"
qsimcirq = "0.13.3" # <0.16.0 because of incompatibility with tensorflow-quantum
lockwo commented 1 year ago

My guess would be that there is something going on with the interface here: https://github.com/tensorflow/quantum/blob/v0.7.2/tensorflow_quantum/core/ops/cirq_ops.py#L125; that might be a good place to investigate further. Although I am interested in seeing why you want to use qsim as the backend when TFQ already uses qsim by default.
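One way to narrow it down (just a rough sketch, not something I've run) would be to call a bare expectation layer with and without the explicit backend and see whether the hang already shows up at the op level, outside the RL model:

import cirq
import sympy
import qsimcirq
import tensorflow as tf
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol("theta")
circuit = cirq.Circuit(cirq.rx(theta)(qubit))

circuits = tfq.convert_to_tensor([circuit])
values = tf.constant([[0.5]])
ops = [cirq.Z(qubit)]

# Default path: TFQ's built-in C++ ops (already qsim under the hood).
default_layer = tfq.layers.Expectation()
print(default_layer(circuits, symbol_names=[theta], symbol_values=values, operators=ops))

# Explicit-backend path: routed through the cirq_ops wrapper linked above.
qsim_layer = tfq.layers.Expectation(backend=qsimcirq.QSimSimulator())
print(qsim_layer(circuits, symbol_names=[theta], symbol_values=values, operators=ops))

If the second call hangs on a toy circuit too, the problem is in the backend-wrapped op rather than anything specific to the RL tutorial.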

jccalvojackson commented 1 year ago

My guess would be that there is something going on with the interface here: https://github.com/tensorflow/quantum/blob/v0.7.2/tensorflow_quantum/core/ops/cirq_ops.py#L125; that might be a good place to investigate further.

Thank you, I'll look into it.

Although I am interested in seeing why you want to use qsim as the backend when TFQ already uses qsim by default.

It was a sanity check. I actually want to benchmark QRL using different backends, including cuQuantum. So, for example, I want to run QRL using different QSimSimulator options like the following (see the sketch after this list):

ops = qsimcirq.QSimOptions(gpu_mode=0, use_gpu=True, max_fused_gate_size=nfused)

ops = qsimcirq.QSimOptions(gpu_mode=0, disable_gpu=False, use_sampler=False, max_fused_gate_size=nfused)

ops = qsimcirq.QSimOptions(gpu_mode=1, use_gpu=True, max_fused_gate_size=nfused)

ops = qsimcirq.QSimOptions(gpu_mode=1, disable_gpu=False, use_sampler=False, max_fused_gate_size=nfused)

ops = qsimcirq.QSimOptions(use_gpu=False, cpu_threads=ncpu_threads, max_fused_gate_size=nfused)

ops = qsimcirq.QSimOptions(disable_gpu=True, use_sampler=False, cpu_threads=ncpu_threads, max_fused_gate_size=nfused, gpu_mode=0)
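The idea, roughly (I haven't tried all combinations yet), is to pass each option set to the simulator and then into the model:

import qsimcirq

# Sketch only: nfused / ncpu_threads are placeholder values, and
# generate_model_policy, qubits, n_layers, n_actions, observables
# are the ones defined in the reproduce section above.
nfused = 2
ncpu_threads = 8
ops = qsimcirq.QSimOptions(use_gpu=False, cpu_threads=ncpu_threads, max_fused_gate_size=nfused)
backend = qsimcirq.QSimSimulator(qsim_options=ops)
model = generate_model_policy(qubits, n_layers, n_actions, 1.0, observables, backend=backend)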

I'm now trying on a GPU using these images, but now I face other problems that may require compiling TensorFlow from source.

Another question: is there any particular reason TFQ has pinned dependencies (as opposed to a range)? Could it be used with cirq>0.13?

thank you

lockwo commented 1 year ago

GPU support for ops is super nascent (see https://github.com/tensorflow/quantum/pull/759); I'm not 100% sure what state it is in, so if you encounter errors, be sure to share them. Regarding pinned dependencies, there is a PR to update that (https://github.com/tensorflow/quantum/pull/697), but I don't know if it will happen.

jccalvojackson commented 1 year ago

thank you very much!

I've added the error I get on GPU.

Regarding the original issue about getting stuck in CPU mode: I've run the tests in tensorflow_quantum/python/layers/high_level/controlled_pqc_test.py using qsimcirq.QSimSimulator(), and they all pass with no problem. This means TFQ is able to construct the corresponding operator. But somehow it gets stuck for the inputs of the QRL model, whereas it does not for the inputs of the tests.
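For context, the kind of minimal case I mean looks roughly like the setup in those tests: a small ControlledPQC with the explicit backend and a tiny batch of inputs. This is just a sketch of that shape, not taken from the test file itself:

import cirq
import sympy
import qsimcirq
import tensorflow as tf
import tensorflow_quantum as tfq

bit = cirq.GridQubit(0, 0)
alpha = sympy.Symbol("alpha")
model_circuit = cirq.Circuit(cirq.rx(alpha)(bit))

# A batch of (empty) input circuits plus a batch of controller values,
# roughly the shape of inputs the controlled_pqc tests feed in.
layer = tfq.layers.ControlledPQC(model_circuit, cirq.Z(bit), backend=qsimcirq.QSimSimulator())
circuits = tfq.convert_to_tensor([cirq.Circuit()])
controller = tf.constant([[0.5]])
print(layer([circuits, controller]))

Something like this goes through, so the difference must be in the size or shape of the batches the QRL model feeds into the op.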