google / qhbm-library

Quantum Hamiltonian-Based Models built on TensorFlow Quantum
https://qhbm-library.readthedocs.io/en/latest/
Apache License 2.0

Performance issue in baselines/train.py #246

Open · DLPerf opened this issue 1 year ago

DLPerf commented 1 year ago

Hello! Our static bug checker has found a performance issue in baselines/train.py: train_step is called repeatedly in a for loop, but a tf.function-decorated function, train_inner_step, is defined and called inside train_step.

In that case, every call to train_step creates a new tf.function object for train_inner_step, so a new graph is traced on each iteration, which can trigger tf.function retracing warnings.

Similarly, train_inner_step is defined inside train_model, and that outer function is called repeatedly here and here.

Here is the TensorFlow documentation that describes this behavior.
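
As a small, standalone illustration (not code from this repository): the Python body of a tf.function runs only while a graph is being traced, so a plain print call shows how often tracing happens.

import tensorflow as tf

def outer(x):
    # A new tf.function object is created on each call to outer,
    # so TensorFlow traces a fresh graph every time.
    @tf.function
    def inner(y):
        print("tracing inner")  # runs only during tracing
        return y * 2
    return inner(x)

for _ in range(3):
    outer(tf.constant(1.0))  # "tracing inner" is printed three times

@tf.function
def inner_hoisted(y):
    print("tracing inner_hoisted")  # printed once
    return y * 2

for _ in range(3):
    inner_hoisted(tf.constant(1.0))  # traced on the first call only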

In short, for better efficiency it is preferable to write:

@tf.function
def inner():
    # Decorated once at module level: traced on the first call and reused afterwards.
    pass

def outer():
    inner()

rather than:

def outer():
    # A new tf.function object is created on every call to outer,
    # so its graph is retraced each time.
    @tf.function
    def inner():
        pass
    inner()
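
For concreteness, a restructured training step might look like the sketch below; the model, optimizer, and loss are hypothetical placeholders, not the actual objects used in baselines/train.py.

import tensorflow as tf

# Hypothetical stand-ins for the real objects in baselines/train.py.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
optimizer = tf.keras.optimizers.SGD(0.01)

@tf.function  # defined once at module level; traced on the first call only
def train_inner_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def train_step(x, y):
    # Plain Python function; calling it in a loop reuses the traced graph.
    return train_inner_step(x, y)

for _ in range(10):
    x = tf.random.normal([8, 4])
    y = tf.random.normal([8, 1])
    train_step(x, y)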

Looking forward to your reply.

DLPerf commented 1 year ago

But some variables in the inner function depend on the outer function's scope, so the code may become more complex if this change is made. Is it necessary? Do you have any ideas?
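
One way to keep the change small when the inner function closes over objects created in the outer function is to pass those objects in as arguments instead of relying on module-level globals as in the earlier sketch. tf.function only retraces when it sees a new input signature or a new Python object, so reusing the same model and optimizer keeps a single traced graph. A rough, hypothetical sketch (not the actual variables in train_model):

import tensorflow as tf

@tf.function  # lives at module level; traced once per distinct model/optimizer pair
def train_inner_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def train_model(model, optimizer, dataset):
    # Everything the inner step needs is passed explicitly instead of being
    # captured from this function's local scope.
    for x, y in dataset:
        train_inner_step(model, optimizer, x, y)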