google / qhbm-library

Quantum Hamiltonian-Based Models built on TensorFlow Quantum
https://qhbm-library.readthedocs.io/en/latest/
Apache License 2.0
40 stars 15 forks

Utility function tests #153

Closed zaqqwerty closed 2 years ago

zaqqwerty commented 2 years ago

Add tests that run their checks both inside and outside eager mode.

Part of #136. Wrote a decorator for test functions that runs the function both inside and outside eager mode. Decorated all the utils tests with it.
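
For context, a decorator along these lines could look like the sketch below. This is only an illustration of the approach, not necessarily the code in this PR; the name eager_mode_toggle is made up here. It runs the decorated test twice, toggling whether any tf.function-wrapped code exercised by the test executes eagerly or as a traced graph.

import functools

import tensorflow as tf


def eager_mode_toggle(test_func):
  """Runs `test_func` with tf.functions executing eagerly, then traced."""

  @functools.wraps(test_func)
  def wrapper(*args, **kwargs):
    # First pass: force any tf.function hit by the test to run eagerly.
    tf.config.run_functions_eagerly(True)
    try:
      test_func(*args, **kwargs)
    finally:
      # Always restore the default so later tests see normal tracing.
      tf.config.run_functions_eagerly(False)
    # Second pass: the same code now runs as traced graphs.
    test_func(*args, **kwargs)

  return wrapper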

zaqqwerty commented 2 years ago

@farice wondering if you have any ideas on the TODO I added in this PR? It's most of the reason the PR is so short: that error shows up when I try to decorate many of the other tests in other modules.

farice commented 2 years ago

I think it's because we wouldn't usually create a Keras model/layer within a tf.function. For example, if you do this with an optimizer you may get a similar issue: https://stackoverflow.com/questions/68382842/tensorflow-cannot-find-variables-after-values-change

Hence, for example, with the energy_model_utils.VariableDot(..) construction, I suppose we would want to pull this out and only decorate a subroutine of the existing test case?
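
For illustration, the pattern being suggested might look roughly like this, using a stock Keras layer as a stand-in for energy_model_utils.VariableDot: construct the layer eagerly, and only wrap the computation in tf.function.

import tensorflow as tf


@tf.function
def build_and_apply(x):
  # Problematic pattern: the layer (and its tf.Variables) would be created
  # while tracing, which tf.function does not allow once it retraces.
  layer = tf.keras.layers.Dense(4)
  return layer(x)


# Safer pattern: construct the layer eagerly, trace only the computation.
layer = tf.keras.layers.Dense(4)


@tf.function
def apply_only(x):
  return layer(x)


print(apply_only(tf.ones([2, 3])).shape)  # (2, 4)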

zaqqwerty commented 2 years ago

After pulling the layer creation into setUp and localizing the decorator, I got a different error:

value = <tf.Tensor: shape=(1, 3), dtype=float32, numpy=array([[1., 5., 9.]], dtype=float32)>, dtype = None, name = None
as_ref = False, preferred_dtype = None, dtype_hint = None
ctx = <tensorflow.python.eager.context.Context object at 0x7f276fe30be0>
accepted_result_types = (<class 'tensorflow.python.framework.ops.Tensor'>,)

    @trace.trace_wrapper("convert_to_tensor")
    def convert_to_tensor(value,
                          dtype=None,
                          name=None,
                          as_ref=False,
                          preferred_dtype=None,
                          dtype_hint=None,
                          ctx=None,
                          accepted_result_types=(Tensor,)):
      """Implementation of the public convert_to_tensor."""
      # TODO(b/142518781): Fix all call-sites and remove redundant arg
      preferred_dtype = preferred_dtype or dtype_hint
      if isinstance(value, EagerTensor):
        if ctx is None:
          ctx = context.context()
        if not ctx.executing_eagerly():
          graph = get_default_graph()
          if not graph.building_function:
>           raise RuntimeError("Attempting to capture an EagerTensor without "
                               "building a function.")
E           RuntimeError: Attempting to capture an EagerTensor without building a function.

../../../.cache/pypoetry/virtualenvs/qhbmlib-yboeJX8V-py3.9/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:1580: RuntimeError

Not sure what it means yet, but the building_function attribute being False means the default graph doesn't represent a function? https://www.tensorflow.org/api_docs/python/tf/Graph#attributes
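
For reference, that RuntimeError can be reproduced on its own (a standalone illustration of the failure mode, not the test code): an eagerly created tensor gets captured while the default graph is a plain graph rather than a function graph, so building_function is False.

import tensorflow as tf

eager_value = tf.constant([[1., 5., 9.]])  # an EagerTensor, created eagerly

with tf.Graph().as_default():
  # Inside this block the default graph is not a FuncGraph, so
  # `building_function` is False and capturing the eager tensor fails.
  try:
    tf.convert_to_tensor(eager_value)
  except RuntimeError as e:
    print(e)  # Attempting to capture an EagerTensor without building a function.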

farice commented 2 years ago

Figured it out - the kernel var needs to be constructed using self.add_weight within the scope of build. Using tf.Variable is fine in __init__.

https://keras.io/guides/making_new_layers_and_models_via_subclassing/

zaqqwerty commented 2 years ago

Still can't get it to work when I initialize the variable in the __init__

farice commented 2 years ago

When I got it to work I used self.add_weight, keeping things in build.

farice commented 2 years ago

Here's what I had, for completeness:

# Imports assumed for context; the snippet as posted omitted them, and the
# exact module paths below are a guess based on the identifiers used.
import tensorflow as tf

from tensorflow_probability.python.internal import test_util as tfp_test_util

from qhbmlib import energy_model_utils


class VariableDot(tf.keras.layers.Layer):
  """Utility layer for dotting input with a same-sized variable."""

  def __init__(self,
               initializer: tf.keras.initializers.Initializer = tf.keras
               .initializers.RandomUniform()):
    """Initializes a VariableDot layer.

    Args:
      initializer: A `tf.keras.initializers.Initializer` which specifies how to
        initialize the values of the parameters.
    """
    super().__init__()
    self._initializer = initializer

  def build(self, input_shape):
    """Initializes the internal variables."""
    # Create the kernel in `build` via `add_weight`, using the initializer
    # supplied at construction time.
    self.kernel = self.add_weight(
        shape=(input_shape[-1],),
        initializer=self._initializer,
        trainable=True)

  def call(self, inputs):
    """Returns the dot product between the inputs and this layer's variables."""
    return tf.reduce_sum(inputs * self.kernel, -1)


@tfp_test_util.test_all_tf_execution_regimes
class VariableDotTest(tfp_test_util.TestCase):
  """Tests the VariableDot layer."""

  def test_layer(self):
    """Confirms the layer dots with inputs correctly."""

    constant = 2.5
    actual_layer_constant = energy_model_utils.VariableDot(
        tf.keras.initializers.Constant(constant))
    actual_layer_constant.build([None, 3])

    actual_layer_default = energy_model_utils.VariableDot()
    actual_layer_default.build([None, 3])

    inputs_list = [[1, 5, 9]]
    inputs = tf.constant(inputs_list, dtype=tf.float32)

    actual_outputs = actual_layer_constant(inputs)
    expected_outputs = tf.math.reduce_sum(inputs * constant, -1)
    self.assertAllEqual(actual_outputs, expected_outputs)

    actual_outputs = actual_layer_default(inputs)
    expected_outputs = []
    for inner in inputs_list:
      inner_sum = []
      for i, k in zip(inner, actual_layer_default.kernel.numpy()):
        inner_sum.append(i * k)
      expected_outputs.append(sum(inner_sum))
    self.assertAllClose(actual_outputs, expected_outputs)

zaqqwerty commented 2 years ago

@farice I wrote up the decorator we discussed; it finally seems to be working this way.
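
For illustration only, combining the pieces from this thread (the VariableDot definition from the snippet above, and a hypothetical eager_mode_toggle decorator like the one sketched after the PR description), a utils test might end up shaped roughly like this; it is a sketch of the pattern, not the merged code.

import tensorflow as tf


class VariableDotToggleTest(tf.test.TestCase):
  """Sketch: exercise VariableDot both eagerly and inside tf.function."""

  def setUp(self):
    super().setUp()
    self.constant = 2.5
    # Layer construction (and variable creation) stays eager, outside tf.function.
    self.layer = VariableDot(tf.keras.initializers.Constant(self.constant))
    self.layer(tf.zeros([1, 3]))  # build the layer eagerly

  @eager_mode_toggle  # hypothetical decorator; see the sketch near the top
  def test_layer(self):
    dot = tf.function(self.layer)
    inputs = tf.constant([[1., 5., 9.]])
    expected = tf.math.reduce_sum(inputs * self.constant, -1)
    self.assertAllClose(dot(inputs), expected)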