tensorflow / tfx

TFX is an end-to-end platform for deploying production ML pipelines
https://tensorflow.org/tfx
Apache License 2.0

R2Score Metric is incompatible with Evaluator Component #6817

Open TomsCodingCode opened 4 months ago

TomsCodingCode commented 4 months ago


Describe the expected behavior

The Evaluator evaluates the model with all standard metrics.

Standalone code to reproduce the issue

```python
import os

import pandas as pd
import tensorflow_model_analysis as tfma
import tfx.v1 as tfx

try:
  url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
  column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model_Year', 'Origin']

  dataset = pd.read_csv(url, names=column_names,
                          na_values='?', comment='\t',
                          sep=' ', skipinitialspace=True)
  dataset = dataset.dropna()
  dataset -= dataset.mean()
  dataset /= dataset.std()
  os.mkdir('./data')
  dataset.to_csv('data/data.csv', index=False)
except Exception:  # e.g. './data' already exists from a previous run
  pass

with open('./trainer.py', 'w') as f:
  f.write("""

import keras
import tensorflow as tf
import tfx.v1 as tfx
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx_bsl.public import tfxio

column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'Model_Year', 'Origin']

def _input_fn(file_pattern: list[str],
              data_accessor: tfx.components.DataAccessor,
              schema: schema_pb2.Schema,
              batch_size: int = 200) -> tf.data.Dataset:
  # from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple#create_a_pipeline

  return data_accessor.tf_dataset_factory(
      file_pattern,
      tfxio.TensorFlowDatasetOptions(
          batch_size=batch_size, label_key='MPG'),
      schema=schema).repeat()

def run_fn(fn_args: tfx.components.FnArgs):
  train_dataset = _input_fn(
      fn_args.train_files,
      fn_args.data_accessor,
      tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema_pb2.Schema()))

  layers = [keras.Input(shape=(1,), name=n) for n in column_names if n != 'MPG']

  linear_model = keras.layers.concatenate(layers)

  linear_model = keras.Model(inputs=layers, outputs=[keras.layers.Dense(units=1)(linear_model)])

  linear_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error',
    metrics=[keras.metrics.R2Score()]
    )

  linear_model.fit(
    train_dataset,
    steps_per_epoch=1000,
    epochs=1)

  linear_model.save(fn_args.serving_model_dir, save_format='tf')

""")

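# Assemble a minimal local pipeline: ingest the CSV, compute statistics,
# infer a schema, train the model, then evaluate it.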
example_gen = tfx.components.CsvExampleGen(input_base='data')
statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])
schema_gen = tfx.components.SchemaGen(statistics=statistics_gen.outputs['statistics'])

trainer = tfx.components.Trainer(
  module_file='trainer.py',
  examples=example_gen.outputs['examples'],
  schema=schema_gen.outputs['schema']
)

evaluator = tfx.components.Evaluator(
  examples=example_gen.outputs['examples'],
  schema=schema_gen.outputs['schema'],
  model=trainer.outputs['model'],
  eval_config=tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='MPG')],
    slicing_specs=[tfma.SlicingSpec()],
    metrics_specs=tfma.metrics.default_regression_specs()
  )
)

pipeline = tfx.dsl.Pipeline(
  pipeline_name='min',
  pipeline_root='min',
  metadata_connection_config=tfx.orchestration.metadata
    .sqlite_metadata_connection_config('mlmd.db'),
  components=[
    example_gen, statistics_gen, schema_gen,
    trainer,
    evaluator],
  enable_cache=False)

tfx.orchestration.LocalDagRunner().run(pipeline)
```


Other info / logs

I went down a debugging rabbit hole myself, and I think the issue is that the metrics container used by the model does not build the metrics it contains after the model is loaded back from disk. For most metrics this is fine, but R2Score adds extra weights during its build function, and those are missing, as can be seen in the error message: `You called set_weights(weights) on layer "r2_score" with a weight list of length 5, but the layer was expecting 1 weights.`
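The lazy build described above can be observed directly; a minimal sketch (the exact variable count is a version-dependent assumption, matching recent tf.keras releases):

```python
import tensorflow as tf

m = tf.keras.metrics.R2Score()
print(len(m.weights))  # 0: R2Score creates its state variables lazily in build()

# The first update triggers build(), which creates the additional variables
m.update_state(tf.constant([[1.0], [2.0]]), tf.constant([[0.8], [2.1]]))
print(len(m.weights))  # 5 on recent versions, matching the error message above
```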


lego0901 commented 4 months ago

Hi @TomsCodingCode, thanks for using TFX and for reporting the issue with a concrete standalone example!

The Evaluator standard component delegates its implementation to the TFMA package. Although I am not an expert on TFMA, at a glance the issue arises because the tf.keras.metrics.R2Score metric stores multiple internal variables (such as the sum of squares and the sample count), whereas TFMA usually expects metrics to have a simple, single-value state for serialization and aggregation. This mismatch causes the error you encountered.

To resolve this, you can use a custom metric wrapper, R2ScoreWrapper. The wrapper encapsulates the complex internal state of R2Score and exposes only the final value to TFMA, making it compatible with TFMA's serialization and aggregation mechanisms:

```python
class R2ScoreWrapper(tf.keras.metrics.Metric):
  def __init__(self, name="r2_score_wrapper", **kwargs):
    super().__init__(name=name, **kwargs)
    self.r2_score = tf.keras.metrics.R2Score()

  def update_state(self, y_true, y_pred, sample_weight=None):
    self.r2_score.update_state(y_true, y_pred, sample_weight)

  def result(self):
    return self.r2_score.result()

  def reset_state(self):
    self.r2_score.reset_state()

...
  linear_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error',
    metrics=[R2ScoreWrapper()]
    # metrics=[tf.keras.metrics.R2Score()]
    )
```

I found that a similar phenomenon occurs with metrics.F1Score too. Hope this works for you. Thanks!
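One caveat worth noting (an assumption on my part, not something confirmed in this thread): the repro saves with save_format='tf', and the SavedModel format restores custom metrics from traced functions. If you switch to the Keras-native format instead, the wrapper would likely need to be registered so Keras can deserialize it by name, for example:

```python
import tensorflow as tf

# Hypothetical registration for the Keras-native save format; the
# package name "Custom" is arbitrary. The SavedModel format used in
# the repro does not require this.
@tf.keras.utils.register_keras_serializable(package="Custom")
class R2ScoreWrapper(tf.keras.metrics.Metric):
  def __init__(self, name="r2_score_wrapper", **kwargs):
    super().__init__(name=name, **kwargs)
    self.r2_score = tf.keras.metrics.R2Score()

  def update_state(self, y_true, y_pred, sample_weight=None):
    self.r2_score.update_state(y_true, y_pred, sample_weight)

  def result(self):
    return self.r2_score.result()

  def reset_state(self):
    self.r2_score.reset_state()
```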

TomsCodingCode commented 4 months ago

That works perfectly fine as a workaround, thanks!

Is this a bug in TFMA then, or is this expected behaviour?