googleapis / python-aiplatform

A Python SDK for Vertex AI, a fully managed, end-to-end platform for data science and machine learning.
Apache License 2.0

Epochs with metrics = "nan" when using Vertex AI python SDK #1660

Closed hugoferrero closed 1 year ago

hugoferrero commented 2 years ago

Hi. I'm training a model using the Vertex AI Training service. Training works fine when I use the console ("Create" button), but when I try to train the model using the SDK, every epoch reports "nan" for the training metrics. I'm using a script from this tutorial: https://codelabs.developers.google.com/codelabs/vertex-ai-custom-models#3

This is the python script:

# This will be replaced with your bucket name after running the `sed` command in the tutorial
BUCKET = "gs://hf-exp/vpoc"

import numpy as np
import pandas as pd
import pathlib
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

print(tf.__version__)

"""## The Auto MPG dataset

The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).

### Get the data
First download the dataset.
"""

"""Import it using pandas"""

dataset_path = "gs://hf-exp/vpoc/mpg/data/auto-mpg.csv"
dataset = pd.read_csv(dataset_path, na_values = "?")
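# Note (not in the original script): reading a gs:// path with pandas relies on
# fsspec/gcsfs being available in the environment where this script runs.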

shape = dataset.shape

print(shape)

dataset.tail()

"""### Clean the data

The dataset contains a few unknown values.
"""

dataset.isna().sum()

"""To keep this initial tutorial simple drop those rows."""

dataset = dataset.dropna()

"""The `"origin"` column is really categorical, not numeric. So convert that to a one-hot:"""

dataset['origin'] = dataset['origin'].map({1: 'USA', 2: 'Europe', 3: 'Japan'})

dataset = pd.get_dummies(dataset, prefix='', prefix_sep='')
dataset.tail()

"""### Split the data into train and test

Now split the dataset into a training set and a test set.

We will use the test set in the final evaluation of our model.
"""

train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)

"""### Inspect the data

Have a quick look at the joint distribution of a few pairs of columns from the training set.

Also look at the overall statistics:
"""

train_stats = train_dataset.describe()
train_stats.pop("mpg")
train_stats = train_stats.transpose()
train_stats

"""### Split features from labels

Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
"""

train_labels = train_dataset.pop('mpg')
test_labels = test_dataset.pop('mpg')

"""### Normalize the data

Look again at the `train_stats` block above and note how different the ranges of each feature are.

It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input.

Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on.
"""

def norm(x):
  return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)

"""This normalized data is what we will use to train the model.

Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier.  That includes the test set as well as live data when the model is used in production.

## The model

### Build the model

Let's build our model. Here, we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model building steps are wrapped in a function, `build_model`, since we'll create a second model, later on.
"""

def build_model():
  model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
  ])

  optimizer = tf.keras.optimizers.RMSprop(0.001)

  model.compile(loss='mse',
                optimizer=optimizer,
                metrics=['mae', 'mse'])
  return model

model = build_model()

"""### Inspect the model

Use the `.summary` method to print a simple description of the model
"""

model.summary()

"""Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on it.

It seems to be working, and it produces a result of the expected shape and type.

### Train the model

Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.

Visualize the model's training progress using the stats stored in the `history` object.

This graph shows little improvement, or even degradation in the validation error after about 100 epochs. Let's update the `model.fit` call to automatically stop training when the validation score doesn't improve. We'll use an *EarlyStopping callback* that tests a training condition for  every epoch. If a set amount of epochs elapses without showing improvement, then automatically stop the training.

You can learn more about this callback [here](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping).
"""

model = build_model()

EPOCHS = 10

# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

early_history = model.fit(normed_train_data, train_labels, 
                    epochs=EPOCHS, validation_split = 0.2, 
                    callbacks=[early_stop])

# Export model and save to GCS
model.save(BUCKET + '/mpg/model')
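# Note (not in the original script): the gs:// path above works because TensorFlow
# includes a GCS filesystem implementation, so model.save can write straight to the bucket.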

I'm trying to train this model using the SDK ("CustomContainerTrainingJob" and "CustomTrainingJob", google-cloud-aiplatform version 1.13.0). The logs are the same in both cases:

[attached log screenshot: l1]

This is the code in every case:

from typing import List, Optional, Union

from google.cloud import aiplatform


def create_training_pipeline_custom_container_job_sample(
    project: str,
    location: str,
    staging_bucket: str,
    display_name: str,
    container_uri: str,
    model_serving_container_image_uri: Optional[str] = None,
    dataset_id: Optional[str] = None,
    model_display_name: Optional[str] = None,
    args: Optional[List[Union[str, float, int]]] = None,
    replica_count: int = 1,
    machine_type: str = "n1-standard-4",
    accelerator_type: str = "ACCELERATOR_TYPE_UNSPECIFIED",
    accelerator_count: int = 0,
    training_fraction_split: float = 0.8,
    validation_fraction_split: float = 0.1,
    test_fraction_split: float = 0.1,
    sync: bool = True,
):
    aiplatform.init(project=project, location=location, staging_bucket=staging_bucket)

    job = aiplatform.CustomContainerTrainingJob(
        display_name=display_name,
        container_uri=container_uri,
        model_serving_container_image_uri=model_serving_container_image_uri,
    )

    # This example uses an ImageDataset, but you can use another type
    dataset = aiplatform.ImageDataset(dataset_id) if dataset_id else None

    model = job.run(
        replica_count=replica_count,
        machine_type=machine_type,
    )

    model.wait()

    print(model.display_name)
    print(model.resource_name)
    print(model.uri)
    return model

create_training_pipeline_custom_container_job_sample(
    project="teco-prod-adam-dev-826c",
    location="us-east4",
    staging_bucket="gs://hf-exp/vpoc/mpg/stg_bkt",
    display_name="p8",
    container_uri="gcr.io/teco-prod-adam-dev-826c/mpg:v3",
)

def custom_training_job_sample(
    project: str,
    location: str,
    bucket: str,
    display_name: str,
    script_path: str,
    container_uri: str,
    replica_count: int,
    model_serving_container_image_uri: Optional[str] = None
):
    aiplatform.init(project=project, location=location, staging_bucket=bucket)

    job = aiplatform.CustomTrainingJob(
        display_name=display_name,
        script_path=script_path,
        container_uri=container_uri,

    )

    model = job.run(
         replica_count=replica_count
    )

    return model

custom_training_job_sample(
    project="teco-prod-adam-dev-826c",
    location="us-east4",
    bucket="gs://hf-exp/vpoc/mpg/stg_bkt",
    display_name="p11",
    script_path="train.py",
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-9:latest",
    replica_count=1,
)

And this is the log I get when using the console ("Create" button): [attached log screenshot: l2]

Any suggestions? Thanks in advance.

rosiezou commented 1 year ago

@TheMichaelHu is the code owner of CustomContainerTrainingJob. Michael, PTAL.

sasha-gitg commented 1 year ago

@hugoferrero Please share the underlying resource protos for both jobs and indicate the source of creation for each. You can get these through the SDK:

aiplatform.CustomContainerTrainingJob.get(resource_name).gca_resource
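
For example, a minimal sketch of that call (the project, location, and pipeline resource name below are placeholders, not values from this thread):

from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-east4")

# Full resource name of the training pipeline, e.g. copied from the job's page in the console.
resource_name = "projects/PROJECT_NUMBER/locations/us-east4/trainingPipelines/PIPELINE_ID"

# The underlying TrainingPipeline proto shows the exact spec each job was created with,
# which makes it easy to diff the SDK-created job against the console-created one.
print(aiplatform.CustomContainerTrainingJob.get(resource_name).gca_resource)
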
hugoferrero commented 1 year ago

Hi @sasha-gitg. The problem is solved. For create_training_pipeline_custom_container_job_sample, I upgraded the TensorFlow version of the container image to TF 2.9, and for custom_training_job_sample the problem resolved itself. Thank you anyway.
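
For anyone who hits the same symptom: the fix reported above was upgrading the container's TensorFlow to 2.9, which suggests a version mismatch between the training container and the training script. A small, hedged guard one could add at the top of a training script to fail fast on such a mismatch (the 2.9 minimum below is just this report's case, not something required by the SDK):

import tensorflow as tf

# Fail fast if the container's TensorFlow is older than what the script expects
# (TF 2.9 in this report's case).
major, minor = (int(x) for x in tf.__version__.split(".")[:2])
assert (major, minor) >= (2, 9), (
    f"Expected TensorFlow >= 2.9 in the training container, got {tf.__version__}"
)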