awslabs / gluonts

Probabilistic time series modeling in Python
https://ts.gluon.ai
Apache License 2.0

NaN gradient in MQCNN when using GPU but works fine on CPU #1904

Open clianga opened 2 years ago

clianga commented 2 years ago

I'm using SageMaker Studio to train an MQCNN model. Under default layer settings the model runs without any error on a CPU instance, but once I switch to an 'ml.p3.2xlarge' instance and change ctx from 'cpu' to 'gpu', the loss in each epoch becomes NaN and training stops. I saw a similar issue in #501, but it seems it was never solved: https://github.com/awslabs/gluon-ts/issues/501. Here's the log from the CPU instance:

2022-02-23T12:29:49.006-08:00   [2022-02-23 20:29:48.987 ip-10-0-79-45.us-east-2.compute.internal:17 WARNING hook.py:944] var is not NDArray or list or tuple of NDArrays, module_name:forkingseq2seqtrainingnetwork0_quantileloss0 Symbol
2022-02-23T12:29:49.006-08:00   [2022-02-23 20:29:48.987 ip-10-0-79-45.us-east-2.compute.internal:17 WARNING hook.py:944] var is not NDArray or list or tuple of NDArrays, module_name:forkingseq2seqtrainingnetwork0_quantileloss0 Symbol
2022-02-23T12:29:51.007-08:00   0%| | 0/50 [00:00<?, ?it/s]
INFO:gluonts.trainer:Number of parameters in ForkingSeq2SeqTrainingNetwork: 144250
2022-02-23T12:30:23.017-08:00   30%|███ | 15/50 [00:10<00:24, 1.45it/s, epoch=1/50, avg_epoch_loss=4.67e+4]
60%|██████ | 30/50 [00:20<00:13, 1.45it/s, epoch=1/50, avg_epoch_loss=4.64e+4]
90%|█████████ | 45/50 [00:30<00:03, 1.46it/s, epoch=1/50, avg_epoch_loss=4.6e+4]
100%|██████████| 50/50 [00:33<00:00, 1.47it/s, epoch=1/50, avg_epoch_loss=4.58e+4]
2022-02-23T12:30:23.017-08:00   INFO:gluonts.trainer:Epoch[0] Elapsed time 33.986 seconds
2022-02-23T12:30:23.017-08:00   INFO:gluonts.trainer:Epoch[0] Evaluation metric 'epoch_loss'=45840.236167
2022-02-23T12:30:23.017-08:00   INFO:gluonts.trainer:Epoch[1] Learning rate is 0.0002

and here's the log from the GPU instance:

2022-02-23T12:23:23.558-08:00   WARNING:gluonts.trainer:Batch [49] of Epoch[0] gave NaN loss and it will be ignored
2022-02-23T12:23:23.558-08:00   WARNING:gluonts.trainer:Batch [50] of Epoch[0] gave NaN loss and it will be ignored
2022-02-23T12:23:23.558-08:00   100%|██████████| 50/50 [00:04<00:00, 11.05it/s, epoch=1/50, avg_epoch_loss=nan]
2022-02-23T12:23:23.558-08:00   INFO:gluonts.trainer:Epoch[0] Elapsed time 4.526 seconds
2022-02-23T12:23:23.559-08:00   INFO:gluonts.trainer:Epoch[0] Evaluation metric 'epoch_loss'=nan
2022-02-23T12:23:23.559-08:00   /usr/local/lib/python3.6/site-packages/mxnet/gluon/block.py:1207: UserWarning: "gluonts.model.seq2seq._forking_network.ForkingSeq2SeqTrainingNetwork(cardinality=[1], context_length=730, decoder=gluonts.mx.block.decoder.ForkingMLPDecoder(dec_len=30, final_dim=30, hidden_dimension_sequence=[], prefix="decoder_"), distr_output=None, dtype=numpy.float32, embedding_dimension=[1], enc2dec=gluonts.mx.block.enc2dec.FutureFeatIntegratorEnc2Dec(), encoder=gluonts.mx.block.encoder.HierarchicalCausalConv1DEncoder(channels_seq=[30, 30, 30], dilation_seq=[1, 3, 9], kernel_size_seq=[7, 3, 3], prefix="encoder_", use_dynamic_feat=True, use_residual=True, use_static_feat=True), num_forking=730, quantile_output=gluonts.mx.block.quantile_output.QuantileOutput(quantile_weights=None, quantiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]), scaling=False, scaling_decoder_dynamic_feature=False)" is being hybridized while still having forward hook/pre-hook. If "gluonts.model.seq2seq._forking_network.ForkingSeq2SeqTrainingNetwork(cardinality=[1], context_length=730, decoder=gluonts.mx.block.decoder.ForkingMLPDecoder(dec_len=30, final_dim=30, hidden_dimension_sequence=[], prefix="decoder_"), distr_output=None, dtype=numpy.float32, embedding_dimension=[1], enc2dec=gluonts.mx.block.enc2dec.FutureFeatIntegratorEnc2Dec(), encoder=gluonts.mx.block.encoder.HierarchicalCausalConv1DEncoder(channels_seq=[30, 30, 30], dilation_seq=[1, 3, 9], kernel_size_seq=[7, 3, 3], prefix="encoder_", use_dynamic_feat=True, use_residual=True, use_static_feat=True), num_forking=730, quantile_output=gluonts.mx.block.quantile_output.QuantileOutput(quantile_weights=None, quantiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]), scaling=False, scaling_decoder_dynamic_feature=False)" is a child of HybridBlock, the hooks will not take effect. .format(block=self))
2022-02-23T12:23:23.559-08:00   Traceback (most recent call last):
  File "train2.py", line 273, in <module>
    train(args.bucket, args.seq, args.algo, args.freq, args.prediction_length, args.context_length, args.epochs, args.learning_rate, args.dropout_rate, args.hybridize, args.num_batches_per_epoch, args.channels_seq, args.kernel_size_seq, args.dilation_seq, args.decoder_mlp_dim_seq)
  File "train2.py", line 159, in train
    predictor = estimator.train(training_data=training_data)
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/model/estimator.py", line 201, in train
    cache_data=cache_data,
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/model/estimator.py", line 173, in train_model
    validation_iter=validation_data_loader,
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/trainer/_base.py", line 490, in __call__
    ctx=self.ctx,
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/trainer/callback.py", line 283, in on_epoch_end
    return all(self._exec("on_epoch_end", *args, **kwargs))
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/trainer/callback.py", line 255, in _exec
    for callback in self.callbacks
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/trainer/callback.py", line 255, in <listcomp>
    for callback in self.callbacks
  File "/usr/local/lib/python3.6/site-packages/gluonts/mx/trainer/learning_rate_scheduler.py", line 233, in on_epoch_end
    "Got NaN in first epoch. Try reducing initial learning rate."
2022-02-23T12:23:23.559-08:00   gluonts.core.exception.GluonTSUserError: Got NaN in first epoch. Try reducing initial learning rate.
2022-02-23T12:23:23.559-08:00   2022-02-23 20:23:23,398 sagemaker-training-toolkit ERROR ExecuteUserScriptError:

The parameters of the two training jobs are exactly the same, except for ctx = 'cpu' vs. ctx = 'gpu'.

My code is based on this AWS blog post, with only a few parameter settings changed: https://aws.amazon.com/blogs/machine-learning/training-debugging-and-running-time-series-forecasting-models-with-the-gluonts-toolkit-on-amazon-sagemaker/

Please help, thank you!
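For context, the failing learning-rate callback suggests lowering the initial learning rate. With the gluonts 0.8 mx Trainer, that experiment would look roughly like the sketch below; the concrete values (learning_rate=1e-4, clip_gradient=1.0) are illustrative assumptions to try, not a confirmed fix for the GPU-only NaN:

from gluonts.model.seq2seq import MQCNNEstimator
from gluonts.mx.trainer import Trainer

# Sketch only: same setup as in the report, but with a smaller initial
# learning rate and tighter gradient clipping, as the GluonTSUserError suggests.
estimator = MQCNNEstimator(
    freq="D",
    prediction_length=30,
    context_length=730,
    trainer=Trainer(
        ctx="gpu",
        epochs=10,
        num_batches_per_epoch=10,
        learning_rate=1e-4,  # lowered from the 1e-3 used in the failing run
        clip_gradient=1.0,   # assumed value, tighter than the default of 10.0
        hybridize=False,
    ),
)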

lostella commented 2 years ago

Also similar to #950

@clianga do you know what versions of gluonts, mxnet and CUDA you are running?

clianga commented 2 years ago

Hi @lostella, thank you for the quick feedback. I just checked the versions by adding print(gluonts.__version__) and print(mxnet.__version__) to the .py file and reading the logs through CloudWatch; both showed version 0.8.1. For CUDA, I'm not familiar with GPU settings. Could you tell me how I can print the version?

lostella commented 2 years ago

@clianga the name of the MXNet package that you have installed should tell you: for example, mxnet-cu92mkl means that the package has CUDA 9.2 support with MKL-DNN enabled
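If it helps, the versions can also be printed from inside the training script. A minimal diagnostic sketch: pip freeze shows which mxnet-cuXX build is installed, and nvidia-smi reports the driver-side CUDA version on the instance:

import subprocess
import mxnet as mx

print("mxnet:", mx.__version__)        # MXNet build version
print("gpus:", mx.context.num_gpus())  # GPUs visible to MXNet
# The installed package name (e.g. mxnet-cu102) reveals the CUDA build:
print(subprocess.check_output(["pip", "freeze"]).decode())
# Driver-side CUDA version and GPU status on the instance:
print(subprocess.check_output(["nvidia-smi"]).decode())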

clianga commented 2 years ago

Hi @lostella, I checked the .py file used to run this job. It seems I never pip install MXNet; maybe the SageMaker instance itself has MXNet installed? The parts of my .py file that involve MXNet are:

import os
os.system('pip install pandas')
os.system('pip install gluonts')
import pandas as pd
import pathlib
import gluonts
import numpy as np
import argparse
import json
import boto3
from mxnet.context import num_gpus, gpu, cpu
from gluonts.dataset.util import to_pandas
from gluonts.model.deepar import DeepAREstimator
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.model.lstnet import LSTNetEstimator
from gluonts.model.seq2seq import MQCNNEstimator
from gluonts.model.transformer import TransformerEstimator
from gluonts.evaluation.backtest import make_evaluation_predictions, backtest_metrics
from gluonts.evaluation import Evaluator
from gluonts.model.predictor import Predictor
from gluonts.dataset.common import ListDataset
from gluonts.dataset.field_names import FieldName
from gluonts.mx.trainer import Trainer
from gluonts.dataset.multivariate_grouper import MultivariateGrouper
from smdebug.mxnet import Hook

s3 = boto3.client("s3")

def uploadDirectory(model_dir, prefix, bucket):
    # Recursively upload every file under model_dir to s3://bucket/prefix
    for root, dirs, files in os.walk(model_dir):
        for file in files:
            print(os.path.join(root, file))
            print(prefix + file)
            s3.upload_file(os.path.join(root, file), bucket, prefix + file)
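Note that this script never installs MXNet itself, so it runs whatever build the SageMaker MXNet 1.7.0 container provides. If a mismatched CUDA build were the culprit, the entry point could pin one explicitly; the package name below (mxnet-cu102) is an assumption and must match the CUDA version the instance actually reports:

import os

# Assumption: the GPU image is built against CUDA 10.2, so mxnet-cu102
# would be the matching package name. Verify with nvidia-smi first.
os.system('pip install mxnet-cu102==1.7.0')
os.system('pip install gluonts==0.8.1')  # pin the gluonts version seen in the logs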

In SageMaker Studio, I run:

import sagemaker
from sagemaker.mxnet import MXNet

mxnet_estimator = MXNet(entry_point='blog_train_algos.py',
                        role=sagemaker.get_execution_role(),
                        instance_type='ml.p3.2xlarge',
                        instance_count=1,
                        framework_version='1.7.0', 
                        py_version='py3',
                        hyperparameters={'bucket': bucket,
                            'seq': trial.trial_name,
                            'algo': "seq2seq",             
                            'freq': "D", 
                            'prediction_length': 30, 
                            'epochs': 10,
                            'learning_rate': 1e-3,
                            'hybridize': False,
                            'num_batches_per_epoch': 10,
                         })
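If the learning rate really is the issue, the quickest experiment from the notebook side would be lowering the 'learning_rate' hyperparameter here (e.g. from 1e-3 to an illustrative 1e-4) and rerunning the GPU job; whether that resolves the GPU-only NaN remains open in this issue.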