fernandocamargoai opened this issue 4 years ago
Maybe lower num_batches_per_epoch? I guess learning_rate is already low, but num_batches_per_epoch is high, so during the first epoch of training the distribution arguments may have run into numerical problems when calculating the log prob.
@samosun, my logic in increasing the num_batches_per_epoch is that I think each time-series should have a more or less equal chance of having a slice of it appear in each epoch. I mean, if I have a batch size of 32, 128976 / 32 = 4030.5. I'm actually using num_batches_per_epoch=4000 right now (that number is because I had 300k+ time-series before and now I'm filtering some of them). Doesn't it make sense? Because I imagine that with a small num_batches_per_epoch, a given time-series could appear once in the first epoch and then again only several epochs later, which doesn't make sense to me.
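In code, that reasoning is just the following back-of-the-envelope calculation (a trivial sketch):
import math

# One "epoch" should see roughly one slice from each of the ~129k series.
num_series = 128976
batch_size = 32
print(math.ceil(num_series / batch_size))  # 4031; I round it to 4000 in practice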
And I don't understand how it would create numerical problems. Isn't the loss computed per batch anyway? In my understanding, gradient descent doesn't even "know" about the concept of epochs. Only we know, because that's when we take the avg_loss and it's a possible stopping point for the training.
In my understanding, gradient descent doesn't even "know" about the concept of epochs.
Yes, I think you are right: num_batches_per_epoch is not the root cause. I thought that if the arguments mu, alpha of NegativeBinomial were projected onto "boundary values" like 0, the log prob calculation would fail, but I noticed that this problem was already addressed by adding an epsilon, see https://github.com/awslabs/gluon-ts/blob/master/src/gluonts/distribution/neg_binomial.py#L114
So there may be other mysteries to be found if it's reproducible:
import mxnet as mx
from gluonts.distribution import NegativeBinomialOutput

distr_output = NegativeBinomialOutput()
distr = distr_output.distribution(
    (mx.nd.array([0]),  # if mu=0
     mx.nd.array([1]))
)
x = mx.nd.array([0])
ll = distr.log_prob(x)
print(ll)  # output: nan
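For context, the kind of epsilon-clamping referenced above looks roughly like this (illustrative value only; the exact constant and expression used in GluonTS may differ):
import mxnet as mx

eps = 1e-10  # illustrative epsilon
mu = mx.nd.array([0.0])
alpha = mx.nd.array([1.0])

# Clamping mu away from zero keeps log(mu) finite, so a log-likelihood term like
# x * log(mu * alpha / (1 + mu * alpha)) no longer produces 0 * (-inf) = nan when mu == 0.
mu_safe = mx.nd.maximum(mu, eps)
print(mx.nd.log(mu))       # -inf
print(mx.nd.log(mu_safe))  # finite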
@samosun, I think I can confirm that it only happens with hybridize=True. I left it training with hybridize=False and it's in the 22nd epoch now without NaN. Also, the model converges much faster with hybridize=False (0.147 at the first epoch vs 0.28 at the 4th epoch, after which the NaN happened). There shouldn't be any difference besides speed with hybridize, but it seems that something else is happening.
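For reference, this is roughly how I'm toggling it (a sketch; the import path may differ between GluonTS versions, and other arguments are omitted):
from gluonts.trainer import Trainer

trainer = Trainer(
    epochs=4,
    batch_size=32,
    num_batches_per_epoch=4000,
    learning_rate=1e-4,
    hybridize=False,  # workaround: avoids the NaN here, at the cost of speed
)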
@fernandocamargoti are you using mxnet 1.6.0? And the DeepAREstimator?
It’s very weird indeed that this shows up with hybridize=True only. One place where things could go wrong is this https://github.com/awslabs/gluon-ts/blob/0d963f7dc55ef866d86e33633a28d57dfab33adb/src/gluonts/distribution/neg_binomial.py#L114 but that should fail with and without hybridization, unless some subtle bug in mxnet is involved.
TARGET: Count values, with a lot of zeros
What’s the fraction of zero values, roughly? It would be nice if we could reproduce the issue.
@lostella, yes, I'm using DeepAREstimator with mxnet 1.6.0. I'll get the fraction and get back to you today.
Look at how much faster the training with hybridize=False converges:
100%|###################################################################################################################################################################| 4000/4000 [58:16<00:00, 1.14it/s, epoch=1/4, avg_epoch_loss=0.219]
2020-05-19 21:53:41,841 : INFO : Epoch[0] Elapsed time 3496.287 seconds
2020-05-19 21:53:41,842 : INFO : Epoch[0] Evaluation metric 'epoch_loss'=0.218906
2020-05-19 21:53:41,853 : INFO : Epoch[1] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:31<00:00, 1.12it/s, epoch=2/4, avg_epoch_loss=0.164]
2020-05-19 22:53:13,262 : INFO : Epoch[1] Elapsed time 3571.409 seconds
2020-05-19 22:53:13,262 : INFO : Epoch[1] Evaluation metric 'epoch_loss'=0.164086
2020-05-19 22:53:13,273 : INFO : Epoch[2] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:35<00:00, 1.12it/s, epoch=3/4, avg_epoch_loss=0.155]
2020-05-19 23:52:48,752 : INFO : Epoch[2] Elapsed time 3575.479 seconds
2020-05-19 23:52:48,752 : INFO : Epoch[2] Evaluation metric 'epoch_loss'=0.154580
2020-05-19 23:52:48,761 : INFO : Epoch[3] Learning rate is 0.0001
100%|###################################################################################################################################################################| 4000/4000 [59:29<00:00, 1.12it/s, epoch=4/4, avg_epoch_loss=0.151]
2020-05-20 00:52:18,165 : INFO : Epoch[3] Elapsed time 3569.404 seconds
2020-05-20 00:52:18,166 : INFO : Epoch[3] Evaluation metric 'epoch_loss'=0.150852
2020-05-20 00:52:18,180 : INFO : Loading parameters from best epoch (3)
2020-05-20 00:52:18,189 : INFO : Final loss: 0.15085175694245845 (occurred at epoch 3)
Compared to the same training (every single hyperparameter is equal and I've seeded random, numpy.random and mxnet) with hybridize=True:
100%|██████████| 4000/4000 [22:33<00:00, 2.95it/s, epoch=1/100, avg_epoch_loss=0.348]
2020-05-19 18:29:27,221 : INFO : Epoch[0] Elapsed time 1353.823 seconds
2020-05-19 18:29:27,223 : INFO : Epoch[0] Evaluation metric 'epoch_loss'=0.347671
2020-05-19 18:29:27,245 : INFO : Epoch[1] Learning rate is 0.0001
100%|██████████| 4000/4000 [34:43<00:00, 1.92it/s, epoch=2/100, avg_epoch_loss=0.289]
2020-05-19 19:04:10,699 : INFO : Epoch[1] Elapsed time 2083.454 seconds
2020-05-19 19:04:10,699 : INFO : Epoch[1] Evaluation metric 'epoch_loss'=0.289498
2020-05-19 19:04:10,706 : INFO : Epoch[2] Learning rate is 0.0001
100%|██████████| 4000/4000 [34:49<00:00, 1.91it/s, epoch=3/100, avg_epoch_loss=0.281]
2020-05-19 19:39:00,636 : INFO : Epoch[2] Elapsed time 2089.929 seconds
2020-05-19 19:39:00,636 : INFO : Epoch[2] Evaluation metric 'epoch_loss'=0.280888
2020-05-19 19:39:00,643 : INFO : Epoch[3] Learning rate is 0.0001
100%|██████████| 4000/4000 [44:04<00:00, 1.51it/s, epoch=4/100, avg_epoch_loss=0.275]
2020-05-19 20:23:05,000 : INFO : Epoch[3] Elapsed time 2644.357 seconds
2020-05-19 20:23:05,000 : INFO : Epoch[3] Evaluation metric 'epoch_loss'=0.274745
2020-05-19 20:23:05,007 : INFO : Epoch[4] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:46<00:00, 1.46it/s, epoch=5/100, avg_epoch_loss=0.273]
2020-05-19 21:08:51,684 : INFO : Epoch[4] Elapsed time 2746.677 seconds
2020-05-19 21:08:51,684 : INFO : Epoch[4] Evaluation metric 'epoch_loss'=0.273488
2020-05-19 21:08:51,690 : INFO : Epoch[5] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:50<00:00, 1.45it/s, epoch=6/100, avg_epoch_loss=0.267]
2020-05-19 21:54:41,773 : INFO : Epoch[5] Elapsed time 2750.083 seconds
2020-05-19 21:54:41,773 : INFO : Epoch[5] Evaluation metric 'epoch_loss'=0.266905
2020-05-19 21:54:41,779 : INFO : Epoch[6] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:00<00:00, 1.45it/s, epoch=7/100, avg_epoch_loss=0.269]
2020-05-19 22:40:42,483 : INFO : Epoch[6] Elapsed time 2760.703 seconds
2020-05-19 22:40:42,483 : INFO : Epoch[6] Evaluation metric 'epoch_loss'=0.268658
2020-05-19 22:40:42,483 : INFO : Epoch[7] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:41<00:00, 1.46it/s, epoch=8/100, avg_epoch_loss=0.263]
2020-05-19 23:26:24,179 : INFO : Epoch[7] Elapsed time 2741.696 seconds
2020-05-19 23:26:24,179 : INFO : Epoch[7] Evaluation metric 'epoch_loss'=0.263290
2020-05-19 23:26:24,185 : INFO : Epoch[8] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:52<00:00, 1.45it/s, epoch=9/100, avg_epoch_loss=0.266]
2020-05-20 00:12:16,630 : INFO : Epoch[8] Elapsed time 2752.445 seconds
2020-05-20 00:12:16,630 : INFO : Epoch[8] Evaluation metric 'epoch_loss'=0.265814
2020-05-20 00:12:16,630 : INFO : Epoch[9] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:12<00:00, 1.44it/s, epoch=10/100, avg_epoch_loss=0.262]
2020-05-20 00:58:29,082 : INFO : Epoch[9] Elapsed time 2772.452 seconds
2020-05-20 00:58:29,082 : INFO : Epoch[9] Evaluation metric 'epoch_loss'=0.262476
2020-05-20 00:58:29,088 : INFO : Epoch[10] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:49<00:00, 1.45it/s, epoch=11/100, avg_epoch_loss=0.263]
2020-05-20 01:44:18,254 : INFO : Epoch[10] Elapsed time 2749.166 seconds
2020-05-20 01:44:18,254 : INFO : Epoch[10] Evaluation metric 'epoch_loss'=0.262678
2020-05-20 01:44:18,254 : INFO : Epoch[11] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:42<00:00, 1.46it/s, epoch=12/100, avg_epoch_loss=0.264]
2020-05-20 02:30:00,600 : INFO : Epoch[11] Elapsed time 2742.346 seconds
2020-05-20 02:30:00,600 : INFO : Epoch[11] Evaluation metric 'epoch_loss'=0.264322
2020-05-20 02:30:00,600 : INFO : Epoch[12] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:06<00:00, 1.45it/s, epoch=13/100, avg_epoch_loss=0.261]
2020-05-20 03:16:06,984 : INFO : Epoch[12] Elapsed time 2766.384 seconds
2020-05-20 03:16:06,984 : INFO : Epoch[12] Evaluation metric 'epoch_loss'=0.261147
2020-05-20 03:16:06,990 : INFO : Epoch[13] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:48<00:00, 1.46it/s, epoch=14/100, avg_epoch_loss=0.267]
2020-05-20 04:01:55,199 : INFO : Epoch[13] Elapsed time 2748.209 seconds
2020-05-20 04:01:55,200 : INFO : Epoch[13] Evaluation metric 'epoch_loss'=0.266566
2020-05-20 04:01:55,200 : INFO : Epoch[14] Learning rate is 0.0001
100%|██████████| 4000/4000 [45:56<00:00, 1.45it/s, epoch=15/100, avg_epoch_loss=0.262]
2020-05-20 04:47:51,212 : INFO : Epoch[14] Elapsed time 2756.012 seconds
2020-05-20 04:47:51,212 : INFO : Epoch[14] Evaluation metric 'epoch_loss'=0.261856
2020-05-20 04:47:51,212 : INFO : Epoch[15] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:09<00:00, 1.44it/s, epoch=16/100, avg_epoch_loss=0.26]
2020-05-20 05:34:00,381 : INFO : Epoch[15] Elapsed time 2769.169 seconds
2020-05-20 05:34:00,381 : INFO : Epoch[15] Evaluation metric 'epoch_loss'=0.260374
2020-05-20 05:34:00,387 : INFO : Epoch[16] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:02<00:00, 1.45it/s, epoch=17/100, avg_epoch_loss=0.262]
2020-05-20 06:20:02,540 : INFO : Epoch[16] Elapsed time 2762.153 seconds
2020-05-20 06:20:02,540 : INFO : Epoch[16] Evaluation metric 'epoch_loss'=0.262312
2020-05-20 06:20:02,540 : INFO : Epoch[17] Learning rate is 0.0001
100%|██████████| 4000/4000 [46:05<00:00, 1.45it/s, epoch=18/100, avg_epoch_loss=0.26]
2020-05-20 07:06:08,361 : INFO : Epoch[17] Elapsed time 2765.821 seconds
2020-05-20 07:06:08,361 : INFO : Epoch[17] Evaluation metric 'epoch_loss'=0.260257
2020-05-20 07:06:08,369 : INFO : Epoch[18] Learning rate is 0.0001
0%| | 13/4000 [00:10<54:16, 1.22it/s, epoch=19/100, avg_epoch_loss=0.269]2020-05-20 07:06:24,829 : WARNING : Epoch[18] gave nan loss
0%| | 13/4000 [00:16<1:24:07, 1.27s/it, epoch=19/100, avg_epoch_loss=0.271]
2020-05-20 07:06:24,829 : INFO : Loading parameters from best epoch (17)
2020-05-20 07:06:24,841 : INFO : Epoch[19] Learning rate is 5e-05
0%| | 0/4000 [00:00<?, ?it/s]2020-05-20 07:06:26,563 : WARNING : Epoch[19] gave nan loss
0%| | 0/4000 [00:01<?, ?it/s, epoch=20/100, avg_epoch_loss=0.232]
2020-05-20 07:06:26,564 : INFO : Stopping training
2020-05-20 07:06:26,564 : INFO : Loading parameters from best epoch (17)
2020-05-20 07:06:26,570 : INFO : Final loss: 0.2602571310577914 (occurred at epoch 17)
2020-05-20 07:06:26,576 : INFO : End model training
Hello, @lostella. Sorry for taking this long.
I have 132131585 zeros out of 143037878 values (92.37%).
To give you more information, this dataset is from a retailer with many branches and many products. I have all the sales registered for each item in each branch. Some of these items might be discontinued (like smartphones) after some time. So, each time-series may start on a different date, but I've normalized all of them to finish on the same date. So, despite the fact that it's really normal for these time-series to have most of their values equal to zero, there are also those that will never sell anything again, which contribute to the zeros.
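For reference, a fraction like that can be computed along these lines (sketch; dataset is assumed to be an iterable of GluonTS-style entries with a "target" array):
import numpy as np

zeros, total = 0, 0
for entry in dataset:
    target = np.asarray(entry["target"])
    zeros += int((target == 0).sum())
    total += target.size
print(zeros, total, zeros / total)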
Did you try switching back to mxnet-cu101mkl==1.4.1? I think with that mxnet version you can leave out the hybridize=False option.
I just stumbled on this on my quest to get DeepAR converging somewhere in the M5 Kaggle challenge... maybe it helps you, @fernandocamargoti?
In case this works for you too, something between mxnet 1.4.1 and 1.6 broke...
So, each time-series may start on a different date, but I've normalized all of them to finish on the same date.
@fernandocamargoti as a side note: that's not strictly needed, and it may bias your predictions towards zero. Are you somehow indicating which values you have manually padded by adding some binary feature that marks them?
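Something along these lines, just to illustrate the idea (hypothetical helper, not a GluonTS API):
import numpy as np

def pad_with_indicator(target: np.ndarray, pad_len: int) -> dict:
    # Pad the target with zeros up to the common end date and add a binary
    # dynamic feature marking which values are real (1) vs. padded (0).
    padded = np.concatenate([target, np.zeros(pad_len)])
    is_real = np.concatenate([np.ones(len(target)), np.zeros(pad_len)])
    return {"target": padded, "feat_dynamic_real": [is_real]}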
@fernandocamargoti also a minor note: the logs you pasted for hybridized=True suggest that you're using num_batches_per_epoch=4000, but you mentioned 10_000 (still, it seems to converge too slowly even assuming that you used 10_000 for hybridized=False).
@matthiasanderer, I'm running on IBM Power 9. If I'm not wrong, MKL is for Intel Processors.
@lostella, thanks for your side note. But I also want my predictions to indicate whether a time series will "die" (the product will lose all its demand). I haven't set up any binary feature to indicate it. If I had, how would I fill this feature in production? I mean, a product's time series starts dying, and at some point no more sales are made. It would be hard to decide when to set this flag to True in production.
@fernandocamargoti also a minor note: the logs you pasted for hybridized=True suggest that you're using num_batches_per_epoch=4000, but you mentioned 10_000 (still, it seems to converge too slowly even assuming that you used 10_000 for hybridized=False).
Yeah, it's another version where I filter time series by a minimum number of days with sales. Previously, I filtered with 10 and I increased it to 30 to train faster. Then, I updated the num_batches_per_epoch accordingly. Notice, though, that both logs have 4000.
Oh I see, I hadn’t scrolled all the way to the right in the first log, sorry. I also agree on the zero padding you’re doing, since that’s a behavior you want to model 👍🏻
@matthiasanderer, I'm running on IBM Power 9. If I'm not wrong, MKL is for Intel Processors.
Sorry, that was my copy-paste; of course only version 1.4.1 is the relevant part... I just used that cuda10.1/mkl version.
@lostella, I think I've got a way for you guys to reproduce the problem. The same thing happens on m5 dataset (https://www.kaggle.com/allunia/m5-uncertainty).
To make it easier, I've already preprocessed the dataset, putting in the GluonTS format. It's available here: https://github.com/fernandocamargoti/m5_competition_gluonts_dataset
The cardinality is [5, 12, 5, 9, 3051]. It's always the number of categories +2 because I leave the first index for unknown and the second for None (there are some aggregated time series where some categorical variables aren't defined).
Also, I've been using my own implementation of Dataset instead of FileDataset from GluonTS. The reason is that FileDataset was reading all the files to calculate the len() (which is called at the start of the training), and it was taking a very long time. So, I wanted to start training and let the files be loaded only when necessary. To calculate the len(), I simply count the number of files, without reading them. Here is my implementation: https://gist.github.com/fernandocamargoti/0f9c0b390ec44e0239a835cff91ae85c
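The idea boils down to something like this (a sketch; the actual implementation is in the gist above):
import json
from pathlib import Path

class LazyFileDataset:
    def __init__(self, path: str):
        self.files = sorted(Path(path).glob("*.json"))

    def __len__(self):
        # Only counts files; nothing is read here.
        return len(self.files)

    def __iter__(self):
        # Files are read lazily, one at a time, during training.
        for f in self.files:
            with open(f) as fp:
                yield json.load(fp)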
To separate the train and test set, I use:
self._train_dataset = SameSizeTransformedDataset(
    self.dataset, transformations=[FilterTimeSeriesTransformation(start=0, end=split)])
self._test_dataset = SameSizeTransformedDataset(
    self.dataset, transformations=[FilterTimeSeriesTransformation(start=0, end=split + self.test_steps)])
The SameSizeTransformedDataset is also provided in the gist above. It's only used to avoid reading the whole dataset to calculate the len(), allowing me to lazily load it during the training.
As suggested by @matthiasanderer, I've tried MXNet 1.4.1 and this weird bug didn't happen. I'll also try 1.5.1 and report back.
The bug doesn't happen with MXNet 1.5.1. So, it's something that happened when 1.6.0 was introduced.
@fernandocamargoti thanks for looking into it, this is really cool (but also not cool that the bug is there).
Did you observe that with both 1.4 and 1.5 the NaN didn't show up and convergence speed was unaffected by hybridize=True/False?
@lostella, both converge much faster and the NaN doesn't happen at all. But I've noticed that a weird bug happens with 1.5:
Traceback (most recent call last):
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 199, in run
new_deps = self._run_get_new_deps()
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/luigi/worker.py", line 141, in _run_get_new_deps
task_gen = self.task.run()
File "/home/fernandocamargo/datascience_workspace/log_inteligente_ai/gazin/task/model/gluonts.py", line 685, in run
test_forecast = cast(SampleForecast, next(iter(forecast_it)))
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/predictor.py", line 330, in predict
num_samples=num_samples,
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/forecast_generator.py", line 197, in __call__
outputs = prediction_net(*inputs).asnumpy()
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/gluon/block.py", line 548, in __call__
out = self.forward(*args)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/gluon/block.py", line 925, in forward
return self.hybrid_forward(ndarray, x, *args, **params)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/deepar/_network.py", line 608, in hybrid_forward
begin_states=state,
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/model/deepar/_network.py", line 541, in sampling_decoder
new_samples = distr.sample(dtype=self.dtype)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/neg_binomial.py", line 98, in sample
s, mu=self.mu, alpha=self.alpha, num_samples=num_samples
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/distribution.py", line 339, in _sample_multiple
samples = sample_func(*args_expanded, **kwargs_expanded)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/gluonts/distribution/neg_binomial.py", line 94, in s
x = F.minimum(F.random.gamma(r, theta), 1e6)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/ndarray/random.py", line 383, in gamma
[alpha, beta], shape, dtype, ctx, out, kwargs)
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/ndarray/random.py", line 38, in _random_helper
return sampler(*params, shape=shape, dtype=dtype, out=out, **kwargs)
File "<string>", line 68, in _sample_gamma
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/_ctypes/ndarray.py", line 92, in _imperative_invoke
ctypes.byref(out_stypes)))
File "/home/fernandocamargo/anaconda3/envs/gazin/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: vector::_M_range_insert
It happens both when I try to run the evaluation and when I load a model and try to use it. So, I'm sticking with 1.4.1 for now.
@fernandocamargoti the issue you’re facing with mxnet 1.5 is a known one on Linux: https://github.com/apache/incubator-mxnet/issues/16135
This was fixed in 1.6 but the fix was not backported to 1.5, I believe. This is also why we skipped support for 1.5 in GluonTS and jumped directly to 1.6.
@fernandocamargoti did you mention you were running on GPU? In that case this may be related: https://github.com/awslabs/gluon-ts/pull/728
@lostella, it could definitely be related. But https://github.com/awslabs/gluon-ts/pull/728 only fixes it for the binned distribution, right? I guess the same problem happens with other distributions, like the Negative Binomial that I'm using.
I'm wondering whether this still shows up with MXNet 1.7 (since that fixed the bug concerning #728)? @fernandocamargoti not sure whether you're able to verify this; otherwise we'll try to reproduce with your instructions above.
It still happens, @lostella. I've just updated MXNet to 1.7 and GluonTS to 0.6.4 and I'm getting NaN loss quite early in the training (around 6 epochs in):
100%|########################################| 18030/18033 [30:26<00:00, 9.98it/s, epoch=12/30, avg_epoch_loss=nan]2021-01-12 18:47:44,035 : WARNING : Batch [18031] of Epoch[11] gave NaN loss and it will be ignored
100%|########################################| 18031/18033 [30:26<00:00, 9.89it/s, epoch=12/30, avg_epoch_loss=nan]2021-01-12 18:47:44,134 : WARNING : Batch [18032] of Epoch[11] gave NaN loss and it will be ignored
2021-01-12 18:47:44,231 : WARNING : Batch [18033] of Epoch[11] gave NaN loss and it will be ignored
100%|########################################| 18033/18033 [30:26<00:00, 9.87it/s, epoch=12/30, avg_epoch_loss=nan]
2021-01-12 18:47:44,232 : INFO : Epoch[11] Elapsed time 1826.756 seconds
2021-01-12 18:47:44,232 : INFO : Epoch[11] Evaluation metric 'epoch_loss'=nan
2021-01-12 18:47:44,233 : INFO : Stopping training
2021-01-12 18:47:44,233 : INFO : Computing averaged parameters.
2021-01-12 18:47:44,237 : INFO : Loading averaged parameters.
2021-01-12 18:47:44,249 : INFO : End model training
I'm using the DeepAR as follows:
Trainer(
    ctx=self.ctx,
    epochs=30,
    batch_size=32,
    num_batches_per_epoch=len(self.train_dataset) // self.batch_size,
    learning_rate=0.001,
    learning_rate_decay_factor=0.5,
    patience=3,
    minimum_learning_rate=5e-05,
    clip_gradient=1.0,
    weight_decay=1e-08,
    hybridize=True,
)
DeepAREstimator(
    freq="D",
    prediction_length=90,
    trainer=self.get_trainer(),
    context_length=90,
    num_layers=2,
    num_cells=200,
    cell_type="lstm",
    dropout_rate=0.1,
    use_feat_dynamic_real=True,
    use_feat_static_cat=True,
    cardinality=self.get_cardinality(),
    embedding_dimension=[2, 7, 10, 20, 5, 20, 30, 30, 30, 50, 3, 30],
    distr_output=NegativeBinomialOutput(),
    scaling=True,
    lags_seq=None,
)
It's trained for demand forecasting. I also have a variant that is trained to forecast weekly instead of daily. Both of them have the same error.
Hi,
I am having a similar problem. Each row is a store and each column is a time index; each row records the sales data of that store ID. Inside the dataset we have NaNs and 0s (as shown in the first picture), but we are not sure whether that is what caused the problem highlighted in the second picture. Or is it because we are using the negative binomial and it is not able to handle the NaNs?
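For what it's worth, the counts can be checked quickly like this (sketch, assuming the wide store-by-date frame is called df):
n_nan = int(df.isna().sum().sum())
n_zero = int((df == 0).sum().sum())
print(n_nan, n_zero, df.size)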
Thank you in advance for any help!!!
Any news about this, @lostella? I've been affected by this bug for a long time.
@fernandocamargoti we still have to reproduce the issue on master. I'm trying with the m5 data that you provided, but for debugging purposes it would be nice to have a smaller (faster) way to do that. Will update in case I can reproduce it.
Unsure if this is meaningful, but I got the exact same error when training DeepAR. Once I removed feat_dynamic_real when initializing the DeepAREstimator, it works just fine. I tried normalizing my dynamic features to a 0-1 scale but it did not help.
Interesting, @GabrielDeza. My dynamic features are normalized as well. They're too important for my prediction for me to simply remove them, though. I also noticed that the resulting model that was trained after the NaNs, even though the logs say the batches will be ignored, also produces NaNs in production. I didn't want to downgrade everything again after going through the hassle of upgrading it, especially because MXNet 1.4.1 is very old now and requires an old version of numpy. But it seems that I, unfortunately, have no choice. It's very frustrating.
Hello, @lostella. I have an update about this issue. I've used pytorch-ts, which is a library on top of GluonTS that reimplements DeepAR and other models in PyTorch, and it works nicely.
It might be good to give it a try as well, @GabrielDeza.
@fernandocamargoti thanks for the update!
The first thing that comes to mind is that pytorch-ts directly uses the distribution classes provided by PyTorch: the negative binomial there uses a slightly different parametrization, which may be responsible for better numerical stability -- we should look into this.
It also looks like MXNet's negative binomial uses the same parametrization. The best thing would of course be to rely directly on MXNet's distributions, as soon as they're out (v2.0 I believe).
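For a concrete comparison, the mapping between the two parametrizations should be roughly the following (my own derivation, worth double-checking):
import torch
from torch.distributions import NegativeBinomial

# GluonTS parametrizes by (mu, alpha) with variance = mu + alpha * mu^2,
# i.e. a gamma-Poisson mixture with shape r = 1/alpha and scale theta = mu * alpha.
# torch.distributions.NegativeBinomial uses (total_count, probs) with
# mean = total_count * probs / (1 - probs).
mu, alpha = torch.tensor([2.5]), torch.tensor([0.3])
distr = NegativeBinomial(total_count=1.0 / alpha, probs=mu * alpha / (1.0 + mu * alpha))
print(distr.mean)  # ~= mu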
For reference:
Well, I've just got a NaN using the PyTorch implementation. When trying to use the model, I got the following error:
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/gluonts/model/forecast_generator.py", line 174, in __call__
outputs = predict_to_numpy(prediction_net, inputs)
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/functools.py", line 840, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/gluonts/torch/model/predictor.py", line 38, in _
return prediction_net(*inputs).cpu().numpy()
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/pts/model/deepar/deepar_network.py", line 456, in forward
begin_states=state,
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/pts/model/deepar/deepar_network.py", line 395, in sampling_decoder
new_samples = distr.sample()
File "/home/fernandocamargo/anaconda3/envs/logit/lib/python3.7/site-packages/torch/distributions/negative_binomial.py", line 86, in sample
return torch.poisson(rate)
RuntimeError: invalid Poisson rate, expected rate to be non-negative
I know that it's from a different implementation, but it might be useful somehow. I've debugged the trained model and the decoder_input contains NaN values. After looking further, I've seen that static_feat contains mostly NaN values. So, I debugged self.unroll_encoder() and, surprisingly, it was the embedder that was returning NaNs. Looking into the weights of the nn.Embeddings, there were indeed some NaN values there.
I'll try to train again with torch.autograd.detect_anomaly() to try to discover what causes it.
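Something like this (sketch; net and inputs are placeholders for the actual model and batch):
import torch

# Running the training step under anomaly detection points to the op whose
# backward pass first produces NaNs (note: this slows training down a lot).
with torch.autograd.detect_anomaly():
    loss = net(*inputs)  # hypothetical forward pass returning the loss
    loss.backward()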
Well, all I got was:
Traceback (most recent call last):
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/luigi/worker.py", line 191, in run
new_deps = self._run_get_new_deps()
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/luigi/worker.py", line 133, in _run_get_new_deps
task_gen = self.task.run()
File "/root/work/log_inteligente_ai/logit/task/model/base.py", line 167, in run
dict(fit=self.fit, cross_validation=self.cross_validate)[self.training_type]()
File "/root/work/log_inteligente_ai/logit/task/model/gluonts.py", line 428, in fit
prefetch_factor=self.num_prefetch,
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/pts/model/estimator.py", line 145, in train_model
validation_iter=validation_data_loader,
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/pts/trainer.py", line 86, in __call__
loss.backward()
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/opt/anaconda3/envs/logit/lib/python3.7/site-packages/torch/autograd/__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Function 'CudnnRnnBackward' returned nan values in its 0th output.
I don't know if it might be useful or not, but my loss starts increasing before the error:
18095it [31:13, 9.66it/s, avg_epoch_loss=0.0714, epoch=1]
18095it [30:52, 9.77it/s, avg_epoch_loss=0.0712, epoch=2]
18095it [30:51, 9.77it/s, avg_epoch_loss=0.0717, epoch=3]
18095it [30:43, 9.81it/s, avg_epoch_loss=0.0713, epoch=4]
18095it [31:06, 9.70it/s, avg_epoch_loss=0.075, epoch=5]
18095it [30:47, 9.79it/s, avg_epoch_loss=0.0762, epoch=6]
18095it [30:48, 9.79it/s, avg_epoch_loss=0.0848, epoch=7]
18095it [31:16, 9.64it/s, avg_epoch_loss=0.115, epoch=8]
18095it [30:58, 9.74it/s, avg_epoch_loss=0.262, epoch=9]
18095it [31:07, 9.69it/s, avg_epoch_loss=0.183, epoch=10]
Actually, the loss might be rising because pytorch-ts uses OneCycleLR. But it seems that the problem is simply exploding gradients. I'm trying to reduce the gradient clipping value and I'll also try to reduce the LR.
Reducing the clip_gradient value solved the issue for me with pytorch-ts. It was probably happening because of the increase of the LR with OneCycleLR and how fragile an LSTM is to exploding gradients. I don't know if it would be the same problem for the MXNet implementation of GluonTS, though, since it doesn't use OneCycleLR and it didn't happen with a previous MXNet version.
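Under the hood clip_gradient is just gradient-norm clipping, so one way to pick a value empirically is to log the pre-clip norms during stable training and clip somewhat below them (sketch; net is a placeholder for the model):
import torch

# clip_grad_norm_ returns the total norm computed before clipping,
# so it can double as a monitoring tool while experimenting with max_norm.
total_norm = torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
print(float(total_norm))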
Same problem with the MXNet implementation. I've reduced the learning rate and sometimes it helps, sometimes not. Maybe I'm not reducing it enough? Is there a way for me to know what clip_gradient to choose, or is this a trial-and-error process?
For reference, this issue also occurs on master with the Car Parts, COVID Deaths and Hospital datasets (the latter two are rather small) from the Monash Forecasting Repository.
It might be quicker to investigate this issue with the SimpleFeedforwardEstimator though -- this seems to be the same issue and also appears with the taxi_30min and wiki-rolling_nips datasets.
I also encountered NaN loss when using DeepVAREstimator (MXNet version) on this toy dataset. It only happens when setting both hybridize=False and cell_type="lstm". When I try hybridize=True or change cell_type, the issue does not happen:
import numpy as np
import pandas as pd

rng = pd.date_range('2015-02-24', periods=30, freq='T')
df = pd.DataFrame({
    'Date': rng,
    'Val': np.random.randn(len(rng)),
    'Val2': np.random.randn(len(rng)),
})
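For reference, a multivariate frame like this would typically be wrapped as a single 2-D target (sketch; not necessarily the exact setup used here):
from gluonts.dataset.common import ListDataset

train_ds = ListDataset(
    [{"start": df["Date"].iloc[0], "target": df[["Val", "Val2"]].values.T}],
    freq="T",
    one_dim_target=False,
)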
Hope this helps!
@fernandocamargoai Did you manage to find a fix for this issue? I have the same problem when I try to use the NegativeBinomialOutput, even with strong gradient clipping.
Thanks,
@fernandocamargoai Did you manage to find a fix for this issue? I have the same problem when I try to use the NegativeBinomialOutput, even with strong gradient clipping.
Thanks,
Well, I've been using the PyTorch implementation. It still happens from time to time, depending on the hyperparameters. What seems to happen is that the parameters of the distribution might fall outside the valid range, depending on the hyperparameters. I noticed that some distributions are even more sensitive. But at least the PyTorch implementation seems better overall.
Closed by mistake
@fernandocamargoai I suspect that #1796 and this issue have a common root cause, which may have been fixed in #1893. If either of the issues gets closed because of the fix, maybe the other one can get closed too
I am getting the same exception. What is the reason here? What is the meaning of this exception?
Description
I have a dataset with 129k time series of demand for products within some branches of a retailer. I'm having trouble with NaN loss ever since I started using GluonTS. At first, I discovered some NaN and Infinity values within my FEAT_DYNAMIC_REAL. But even after solving those issues, I'm still getting NaN loss after some time. With the new release (which included some improvements to the NegativeBinomial), I'm getting a much lower loss value and it's taking longer for the problem to occur, but it still occurs.
I decided to put a conditional breakpoint in log_prob() to stop whenever a NaN value is generated and used hybridize=False. It's super strange, but the error hasn't happened yet (it's far past the point where it usually happens). I'll wait some more time, but it might be the case that it only happens with hybridize=True? If that's the case, it'd be very odd and problematic, since training runs much slower with hybridize=False.
To Reproduce
Unfortunately, I can't provide the dataset to reproduce the problem. But here is some information:
Error message or code output
Here is one example where it happened in the first epoch, but sometimes it happens later.
Environment