Description

I'm training a GluonTS model in AWS SageMaker. The model works fine for datasets with fewer than ~140k item_ids (i.e., fewer than 140k distinct time series), but it fails during the validation step for datasets with more than 140k item_ids. Why could that be? The error is "index out of range in self"; searching for it suggests the message comes from TensorFlow or PyTorch.
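For reference, this exact message is what PyTorch raises when an embedding lookup receives an index outside the embedding table; a minimal standalone sketch (not taken from the failing job) that reproduces it:

import torch
import torch.nn as nn

# Embedding table sized for 140_000 categories (valid indices 0 .. 139_999).
embedding = nn.Embedding(num_embeddings=140_000, embedding_dim=8)

# Looking up an index beyond the table raises:
#   IndexError: index out of range in self
embedding(torch.tensor([140_000]))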
To Reproduce
agg_metrics, item_metrics = evaluator(actual_it, forecast_it)
  File "/opt/conda/lib/python3.10/site-packages/gluonts/evaluation/_base.py", line 264, in __call__
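For context, a sketch of how the failing call is typically set up; `dataset` and `predictor` below are placeholders for the real validation dataset and trained predictor, not the exact code from this report:

from gluonts.evaluation import Evaluator, make_evaluation_predictions

# `dataset` and `predictor` are placeholders: the validation dataset and the
# predictor produced by training, respectively.
forecast_it, actual_it = make_evaluation_predictions(
    dataset=dataset,
    predictor=predictor,
    num_samples=100,
)

evaluator = Evaluator()
# This is the call that fails once the dataset has more than ~140k item_ids.
agg_metrics, item_metrics = evaluator(actual_it, forecast_it)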
Error message or code output
index out of range in self
Environment
Operating system: AWS SageMaker (managed environment)
Python version: 3.10
GluonTS version: latest release
MXNet version: not used (PyTorch backend)