sktime / pytorch-forecasting

Time series forecasting with PyTorch
https://pytorch-forecasting.readthedocs.io/
MIT License
3.98k stars 631 forks

Receiving Error When not feeding the model with time_idx as input for time_varying_known_reals #290

Closed — OS199301 closed this issue 3 years ago

OS199301 commented 3 years ago

Hello,

Must the model have time_idx as an input in the time_varying_known_reals argument?

When I create the TimeSeriesDataSet object with time_varying_known_reals=["time_idx"], I can train my model.

For my model, I do not want time_idx to be an input. But when I pass time_varying_known_reals=[] (I still set the time_idx argument, I just do not include it in time_varying_known_reals), I get an error when training the model (the error trace is at the end of this issue). Is there a reason for this behaviour?

Exception has occurred: StopIteration (note: full exception trace is shown but execution is paused at: _run_module_as_main)

  File "D:\Projects\PythonProject\pytorch_forecasting\models\temporal_fusion_transformer\sub_modules.py", line 341, in forward
    name = next(iter(self.single_variable_grns.keys()))
  File "D:\Projects\PythonProject\dev\Lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Projects\PythonProject\pytorch_forecasting\models\temporal_fusion_transformer\__init__.py", line 407, in forward
    static_context_variable_selection[:, max_encoder_length:],
  File "D:\Projects\PythonProject\dev\Lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Projects\PythonProject\pytorch_forecasting\models\base_model.py", line 269, in step
    out = self(x, **kwargs)
  File "D:\Projects\PythonProject\pytorch_forecasting\models\temporal_fusion_transformer\__init__.py", line 482, in step
    log, out = super().step(x, y, batch_idx)
  File "D:\Projects\PythonProject\pytorch_forecasting\models\base_model.py", line 204, in validation_step
    log, _ = self.step(x, y, batch_idx)  # log loss
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\accelerators\cpu_accelerator.py", line 70, in _step
    output = model_step(*args)
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\accelerators\cpu_accelerator.py", line 77, in validation_step
    return self._step(self.trainer.model.validation_step, args)
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\trainer\evaluation_loop.py", line 178, in evaluation_step
    output = self.trainer.accelerator_backend.validation_step(args)
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 608, in run_evaluation
    output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 692, in run_sanity_check
    _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 494, in train
    self.run_sanity_check(self.get_model())
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 69, in train_or_test
    results = self.trainer.train()
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\accelerators\cpu_accelerator.py", line 62, in train
    results = self.train_or_test()
  File "D:\Projects\PythonProject\dev\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 472, in fit
    results = self.accelerator_backend.train()
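For context: the innermost frame (sub_modules.py, line 341) calls next() on an iterator over the variable-selection network's dict of per-variable GRNs. With time_varying_known_reals=[], that dict ends up empty, and next() on an exhausted iterator raises StopIteration. A minimal standalone reproduction of that failure mode (not pytorch-forecasting code, just the same Python behaviour):

```python
# With no variables registered, the dict the variable selection
# network iterates over is empty, so next() raises StopIteration.
single_variable_grns = {}  # empty because time_varying_known_reals=[]

try:
    name = next(iter(single_variable_grns.keys()))
except StopIteration:
    # pytorch-forecasting does not guard this case, so the exception
    # propagates up through the training loop as seen in the trace.
    name = None

print(name)  # → None
```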

geriskenderi commented 3 years ago

I can also confirm that this issue exists. I assume time_idx might be needed as a known real value since this is a Transformer-based architecture (for positional encoding and masking purposes), but I am not sure that is the case.

jdb78 commented 3 years ago

Yes, that is right. You need it for the Temporal Fusion Transformer.
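Putting the answer into practice, a minimal sketch of a dataset definition that avoids the error, assuming pytorch-forecasting's TimeSeriesDataSet API (the dataframe df and the series/value column names are hypothetical placeholders, as are the encoder/prediction lengths):

```python
from pytorch_forecasting import TimeSeriesDataSet

# Hypothetical dataframe `df` with columns: time_idx, series, value.
# For the Temporal Fusion Transformer, keep "time_idx" listed in
# time_varying_known_reals in addition to passing it as time_idx.
training = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_known_reals=["time_idx"],  # required, per the answer above
    time_varying_unknown_reals=["value"],
)
```

Leaving time_varying_known_reals empty is what triggers the StopIteration traced in the original report.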