jdb78 / pytorch-forecasting

Time series forecasting with PyTorch
https://pytorch-forecasting.readthedocs.io/
MIT License

N-beats error #110

Closed · andrewcztrack closed 3 years ago

andrewcztrack commented 3 years ago

Hi! I love the look of the package, it looks amazing!

I am getting an error for the N-BEATS example.

```
load data
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
Number of parameters in network: 1859.9k

  | Name            | Type       | Params
-----------------------------------------
0 | loss            | SMAPE      | 0
1 | logging_metrics | ModuleList | 0
2 | net_blocks      | ModuleList | 1 M

Validation sanity check: 0it [00:00, ?it/s]
/home/andrewcz/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 4 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
     84 # net.hparams.learning_rate = res.suggestion()
     85
---> 86 trainer.fit(
     87     net,
     88     train_dataloader=train_dataloader,

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in fit(self, model, train_dataloader, val_dataloaders, datamodule)
    438         self.call_hook('on_fit_start')
    439
--> 440         results = self.accelerator_backend.train()
    441         self.accelerator_backend.teardown()
    442

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/accelerators/cpu_accelerator.py in train(self)
     46
     47         # train or test
---> 48         results = self.train_or_test()
     49         return results
     50

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py in train_or_test(self)
     64             results = self.trainer.run_test()
     65         else:
---> 66             results = self.trainer.train()
     67         return results
     68

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in train(self)
    460
    461     def train(self):
--> 462         self.run_sanity_check(self.get_model())
    463
    464         # enable train mode

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_sanity_check(self, ref_model)
    646
    647         # run eval step
--> 648         _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
    649
    650         # allow no returns from eval

~/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_evaluation(self, test_mode, max_batches)
    554             dl_max_batches = self.evaluation_loop.max_batches[dataloader_idx]
    555
--> 556             for batch_idx, batch in enumerate(dataloader):
    557                 if batch is None:
    558                     continue

~/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self)
    343
    344     def __next__(self):
--> 345         data = self._next_data()
    346         self._num_yielded += 1
    347         if self._dataset_kind == _DatasetKind.Iterable and \

~/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    854             else:
    855                 del self._task_info[idx]
--> 856                 return self._process_data(data)
    857
    858     def _try_put_index(self):

~/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _process_data(self, data)
    879         self._try_put_index()
    880         if isinstance(data, ExceptionWrapper):
--> 881             data.reraise()
    882         return data
    883

~/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/_utils.py in reraise(self)
    392             # (https://bugs.python.org/issue2651), so we work around it.
    393             msg = KeyErrorMessage(msg)
--> 394             raise self.exc_type(msg)

TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/andrewcz/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/andrewcz/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/andrewcz/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/andrewcz/miniconda3/envs/myenv/lib/python3.8/site-packages/pytorch_forecasting/data/timeseries.py", line 968, in __getitem__
    self.target_normalizer.fit(target[:encoder_length])
TypeError: only integer tensors of a single element can be converted to an index
```
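For context, the final `TypeError` is Python refusing to use a multi-element value as a slice bound: slice indices are converted via `__index__`, which only works for single integer scalars. A minimal plain-Python sketch of the same failure mode (the `MultiElementTensor` class is a hypothetical stand-in for a multi-element torch tensor, not part of either library):

```python
class MultiElementTensor:
    """Hypothetical stand-in for a torch tensor with several elements."""

    def __index__(self):
        # Mimics what torch raises when a multi-element tensor is used
        # where a single integer index is required.
        raise TypeError(
            "only integer tensors of a single element can be converted to an index"
        )


data = list(range(10))

# A plain int (like a scalar encoder_length) works as a slice bound.
assert data[:3] == [0, 1, 2]

# A multi-element value does not: slicing calls __index__, which raises.
try:
    data[:MultiElementTensor()]
except TypeError as exc:
    print(exc)
```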
jdb78 commented 3 years ago

Are you running with the latest version of pytorch and pytorch-forecasting?

BTW: as a consequence of the new metric system there is a bug (being fixed now in #112) that forces you to specify the loss you want to optimize, i.e. you have to use `.from_dataset(loss=MASE())` when you create your network.
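To make clear what the suggested `MASE()` loss measures, here is a hedged plain-Python sketch of the mean absolute scaled error; the function name and signature are illustrative, not pytorch-forecasting's implementation:

```python
def mase(actual, forecast, insample):
    """Mean absolute scaled error: forecast MAE divided by the MAE of a
    one-step naive forecast on the in-sample (training) series."""
    # Absolute error of the forecast against the actuals.
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    # Scale: how well the naive "repeat last value" forecast did in-sample.
    scale = sum(
        abs(insample[i] - insample[i - 1]) for i in range(1, len(insample))
    ) / (len(insample) - 1)
    return mae / scale


# Naive in-sample error is 1.0, forecast MAE is 0.5, so MASE is 0.5.
print(mase([5, 6], [4, 6], [1, 2, 3, 4]))  # -> 0.5
```

Values below 1.0 mean the model beats the naive forecast, which is why MASE is a convenient default to optimize.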

andrewcztrack commented 3 years ago

Hi @jdb78!! I hope you're well! Thank you for your reply. I was just running your tutorials from the notebook. I tried to reinstall the library but the error still comes up. Best, Andrew

jdb78 commented 3 years ago

Have you tried with pytorch 1.6? Conda forge does not allow specifying versions higher than 1.4.
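One hedged way around the conda-forge pin is to upgrade PyTorch with pip inside the activated conda environment; the version spec below is illustrative:

```shell
# Inside the activated conda env: upgrade PyTorch past 1.4 via pip,
# then reinstall pytorch-forecasting against the newer torch.
python -m pip install --upgrade "torch>=1.6"
python -m pip install --upgrade pytorch-forecasting
```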

andrewcztrack commented 3 years ago

Done! and it is working! Thank you :).