zalandoresearch / pytorch-ts

A PyTorch-based probabilistic time series forecasting framework built on the GluonTS backend
MIT License

Issue running example on Windows PC #44

Open pieterseyns opened 3 years ago

pieterseyns commented 3 years ago

Hey,

I ran into an issue while testing your example code. I'm using a Windows PC with CPU only, and the latest version of torch, 1.7.1, is installed.

Any idea what could resolve the issue?

Thanks, Pieter


```
RuntimeError                              Traceback (most recent call last)
in 
      6                     trainer=Trainer(epochs=10,
      7                                     device=device))
----> 8 predictor = estimator.train(training_data=training_data, num_workers=2)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\model\estimator.py in train(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
    171             shuffle_buffer_length=shuffle_buffer_length,
    172             cache_data=cache_data,
--> 173             **kwargs,
    174         ).predictor

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\model\estimator.py in train_model(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
    143             net=trained_net,
    144             train_iter=training_data_loader,
--> 145             validation_iter=validation_data_loader,
    146         )
    147 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\trainer.py in __call__(self, net, train_iter, validation_iter)
     68                 inputs = [v.to(self.device) for v in data_entry.values()]
     69 
---> 70                 output = net(*inputs)
     71                 if isinstance(output, (list, tuple)):
     72                     loss = output[0]

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729             _global_forward_hooks.values(),

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\model\deepar\deepar_network.py in forward(self, feat_static_cat, feat_static_real, past_time_feat, past_target, past_observed_values, future_time_feat, future_target, future_observed_values)
    252             future_time_feat=future_time_feat,
    253             future_target=future_target,
--> 254             future_observed_values=future_observed_values,
    255         )
    256 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\model\deepar\deepar_network.py in distribution(self, feat_static_cat, feat_static_real, past_time_feat, past_target, past_observed_values, future_time_feat, future_target, future_observed_values)
    226             past_observed_values=past_observed_values,
    227             future_time_feat=future_time_feat,
--> 228             future_target=future_target,
    229         )
    230 

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\model\deepar\deepar_network.py in unroll_encoder(self, feat_static_cat, feat_static_real, past_time_feat, past_target, past_observed_values, future_time_feat, future_target)
    166 
    167         # (batch_size, num_features)
--> 168         embedded_cat = self.embedder(feat_static_cat)
    169 
    170         # in addition to embedding features, use the log scale as it can help

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729             _global_forward_hooks.values(),

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\modules\feature.py in forward(self, features)
     35                 embed(cat_feature_slice.squeeze(-1))
     36                 for embed, cat_feature_slice in zip(
---> 37                     self.__embedders, cat_feature_slices
     38                 )
     39             ],

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pts\modules\feature.py in <listcomp>(.0)
     34             [
     35                 embed(cat_feature_slice.squeeze(-1))
---> 36                 for embed, cat_feature_slice in zip(
     37                     self.__embedders, cat_feature_slices
     38                 )

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729             _global_forward_hooks.values(),

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
    124         return F.embedding(
    125             input, self.weight, self.padding_idx, self.max_norm,
--> 126             self.norm_type, self.scale_grad_by_freq, self.sparse)
    127 
    128     def extra_repr(self) -> str:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1850         # remove once script supports set_grad_enabled
   1851         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1853 
   1854 

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)
```
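The final frames point at `torch.nn.Embedding`, which in torch 1.7 only accepts int64 (Long) indices. A minimal sketch of the likely cause — my assumption based on the traceback, not confirmed pytorch-ts behavior: on Windows, NumPy's default integer type is int32, so a tensor built from a NumPy array arrives as a `torch.IntTensor`, and casting to Long before the lookup is the usual fix.

```python
import numpy as np
import torch
import torch.nn as nn

# nn.Embedding lookups expect int64 (Long) indices; on Windows, NumPy's
# default integer type is int32, so a tensor built from a NumPy array
# becomes a torch.IntTensor and (on torch 1.7) raises the RuntimeError above.
embed = nn.Embedding(num_embeddings=10, embedding_dim=4)

# int32 indices, as produced from a NumPy array on Windows
idx = torch.from_numpy(np.array([1, 2, 3], dtype=np.int32))

# Casting to Long before the lookup avoids the error
out = embed(idx.long())
print(out.shape)  # torch.Size([3, 4])
```

If the dataset is built by hand, the same cast can be applied at the source, e.g. `np.array(..., dtype=np.int64)` for the static categorical features.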
kashif commented 3 years ago

Thanks for the report! Let me see if I can replicate it... I don't have Windows, but I can perhaps figure out the issue from your report.

cmapz2 commented 2 years ago

See the other issues for Windows PCs:

1) `num_workers = None`
2) Adjust the input size
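The first workaround maps onto plain PyTorch data loading like this — an illustration under my own assumptions, not pytorch-ts code: Windows starts `DataLoader` worker processes with "spawn", which can fail on objects that don't pickle, so keeping loading in the main process sidesteps the problem.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Illustration (not pytorch-ts code): num_workers=0 keeps data loading in
# the main process, analogous to passing num_workers=None (the default)
# to estimator.train() instead of num_workers=2 as in the traceback above.
ds = TensorDataset(torch.arange(10, dtype=torch.float32).unsqueeze(-1))
loader = DataLoader(ds, batch_size=4, num_workers=0)  # single-process loading

total = sum(batch.shape[0] for (batch,) in loader)
print(total)  # 10
```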