zalandoresearch / pytorch-ts

A PyTorch-based probabilistic time series forecasting framework built on the GluonTS backend
MIT License
1.24k stars · 191 forks

Exception: Reached maximum number of idle transformation calls. #117

Open iliubenliang opened 1 year ago

iliubenliang commented 1 year ago

I downloaded the pytorch-ts-master.zip file from the project main page and installed it with the following command: python setup.py install

When I try to run the code in the README, I always get the following error. I cannot get rid of it; please help!

I have also tried pip install pytorchts, but then another error, "ModuleNotFoundError: No module named 'gluonts.torch.modules.distribution_output'", pops up.

0% 0/49 [00:00<?, ?it/s]

Exception                                 Traceback (most recent call last)
Cell In [281], line 12
      5 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
      7 estimator = DeepAREstimator(freq="5min",
      8                             prediction_length=12,
      9                             input_size=19,
     10                             trainer=Trainer(epochs=10,
     11                                             device=device))
---> 12 predictor = estimator.train(training_data=training_data, num_workers=0)

File /usr/local/lib/python3.8/dist-packages/pytorchts-0.6.0-py3.8.egg/pts/model/estimator.py:179, in PyTorchEstimator.train(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
    169 def train(
    170     self,
    171     training_data: Dataset,
    (...)
    177     **kwargs,
    178 ) -> PyTorchPredictor:
--> 179     return self.train_model(
    180         training_data,
    181         validation_data,
    182         num_workers=num_workers,
    183         prefetch_factor=prefetch_factor,
    184         shuffle_buffer_length=shuffle_buffer_length,
    185         cache_data=cache_data,
    186         **kwargs,
    187     ).predictor

File /usr/local/lib/python3.8/dist-packages/pytorchts-0.6.0-py3.8.egg/pts/model/estimator.py:151, in PyTorchEstimator.train_model(self, training_data, validation_data, num_workers, prefetch_factor, shuffle_buffer_length, cache_data, **kwargs)
    133 validation_iter_dataset = TransformedIterableDataset(
    134     dataset=validation_data,
    135     transform=transformation
    (...)
    139     cache_data=cache_data,
    140 )
    141 validation_data_loader = DataLoader(
    142     validation_iter_dataset,
    143     batch_size=self.trainer.batch_size,
    (...)
    148     **kwargs,
    149 )
--> 151 self.trainer(
    152     net=trained_net,
    153     train_iter=training_data_loader,
    154     validation_iter=validation_data_loader,
    155 )
    157 return TrainOutput(
    158     transformation=transformation,
    159     trained_net=trained_net,
    (...)
    162     ),
    163 )

File /usr/local/lib/python3.8/dist-packages/pytorchts-0.6.0-py3.8.egg/pts/trainer.py:63, in Trainer.__call__(self, net, train_iter, validation_iter)
     61 # training loop
     62 with tqdm(train_iter, total=total) as it:
---> 63     for batch_no, data_entry in enumerate(it, start=1):
     64         optimizer.zero_grad()
     66         inputs = [v.to(self.device) for v in data_entry.values()]

File /usr/local/lib/python3.8/dist-packages/tqdm/notebook.py:259, in tqdm_notebook.__iter__(self)
    257 try:
    258     it = super(tqdm_notebook, self).__iter__()
--> 259     for obj in it:
    260         # return super(tqdm...) will not catch exception
    261         yield obj
    262 # NB: except ... [ as ...] breaks IPython async KeyboardInterrupt

File /usr/local/lib/python3.8/dist-packages/tqdm/std.py:1195, in tqdm.__iter__(self)
   1192 time = self._time
   1194 try:
-> 1195     for obj in iterable:
   1196         yield obj
   1197         # Update and possibly print the progressbar.
   1198         # Note: does not call self.update(1) for speed optimisation.

File /usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py:681, in _BaseDataLoaderIter.__next__(self)
    678 if self._sampler_iter is None:
    679     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    680     self._reset()  # type: ignore[call-arg]
--> 681 data = self._next_data()
    682 self._num_yielded += 1
    683 if self._dataset_kind == _DatasetKind.Iterable and \
    684         self._IterableDataset_len_called is not None and \
    685         self._num_yielded > self._IterableDataset_len_called:

File /usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py:721, in _SingleProcessDataLoaderIter._next_data(self)
    719 def _next_data(self):
    720     index = self._next_index()  # may raise StopIteration
--> 721     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    722     if self._pin_memory:
    723         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

File /usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)
     30 for _ in possibly_batched_index:
     31     try:
---> 32         data.append(next(self.dataset_iter))
     33     except StopIteration:
     34         self.ended = True

File /usr/local/lib/python3.8/dist-packages/gluonts/transform/_base.py:103, in TransformedDataset.__iter__(self)
    102 def __iter__(self) -> Iterator[DataEntry]:
--> 103     yield from self.transformation(
    104         self.base_dataset, is_train=self.is_train
    105     )

File /usr/local/lib/python3.8/dist-packages/gluonts/transform/_base.py:124, in MapTransformation.__call__(self, data_it, is_train)
    121 def __call__(
    122     self, data_it: Iterable[DataEntry], is_train: bool
    123 ) -> Iterator:
--> 124     for data_entry in data_it:
    125         try:
    126             yield self.map_transform(data_entry.copy(), is_train)

File /usr/local/lib/python3.8/dist-packages/gluonts/transform/_base.py:189, in FlatMapTransformation.__call__(self, data_it, is_train)
    182     yield result
    184 if (
    185     # negative values disable the check
    186     self.max_idle_transforms > 0
    187     and num_idle_transforms > self.max_idle_transforms
    188 ):
--> 189     raise Exception(
    190         "Reached maximum number of idle transformation"
    191         " calls.\nThis means the transformation looped over"
    192         f" {self.max_idle_transforms} inputs without returning any"
    193         " output.\nThis occurred in the following"
    194         f" transformation:\n{self}"
    195     )

Exception: Reached maximum number of idle transformation calls.
This means the transformation looped over 1 inputs without returning any output.
This occurred in the following transformation:
gluonts.transform.split.InstanceSplitter(dummy_value=0.0, forecast_start_field="forecast_start", future_length=12, instance_sampler=gluonts.transform.sampler.ExpectedNumInstanceSampler(axis=-1, min_past=0, min_future=12, num_instances=1.0, total_length=160095, n=15), is_pad_field="is_pad", lead_time=0, output_NTC=True, past_length=24, start_field="start", target_field="target", time_series_fields=["time_feat", "observed_values"])
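For context, the last frame of the traceback shows where this exception comes from: gluonts's FlatMapTransformation counts consecutive inputs for which the transformation produced no output and raises once that count exceeds max_idle_transforms. Below is a minimal sketch of that guard pattern; it is not the gluonts implementation, and the function name and defaults are made up for illustration only.

```python
# A minimal sketch of an idle-transformation guard (not the gluonts code).
from typing import Callable, Iterable, Iterator, TypeVar

T = TypeVar("T")
U = TypeVar("U")


def flat_map_with_idle_guard(
    transform: Callable[[T], Iterable[U]],
    data_it: Iterable[T],
    max_idle_transforms: int = 100,
) -> Iterator[U]:
    """Yield the outputs of transform(entry) for each entry, but give up if too
    many consecutive inputs produce no output at all."""
    num_idle_transforms = 0
    for entry in data_it:
        num_idle_transforms += 1
        for result in transform(entry):
            num_idle_transforms = 0  # producing any output resets the counter
            yield result
        if 0 < max_idle_transforms < num_idle_transforms:
            raise Exception("Reached maximum number of idle transformation calls.")


# A transform that never yields anything trips the guard: with a limit of 1,
# the exception fires on the second consecutive idle input.
try:
    list(flat_map_with_idle_guard(lambda entry: [], range(10), max_idle_transforms=1))
except Exception as exc:
    print(exc)
```

In this thread the trigger turns out to be the gluonts version mismatch addressed in the comments below, not the data itself.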

ken-take-it-so-so commented 1 year ago

Please try pip uninstall pytorchts and pip install gluonts==0.10.0.
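For anyone applying this, a quick way to confirm the environment afterwards is a check along these lines. This is only a sketch; it assumes gluonts exposes __version__ and uses the module path reported missing elsewhere in this thread.

```python
# Quick environment check after pinning gluonts (a sketch, assuming the fix above).
import importlib.util

import gluonts

print("gluonts version:", gluonts.__version__)  # expected: 0.10.0 after the downgrade

# pytorchts 0.6.x imports gluonts.torch.modules.distribution_output; it should
# resolve on gluonts 0.10.0 and is the module reported missing on newer releases.
try:
    spec = importlib.util.find_spec("gluonts.torch.modules.distribution_output")
except ModuleNotFoundError:  # raised if a parent package is missing entirely
    spec = None

print("gluonts.torch.modules.distribution_output importable:", spec is not None)
```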

iliubenliang commented 1 year ago

Please try pip uninstall pytorchts and pip install gluonts==0.10.0.

Works! Thanks a lot.

AbhayGoyal commented 1 year ago

But I get this issue at the same time:


ModuleNotFoundError                       Traceback (most recent call last)
/tmp/ipykernel_1004158/3295418020.py in <module>
      1 from gluonts.dataset.multivariate_grouper import MultivariateGrouper
      2 from gluonts.dataset.repository.datasets import dataset_recipes, get_dataset
----> 3 from pts.model.tempflow import TempFlowEstimator
      4 from pts.model.transformer_tempflow import TransformerTempFlowEstimator
      5 from pts import Trainer

/usr/local/lib/python3.8/dist-packages/pts/model/tempflow/__init__.py in <module>
----> 1 from .tempflow_estimator import TempFlowEstimator
      2 from .tempflow_network import TempFlowTrainingNetwork, TempFlowPredictionNetwork

/usr/local/lib/python3.8/dist-packages/pts/model/tempflow/tempflow_estimator.py in <module>
     35 from pts.model import PyTorchEstimator
     36
---> 37 from .tempflow_network import TempFlowTrainingNetwork, TempFlowPredictionNetwork
     38
     39

/usr/local/lib/python3.8/dist-packages/pts/model/tempflow/tempflow_network.py in <module>
      7
      8 from pts.model import weighted_average
----> 9 from pts.modules import RealNVP, MAF, FlowOutput, MeanScaler, NOPScaler
     10
     11

/usr/local/lib/python3.8/dist-packages/pts/modules/__init__.py in <module>
----> 1 from .distribution_output import (
      2     NormalOutput,
      3     StudentTOutput,
      4     BetaOutput,
      5     PoissonOutput,

/usr/local/lib/python3.8/dist-packages/pts/modules/distribution_output.py in <module>
     32 )
     33 from gluonts.core.component import validated
---> 34 from gluonts.torch.modules.distribution_output import (
     35     DistributionOutput,
     36     LambdaLayer,

ModuleNotFoundError: No module named 'gluonts.torch.modules.distribution_output'

kashif commented 1 year ago

Sorry folks, I have been sick lately. These issues are due to the gluonts API changing, so you can either go back to an older version of gluonts or I need to update the API in pytorch-ts ...
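To illustrate the kind of change involved: pytorch-ts 0.6.x imports DistributionOutput from gluonts.torch.modules.distribution_output, a path that later gluonts releases no longer provide. Below is a hedged sketch of a compatibility import; the fallback path is an assumption about where the class lives in newer gluonts versions and should be checked against the release you actually have installed.

```python
# Compatibility import sketch. The except-branch path is an assumption about
# newer gluonts releases; verify it against your installed version.
try:
    # Old location expected by pytorch-ts 0.6.x (present in gluonts 0.10.x).
    from gluonts.torch.modules.distribution_output import DistributionOutput
except ModuleNotFoundError:
    # Assumed newer location after the gluonts refactor.
    from gluonts.torch.distributions import DistributionOutput

print(DistributionOutput)
```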

AbhayGoyal commented 1 year ago

Which older version should we go to?

LemonCANDY42 commented 1 year ago

Sorry folks, I have been sick lately. These issues are due to the gluonts API changing, so you can either go back to an older version of gluonts or I need to update the API in pytorch-ts ...

Take care!

LemonCANDY42 commented 1 year ago

Which older version should we go to?

For now, I guess gluonts==0.10.0.

kashif commented 1 year ago

Yes, I am in the process of updating the models to the new API in the 0.7.0 branch...

LemonCANDY42 commented 1 year ago

Yes, I am in the process of updating the models to the new API in the 0.7.0 branch...

Awesome! Can you please review my PR, if it helps?