zalandoresearch / pytorch-ts

A PyTorch-based probabilistic time series forecasting framework built on the GluonTS backend
MIT License

Density estimation #91

[Open] ZhuangweiKang opened this issue 2 years ago

ZhuangweiKang commented 2 years ago

Hi, this is amazing work. Is it possible to get the log-likelihood of predicted samples using this framework?

kashif commented 2 years ago

Thanks! Well, typically the training loss is the negative log-likelihood of the observations... but you are right, I do not log it during inference. Let me see if I can add that...

ZhuangweiKang commented 2 years ago

Thank you for answering the question. For inference, the paper shows that the model can start from an initial warm-up time series and then iteratively call the RNN and the flow up to the end of the inference horizon. Could you please point out which function implements this?

kashif commented 2 years ago

@ZhuangweiKang this is a property of all autoregressive models; for example, have a look at:

https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L361

for the loop over the prediction length, where the model samples the next time step, concatenates it to the inputs, and then samples again...
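
For reference, a minimal sketch of what that loop does, assuming a hypothetical batch-first recurrent network `rnn` and a hypothetical distribution head `distr_head` (the real `deepar_network.py` code additionally handles lagged inputs, covariates, and scaling):

```python
import torch

def sample_autoregressively(rnn, distr_head, warmup, prediction_length):
    # warmup: [num_samples, warmup_length, input_dim], the repeated
    # warm-up (conditioning) window, one copy per sample path.
    _, state = rnn(warmup)              # unroll over the warm-up window
    next_input = warmup[:, -1:, :]      # last observed step

    future_samples = []
    for _ in range(prediction_length):
        output, state = rnn(next_input, state)
        distr = distr_head(output)      # e.g. a torch.distributions object
        sample = distr.sample()         # [num_samples, 1, target_dim]
        future_samples.append(sample)
        next_input = sample             # feed the sample back in (the real
                                        # code also re-attaches features/lags)
    # [num_samples, prediction_length, target_dim]
    return torch.cat(future_samples, dim=1)
```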

ZhuangweiKang commented 2 years ago

@kashif Thank you very much. This helps a lot. So when I run inference, I only need to pass a warm-up time series as input to the predict() function, and it will produce the same number of samples as the prediction_length specified during training. Is that correct?

kashif commented 2 years ago

Yes, I believe so... although it will produce a tensor of shape [B, S, T, 1], where B is the batch size, S is the number of samples from the distribution (e.g. 100), and T is the prediction length; for a univariate target the last dimension is 1, while for a multivariate target it is the multivariate dimension.
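
For example, with the GluonTS evaluation helper (a sketch; it assumes a trained `predictor` and a `dataset` with a test split, and the import path can vary across GluonTS versions), each emitted SampleForecast exposes exactly those sample paths:

```python
from gluonts.evaluation.backtest import make_evaluation_predictions

forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset.test,
    predictor=predictor,
    num_samples=100,
)
forecast = next(forecast_it)
# (num_samples, prediction_length) for a univariate target,
# (num_samples, prediction_length, target_dim) for a multivariate one
print(forecast.samples.shape)
```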

ZhuangweiKang commented 2 years ago

I didn't see the batch size dimension.

kashif commented 2 years ago

It's implied by the -1, so that it works for any batch size: https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L407
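
A toy illustration of that trick: PyTorch infers the `-1` dimension from the tensor's total size, so the same reshape recovers the batch dimension whatever the batch size happens to be:

```python
import torch

num_parallel_samples, prediction_length = 100, 24
for batch_size in (1, 8, 32):
    flat = torch.randn(batch_size * num_parallel_samples, prediction_length, 1)
    # -1 is inferred as batch_size from the total number of elements
    samples = flat.reshape(-1, num_parallel_samples, prediction_length, 1)
    assert samples.shape[0] == batch_size
```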

ZhuangweiKang commented 2 years ago

Does this mean I have to retrain the model if I want to forecast with a different prediction length?

kashif commented 2 years ago

No, I do not think so...

ZhuangweiKang commented 2 years ago

So how do I reset the prediction length when I call the predict function? Thanks.

```python
def predict(
    self, dataset: Dataset, num_samples: Optional[int] = None
) -> Iterator[Forecast]:
    inference_data_loader = InferenceDataLoader(
        dataset,
        transform=self.input_transform,
        batch_size=self.batch_size,
        stack_fn=lambda data: batchify(data, self.device),
    )
```

kashif commented 2 years ago

So I do not know what you are trying to do... but typically you set the prediction length to be as large as the test data you have, so that you can then compare the resulting metrics...

If you need a smaller prediction length than the one you trained with, just truncate the prediction; if you want a bigger one, just train with a bigger prediction length...
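
The "truncate" option is just array slicing; a sketch, reusing a `forecast` object like the one above (time is the second axis of `samples`):

```python
horizon = 12  # shorter than the prediction_length the model was trained with
short_samples = forecast.samples[:, :horizon]
point_forecast = short_samples.mean(axis=0)  # e.g. a mean point forecast
```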

ZhuangweiKang commented 2 years ago

Got it. Thanks for your help.

hanlaoshi commented 1 year ago

> No, I do not think so...

From the source code, I think that if you change the prediction length, you must retrain the model, because the input to the RNN must be a tensor of shape (batch_size, sub_seq_len, input_dim), with sub_seq_len = context_length + prediction_length. So if you want to change the prediction length, you must change the metadata via dataset.metadata.prediction_length and re-split dataset.train and dataset.test... Am I right to understand it this way?
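
That matches how the estimator is wired up: the prediction length is fixed when the estimator is constructed, so changing the horizon means building and training a new one. A sketch following the README-style usage (the exact constructor arguments, e.g. `input_size`, vary across pytorch-ts versions):

```python
from pts import Trainer
from pts.model.deepar import DeepAREstimator

estimator = DeepAREstimator(
    freq=dataset.metadata.freq,
    prediction_length=new_prediction_length,  # the new horizon
    input_size=43,  # network input size; depends on lags/features
    trainer=Trainer(epochs=10),
)
predictor = estimator.train(dataset.train)
```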

sergio825 commented 9 months ago

> Thanks! Well, typically the training loss is the negative log-likelihood of the observations... but you are right, I do not log it during inference. Let me see if I can add that...

Hello, could this option be implemented? I would like to access the log-likelihood of each series.
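
Until such an option exists, a minimal sketch of what it would involve, assuming you can get hold of the predicted `torch.distributions` object for a window (here a stand-in Normal) and the observed values `y`: the per-series log-likelihood is just the summed `log_prob`:

```python
import torch
from torch.distributions import Normal

# stand-in for the distribution the network would emit for one window
distr = Normal(loc=torch.zeros(24), scale=torch.ones(24))
y = torch.randn(24)  # observed values over the prediction window

# log-likelihood of the observed series under the predicted distribution
log_likelihood = distr.log_prob(y).sum()
print(log_likelihood.item())
```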