ZhuangweiKang opened 2 years ago

Hi, this is amazing work. Is it possible to get the log-likelihood of predicted samples using this framework?
thanks! well typically the loss is the log-likelihood of the samples... but you are right, I do not log it during inference. Let me see if I can add that...
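For reference, the idea is easy to sketch with torch.distributions: once you have a predicted distribution, scoring observed values under it gives their log-likelihood. This is only a minimal illustration with a stand-in Normal distribution, not pytorch-ts API; in the real model the distribution is parametrized from the RNN state.

```python
import torch
from torch.distributions import Normal

prediction_length = 24
# A Normal stands in for the model's output distribution, which is
# really built from the RNN state at each time step.
distr = Normal(loc=torch.zeros(prediction_length),
               scale=torch.ones(prediction_length))
observed = torch.randn(prediction_length)        # e.g. the ground-truth horizon
log_likelihood = distr.log_prob(observed).sum()  # one scalar per series
print(log_likelihood.item())
```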
Thank you for answering the question. For inference, the paper shows that the model can start from some initial warm-up time series and then iteratively call the RNN and flow until the inference horizon. Could you please point out which function implements this?
@ZhuangweiKang this is a property of all autoregressive models; for example, have a look at:
https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L361
for the loop over the prediction length, where the model samples the next time step, concatenates it, and then samples again...
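In outline, that sampling loop looks something like the sketch below. The names (`rnn`, `distr_from_state`, `last_value`) are placeholders for illustration, not the actual identifiers in the linked deepar_network.py.

```python
import torch

def autoregressive_sample(rnn, distr_from_state, state, last_value,
                          prediction_length):
    """Hypothetical sketch of the autoregressive sampling loop linked above."""
    future_samples = []
    next_input = last_value  # shape: (batch, 1, input_dim)
    for _ in range(prediction_length):
        output, state = rnn(next_input, state)  # unroll the RNN one time step
        distr = distr_from_state(output)        # parametrize the output distribution
        sample = distr.sample()                 # draw the next value
        future_samples.append(sample)
        next_input = sample                     # feed the sample back in: autoregression
    # concatenate along the time axis -> (batch, prediction_length, target_dim)
    return torch.cat(future_samples, dim=1)
```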
@kashif Thank you very much. This helps a lot. So when I run inference, I only need to use a warm-up time series as input to the predict() function, and it will produce the same number of samples as the prediction_length specified in the training function. Is that correct?
yes i believe so... although it will produce a tensor of shape [B, S, T, 1], where B is the batch size, S is the number of samples from the distribution (e.g. 100), and T is the prediction length; for univariate data you get a single output in the last dimension, and for multivariate it will be the multivariate dim.
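As a hedged usage sketch (assuming a trained `predictor` and a `test_data` dataset, which are not from this thread): the [B, S, T, 1] tensor is internal to the network; the predictor splits it per series, so each returned Forecast carries an (S, T) sample array in the univariate case.

```python
# Assumed setup: `predictor` from a prior estimator.train(...) call and a
# GluonTS-style `test_data` iterable of series dicts.
forecasts = predictor.predict(test_data, num_samples=100)
first = next(iter(forecasts))
print(first.samples.shape)  # e.g. (100, prediction_length) per series
```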
I didn't see the batch size dimension.
it's implied by the -1, so that it works for any batch size: https://github.com/zalandoresearch/pytorch-ts/blob/master/pts/model/deepar/deepar_network.py#L407
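The trick is just PyTorch's shape inference: a -1 in reshape lets the batch dimension be derived at run time. A minimal, self-contained illustration (the concrete sizes are made up):

```python
import torch

num_samples, pred_len, target_dim = 100, 24, 1
batch = 32  # could be anything; the reshape below does not hard-code it
# flat samples as produced by the sampling loop: (batch * S, T, target_dim)
flat = torch.randn(batch * num_samples, pred_len, target_dim)
# -1 infers the batch dimension, so the same code works for any batch size
samples = flat.reshape(-1, num_samples, pred_len, target_dim)
print(samples.shape)  # torch.Size([32, 100, 24, 1]) -> [B, S, T, 1]
```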
Does this mean I have to retrain the model if I want to forecast a different horizon?
no I do not think so...
So how do I reset the prediction length when I call the predict function? Thanks.
```python
def predict(
    self, dataset: Dataset, num_samples: Optional[int] = None
) -> Iterator[Forecast]:
    inference_data_loader = InferenceDataLoader(
        dataset,
        transform=self.input_transform,
        batch_size=self.batch_size,
        stack_fn=lambda data: batchify(data, self.device),
    )
```
so i do not know what you are trying to do... but typically you set the prediction length to be as large as you have test data for, so that you can then compare the resulting metrics...
if you require a smaller prediction length than the one you trained with, then just truncate the prediction; if you want a bigger one, then just train with a bigger prediction length...
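In other words, shortening the horizon is a post-processing step. A hedged sketch, assuming `predictor` was trained with prediction_length = 24 and `test_data` exists (both names are illustrative):

```python
# Take a 12-step forecast from a model trained with prediction_length = 24
# by truncating the sample array of each returned Forecast.
short_horizon = 12
for forecast in predictor.predict(test_data):
    short_samples = forecast.samples[:, :short_horizon]  # (num_samples, 12)
```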
Got it. Thanks for your help.
> no I do not think so...
From the source code, I think that if you change the prediction length, you must retrain the model, because the input to the RNN must be a tensor of shape (batch_size, sub_seq_len, input_dim), where sub_seq_len = context_length + prediction_length. So, I mean, if you want to change the prediction length, you must change the meta information in dataset.metadata.prediction_length and re-split dataset.train and dataset.test... Am I right to understand it this way?
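For completeness, the retraining route would look roughly like the sketch below. The exact estimator arguments vary between pytorch-ts versions, so treat this as an outline rather than a copy-paste recipe:

```python
# Rebuild the estimator with the new horizon and retrain; the argument
# values here are illustrative and may not match your pts version exactly.
from pts import Trainer
from pts.model.deepar import DeepAREstimator

estimator = DeepAREstimator(
    freq=dataset.metadata.freq,
    prediction_length=48,  # the new horizon
    input_size=19,         # depends on your features; see the pts README example
    trainer=Trainer(epochs=10),
)
predictor = estimator.train(training_data=dataset.train)
```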
> thanks! well typically the loss is the log-likelihood of the samples... but you are right, I do not log it during inference. Let me see if I can add that...
Hello, could this option be implemented? I would like to access the log-likelihood of each series.