time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

Measuring Perplexity (PPL)? #59

Closed hotdogstar closed 1 month ago

hotdogstar commented 1 month ago

Hi there! First off, great job on the paper and model. I was wondering if you know of a method to measure the perplexity of a forecasted time series? I understand that samples are drawn using the idea from IQN for Distributional Reinforcement Learning, but is there still a way to get all probabilities/logits for the next forecast so as to calculate this?

ashok-arjun commented 1 month ago

Hi, thanks for the kind words.

We use the negative log likelihood as the loss function, so when you pass a series through our pretrained model you can obtain the negative log likelihood of any prediction; it's basically the loss. (By "prediction" I mean the one-step-ahead forecast used in the loss computation. If you want it to mean a multi-step-ahead forecast instead, make sure you set the estimator to the right prediction length before forecasting.) I believe you can calculate the perplexity from this value.
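To illustrate the relationship: perplexity is just the exponential of the mean per-step negative log likelihood, so once you have NLL values from the model's loss, the conversion is a one-liner. Here is a minimal sketch; the `nll` values below are made up for illustration, not outputs of Lag-Llama:

```python
import math

def perplexity_from_nll(nll_values):
    """Perplexity = exp(mean negative log likelihood)."""
    return math.exp(sum(nll_values) / len(nll_values))

# Hypothetical per-step NLLs, e.g. collected from the model's loss
nll = [2.1, 1.8, 2.4]
ppl = perplexity_from_nll(nll)  # exp(2.1), since the mean NLL is 2.1
```

Note that for a continuous predictive distribution this is a perplexity over densities rather than over discrete token probabilities, so values are comparable across models on the same data but are not bounded the way vocabulary-based perplexities are.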

I'm not sure what you mean by obtaining the probabilities/logits of the next forecast, as our model does not produce discrete outputs. But again, you don't need such probabilities/logits to compute perplexity.

hotdogstar commented 1 month ago

Thanks a lot!