Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting

What Lags you use when pre-training the lag-llama? #36

Open SpeeeedLee opened 3 months ago

SpeeeedLee commented 3 months ago

Hello, it's Arthur. Thank you for your great work! I would like to ask whether it is possible to get the specific lag indices you use during the pre-training or zero-shot phases.

In the Colab tutorial notebook, the context length is set to 32, and the maximum potential lag index can be as large as 1092. However, the exact indices used to tokenize the 32 historical time points remain unclear to me. Do you use all 1092 lags, or only a specific subset?

Thank you!

SpeeeedLee commented 3 months ago

Also, I do not fully understand how the model produces predictions for multiple future time points autoregressively at inference time.

Say I want to make predictions for 3 future steps, with 100 trajectories. Does it work as follows?

for _ in range(100):

  1. Get the t-distribution parameters (Parameter_1) for the first future day.
  2. Sample one value (Data_1) from the t-distribution with Parameter_1.
  3. Get Parameter_2 by feeding Data_1 back in as context.
  4. Sample one value (Data_2) from the t-distribution with Parameter_2.
  5. Repeat once more with Data_2 to obtain Parameter_3 and sample Data_3.
  6. trajectories.append([Data_1, Data_2, Data_3])

Additionally, it would be great if you could give me a hint on where to find the detailed code for autoregressive prediction. Thanks!

sudongwang-upc commented 3 months ago

https://github.com/time-series-foundation-models/lag-llama/blob/7454088166d02cb49d3d364a56c8379ac60325f5/lag_llama/gluon/lightning_module.py#L229-L261
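For intuition, the loop in that method does roughly the following. This is a simplified sketch, and names such as model.next_step_distribution are hypothetical stand-ins, not the repository's actual API:

    import torch

    # Simplified sketch of autoregressive sampling; `model.next_step_distribution`
    # is a hypothetical stand-in for the model's forward pass + distribution head.
    def sample_trajectories(model, context, prediction_length, num_parallel_samples=100):
        # Replicate the context so that each trajectory evolves independently.
        seq = context.repeat(num_parallel_samples, 1)   # (S, context_len)
        draws = []
        for _ in range(prediction_length):
            dist = model.next_step_distribution(seq)    # Student-t over the next step
            sample = dist.sample()                      # one draw per trajectory, (S, 1)
            draws.append(sample)
            seq = torch.cat([seq, sample], dim=1)       # feed the draw back as context
        return torch.cat(draws, dim=1)                  # (S, prediction_length)

In other words, the trajectories are built step by step exactly as in the numbered sketch above, except that all sample paths are advanced in parallel rather than in an outer Python loop.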

ashok-arjun commented 2 months ago

Hi @SpeeeedLee,

The 32 historical time-series points are consecutive, sampled from immediately before the timesteps to be predicted.

The lags, however, can be sampled from well beyond this 32-length context, but sparsely, at the positions given by the lag indices. The figure below may help clarify the difference.

[Figure: diagram contrasting the consecutive 32-point context window with the sparse lag positions reaching further back into the history]
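Concretely, each of the 32 context points is tokenized together with values gathered from further back in the history at those lag offsets. Below is a toy sketch of that gathering step, with illustrative indexing only; the repository's real feature construction is done via GluonTS utilities:

    import numpy as np

    # Toy illustration (not the repo's actual code): gather the lagged values
    # that accompany the context point at index t. `lags_seq` holds 0-based
    # offsets, so lag 0 refers to the immediately preceding value.
    def gather_lag_features(history, t, lags_seq):
        return np.array([history[t - 1 - lag] for lag in lags_seq])

    # Example: with daily data, lag 0 is yesterday and lag 364 is roughly
    # the same day one year earlier.
    x = np.arange(2000, dtype=float)
    features = gather_lag_features(x, t=1500, lags_seq=[0, 7, 28, 364])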

As for the indices of the lags: in our experiments, we use lags derived from a set of frequencies, up to a certain maximum lag.

The frequencies are denoted here: https://github.com/time-series-foundation-models/lag-llama/blob/35f62a9973e87c5089cd2e199c6ef2f3093b851e/lag_llama/gluon/estimator.py#L137

The corresponding code that builds the lags is here; we use the get_lags_for_frequency function of GluonTS: https://github.com/time-series-foundation-models/lag-llama/blob/35f62a9973e87c5089cd2e199c6ef2f3093b851e/lag_llama/gluon/estimator.py#L158-L161

To give an example, the lags for the "D" (daily) frequency look like this:

[0, 7, 12, 13, 14, 19, 20, 21, 26, 27, 28, 29, 30, 55, 83, 362, 363, 364, 726, 727, 728, 1090, 1091, 1092]

The actual lag indices, combined across all these frequencies, are:

[0, 7, 8, 10, 11, 12, 13, 14, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 34, 35, 36, 46, 47, 48, 50, 51, 52, 55, 57, 58, 59, 60, 61, 70, 71, 72, 83, 94, 95, 96, 102, 103, 104, 117, 118, 119, 120, 121, 142, 143, 144, 154, 155, 156, 166, 167, 168, 177, 178, 179, 180, 181, 334, 335, 336, 362, 363, 364, 502, 503, 504, 670, 671, 672, 718, 719, 720, 726, 727, 728, 1090, 1091, 1092]

For the code GluonTS uses to generate these lag indices, you can refer to the source code of the get_lags_for_frequency function.
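To illustrate, here is a minimal sketch of how such a merged lag list can be produced with GluonTS. The frequency list and the num_default_lags=1 / shift-by-one post-processing are assumptions chosen to reproduce the 0-based daily lags shown above; the authoritative code is at the links earlier in this comment:

    from itertools import chain

    from gluonts.time_feature import get_lags_for_frequency

    # Frequency strings assumed for illustration; the actual set is defined
    # in lag_llama/gluon/estimator.py.
    freqs = ["Q", "M", "W", "D", "H", "T", "S"]

    # Collect per-frequency lags and merge them into one sorted, de-duplicated list.
    lags = sorted(set(chain.from_iterable(
        get_lags_for_frequency(freq_str=freq, num_default_lags=1) for freq in freqs
    )))

    # GluonTS lags are 1-based (lag 1 = previous step); shifting by one yields
    # the 0-based indices listed above, e.g. [0, 7, 8, 10, 11, 12, ...].
    lags = [lag - 1 for lag in lags]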

ashok-arjun commented 2 months ago

Feel free to follow up if you have further questions, or close the issue if this answers them. Thanks!

YuMeng2v commented 2 months ago

Hi, I actually want to run the pretrained model, and I tried to load lags_seq from the pretrained checkpoint. For example:

    LagLlamaEstimator(
        ckpt_path="lag-llama.ckpt",
        prediction_length=prediction_length,
        context_length=context_length,
        nonnegative_pred_samples=True,
        aug_prob=0,
        lr=5e-4,
        lags_seq=estimator_args["lags_seq"],
    )

This code raises an error saying that lags_seq should contain frequency strings like ["M", "D", ...], not the integer indices [0, 7, 8, 10, 11, 12, 13, 14, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 34, 35, ...] that you gave.

arthur-b1 commented 2 months ago

Hi,

Should I adjust lags_seq based on the frequency of my time series? For daily data, is it correct to use lags_seq = ["Q", "M", "W", "D"] and to exclude the hour, minute, and second lags, since they might not make sense at daily intervals?

Thanks for your help.

ashok-arjun commented 2 months ago

@YuMeng2v The lag sequence cannot be modified for a pretrained model, so if you're loading the released model, the lags_seq parameter shouldn't be passed.

If you are training your own model from scratch (not finetuning), you may set that parameter using the frequencies.
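In code, the two cases might look like the sketch below. The constructor arguments are taken from the snippet earlier in this thread, the import path assumes the repository layout, and the frequency strings in the from-scratch case are illustrative:

    from lag_llama.gluon.estimator import LagLlamaEstimator

    # Assumes prediction_length and context_length are defined as in the
    # Colab notebook.

    # Case 1: loading the released checkpoint. Do NOT pass lags_seq; the
    # lag set is fixed by the pretrained weights.
    estimator = LagLlamaEstimator(
        ckpt_path="lag-llama.ckpt",
        prediction_length=prediction_length,
        context_length=context_length,
        nonnegative_pred_samples=True,
    )

    # Case 2: training from scratch. Here lags_seq may be chosen via
    # frequency strings (illustrative values).
    estimator = LagLlamaEstimator(
        prediction_length=prediction_length,
        context_length=context_length,
        lags_seq=["Q", "M", "W", "D"],
    )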

ashok-arjun commented 2 months ago

@arthur-b1 The model uses all lags, as it is a generic model independent of the frequency. The lags cannot be modified for a trained model, because the input size of the first MLP depends on the number of lags.

For your own data, the most useful lag will ideally be the one matching your data's frequency, but the other lags shouldn't hurt.

YuMeng2v commented 2 months ago

Thank you! I didn't pass the ckpt_path to the estimator, so I think it's actually pretraining?

ashok-arjun commented 2 months ago

You are passing ckpt_path, as seen in the code you posted:

    LagLlamaEstimator(
        ckpt_path="lag-llama.ckpt",
        prediction_length=prediction_length,
        context_length=context_length,
        nonnegative_pred_samples=True,
        aug_prob=0,
        lr=5e-4,
        lags_seq=estimator_args["lags_seq"],
    )

ashok-arjun commented 3 weeks ago

Hi, just checking if this issue is resolved.