time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

Is your pre-training "self-supervised"? #76

Open well0203 opened 5 months ago

well0203 commented 5 months ago

Hi, I found a literature review that categorises your model as "self-supervised generative", but as I understood your article, you use labels during pre-training (essentially the same approach as during fine-tuning). Could you please help clarify this for me?

ashok-arjun commented 5 months ago

Hi! Thanks for the issue.

In time series forecasting, there are no separate labels. During training, we just take the data within a context length and do 1-step-ahead prediction at each timestep.

So I'm not sure we fall under the "supervised" bucket. Although we are not "self-supervised" under the same definition as other papers, I'd say it's an acceptable categorization of our model.
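To illustrate the point (this is a minimal sketch, not the actual Lag-Llama training code), the 1-step-ahead targets in forecasting come from the series itself, simply shifted by one step; no labels beyond the raw series are needed:

```python
def make_training_pairs(series, context_length):
    """For each position t, the input is series[t : t+C] and the target
    is series[t+1 : t+C+1]: the same series shifted one step ahead.
    The "labels" are just future values of the raw series."""
    pairs = []
    for t in range(len(series) - context_length):
        context = series[t : t + context_length]
        target = series[t + 1 : t + context_length + 1]
        pairs.append((context, target))
    return pairs

series = list(range(10))
pairs = make_training_pairs(series, context_length=4)
# first pair: context [0, 1, 2, 3] -> targets [1, 2, 3, 4]
```

This is why the "self-supervised" label is arguable: the supervision signal is constructed from the unlabeled series itself, much like next-token prediction in language modeling.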