
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting

A quick guide to fine-tune from pandas dataframe #47

Open · pranjal-joshi opened this issue 7 months ago

pranjal-joshi commented 7 months ago

Hi, the current fine-tuning notebook on Colab uses GluonTS datasets. Could you quickly demonstrate fine-tuning on an open-source dataset loaded from a pandas dataframe? That would be helpful for a larger audience. Thanks.

ashok-arjun commented 7 months ago

Hi Pranjal,

I agree; that'd be useful.

Unfortunately, I haven't had the time to put together a full-fledged fine-tuning demo yet.

My initial expectation is that things should work out of the box if the code to load data from any dataset (taken from Colab demo 1) is put together with the fine-tuning code. I haven't tested this yet. Have you tried doing this?
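
For reference, a rough sketch of what the data-loading half could look like, using GluonTS's PandasDataset (the file name and the timestamp/item_id/target column names are placeholders for your own data):

import pandas as pd
from gluonts.dataset.pandas import PandasDataset

# Hypothetical long-format dataframe: one row per (series, timestamp) pair,
# indexed by timestamp, with an "item_id" column and a "target" value column.
df = pd.read_csv("your_data.csv", parse_dates=["timestamp"], index_col="timestamp")

# Build one dataset entry per series, grouped by item_id.
dataset = PandasDataset.from_long_dataframe(
    df,
    target="target",    # placeholder: name of the value column
    item_id="item_id",  # placeholder: name of the series-id column
    freq="H",           # placeholder: sampling frequency of your data
)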

julian-vp commented 7 months ago

Hi Arjun,

Great job with this project!

I think it would be useful if you could point us to some examples of how to prepare the train and test datasets for the fine-tuning code from a pandas dataframe.

Regards.

pranjal-joshi commented 7 months ago

Hi @ashok-arjun

The Colab-1 code worked for creating a GluonTS dataset from pandas, and zero-shot forecasting also worked on it. I will update here once I experiment with fine-tuning on this dataset.

Thanks for pointing me in the right direction.
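
@julian-vp For preparing train and test sets from such a dataset, GluonTS's split utility should work; roughly like this (a sketch only, with a placeholder horizon and window count):

from gluonts.dataset.split import split

prediction_length = 24  # placeholder: forecast horizon

# Hold out the last three windows of each series for testing.
train_dataset, test_template = split(dataset, offset=-3 * prediction_length)
test_dataset = test_template.generate_instances(
    prediction_length=prediction_length,
    windows=3,  # number of rolling evaluation windows per series
)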

valentinafarrelli commented 1 month ago

The same strategy as for zero-shot works for fine-tuning too, e.g.:

predictor = estimator.train(PandasDataset(df, target="your_target_variable"), cache_data=True, shuffle_buffer_length=10)
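
For completeness, here is roughly how the pieces fit together end to end. This is a sketch only, based on the public fine-tuning Colab demo: the checkpoint path, forecast horizon, context length, and trainer settings are placeholders, and the remaining estimator arguments are read from the pretrained checkpoint's hyperparameters as in the demo, so double-check the names against your version of the repo.

import pandas as pd
import torch
from gluonts.dataset.pandas import PandasDataset
from lag_llama.gluon.estimator import LagLlamaEstimator

ckpt_path = "lag-llama.ckpt"  # placeholder: path to the pretrained checkpoint
prediction_length = 24        # placeholder: your forecast horizon

# Your dataframe, indexed by timestamp (see the earlier snippets).
df = pd.read_csv("your_data.csv", parse_dates=["timestamp"], index_col="timestamp")

# Read the model hyperparameters stored in the checkpoint, as in the demo.
ckpt = torch.load(ckpt_path, map_location="cpu")
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,
    prediction_length=prediction_length,
    context_length=32,  # placeholder: context window length
    input_size=estimator_args["input_size"],
    n_layer=estimator_args["n_layer"],
    n_embd_per_head=estimator_args["n_embd_per_head"],
    n_head=estimator_args["n_head"],
    scaling=estimator_args["scaling"],
    time_feat=estimator_args["time_feat"],
    batch_size=64,                      # placeholder
    trainer_kwargs={"max_epochs": 50},  # placeholder training budget
)

# Fine-tune directly on the pandas-backed dataset.
predictor = estimator.train(
    PandasDataset(df, target="your_target_variable"),
    cache_data=True,
    shuffle_buffer_length=10,
)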