time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

How to reuse best checkpoint and tuned hparams.yaml to predict results #39

Open YonDraco opened 2 months ago

YonDraco commented 2 months ago

I used the best checkpoint after tuning, then used the get_lag_llama_predictions function to predict from it. However, the predicted results are much worse than the results I saw while tuning and training that checkpoint.

ashok-arjun commented 2 months ago

Sorry, I don't understand. Can you elaborate? Which checkpoint did you use?

YonDraco commented 2 months ago

@ashok-arjun I saved the best checkpoint epoch=36-step=1850.ckpt after tuning and used the get_lag_llama_predictions function, as in Colab demo 2, to make predictions from this checkpoint. However, when I load this checkpoint to make predictions, the results are worse than they were during tuning. So I think I have to load both the checkpoint and hparams.yaml, but I don't know how to handle them.
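For reference, this is roughly what I am trying, following the pattern from Colab demo 2: a minimal, untested sketch that rebuilds the estimator from the fine-tuned checkpoint's saved hyperparameters and runs predictions. The `ckpt["hyper_parameters"]["model_kwargs"]` key layout, the prediction/context lengths, and the `dataset` placeholder are assumptions on my side, not something confirmed by the repo.

```python
# Minimal sketch (untested), assuming the key layout used in Colab demo 2.
import torch
from lag_llama.gluon.estimator import LagLlamaEstimator
from gluonts.evaluation import make_evaluation_predictions

ckpt_path = "epoch=36-step=1850.ckpt"  # fine-tuned checkpoint from this thread
ckpt = torch.load(ckpt_path, map_location="cpu")
# Assumed: Lightning stored the model kwargs under this key (as in the demo).
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]

estimator = LagLlamaEstimator(
    ckpt_path=ckpt_path,        # weights come from the fine-tuned checkpoint
    prediction_length=24,       # placeholder: use the horizon from fine-tuning
    context_length=32,          # placeholder: use the context length from fine-tuning
    input_size=estimator_args["input_size"],
    n_layer=estimator_args["n_layer"],
    n_embd_per_head=estimator_args["n_embd_per_head"],
    n_head=estimator_args["n_head"],
    scaling=estimator_args["scaling"],
    time_feat=estimator_args["time_feat"],
)

lightning_module = estimator.create_lightning_module()
transformation = estimator.create_transformation()
predictor = estimator.create_predictor(transformation, lightning_module)

# dataset: placeholder for the same GluonTS-compatible dataset used at fine-tuning time
forecast_it, ts_it = make_evaluation_predictions(
    dataset=dataset, predictor=predictor, num_samples=100
)
forecasts = list(forecast_it)
tss = list(ts_it)
```

My question is whether the values saved in hparams.yaml also need to be passed here (e.g. the prediction and context lengths), or whether reading them from the checkpoint like this is enough.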

ashok-arjun commented 2 months ago

Sorry, I don't understand what "tuning" refers to here. Do you mean finetuning or training?