
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

Problems on fine-tuning on my own data #29

Open FuZixin opened 3 months ago

FuZixin commented 3 months ago

I'm trying to fine-tune the model on my own data. When I use the fine-tuned model to make predictions, lag-llama returns all-zero predictions, which does not reproduce the fine-tuning results from the demo. I followed these steps:

  1. Call `from_long_dataframe` to convert the data from dataframe format to the standard format. (screenshots attached)

  2. The data contains 7,000 data points at one-second granularity. I use the first 4,000 points for fine-tuning, and the remaining 3,000 points as model input; the task is to predict the last 60 points. (screenshot attached)

  3. The prediction result is shown in the attached figure: lag-llama outputs 60 zero values. (screenshot attached)

What could be the cause of this, or which step did I get wrong?
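For reference, the train/context/target split described in the steps above can be sketched as follows. This is a minimal pure-Python illustration of the split, not the lag-llama or GluonTS API; the variable names are hypothetical, and `series` stands in for the real second-granularity values.

```python
# Hypothetical sketch of the data split from the issue description:
# 7,000 points total, first 4,000 for fine-tuning, last 60 as the
# forecast target, with the points in between available as context.
total_points = 7000
train_size = 4000
prediction_length = 60

series = [float(i) for i in range(total_points)]  # stand-in for the real data

train_series = series[:train_size]                              # fine-tuning data
context_series = series[train_size:total_points - prediction_length]
target = series[-prediction_length:]                            # values to predict

print(len(train_series), len(context_series), len(target))
```

If the last 3,000 points are passed to the model as input, only the first 2,940 of them act as context, since the final 60 are the values being forecast.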

ashok-arjun commented 3 months ago

Hi @FuZixin, thanks for the detailed description of the issue.

Thanks, Arjun

FuZixin commented 3 months ago

Thank you for your reply:

However, the predictions from the fine-tuned model show a large offset from the true values; we are currently working on this issue. (screenshot attached) If you have any comments or suggestions, please feel free to share them :)

ashok-arjun commented 2 months ago

I see, that's very useful to know.

We have updated the repo with best practices for fine-tuning. Maybe that could help.

simona-0 commented 1 day ago

Hello @FuZixin, did you perhaps set nonnegative_pred_samples=True when you called LagLlamaEstimator? Setting this to True restricts the model to nonnegative predictions.
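A minimal sketch of what this flag presumably does: the sampled forecasts are clamped at zero, so if the raw samples from the fine-tuned model happen to fall below zero, every clamped prediction comes out as exactly zero. This is a pure-Python illustration of the clamping behavior, not the actual lag-llama implementation, and the sample values are made up.

```python
# Hypothetical raw forecast samples that all fall below zero,
# e.g. from a model whose output scale drifted during fine-tuning.
raw_samples = [-0.8, -0.1, -2.3, -0.5]

# Clamping at zero (what a nonnegative-samples option would do)
# turns every one of them into 0.0 - matching the all-zero forecast.
clamped = [max(0.0, s) for s in raw_samples]
print(clamped)  # -> [0.0, 0.0, 0.0, 0.0]
```

So if the target series is genuinely allowed to be negative, or the model's raw samples end up below zero after fine-tuning, this flag would explain the all-zero output.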