Closed quanpr closed 5 months ago
Hi Pengrui, thanks for your interest in MOMENT!
I see that you are using MOMENT in forecasting mode (`task_name = 'forecasting'`):

```python
model = MOMENTPipeline.from_pretrained(
    'AutonLab/MOMENT-1-large',
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': prediction_length,
        'head_dropout': 0.1,
        'weight_decay': 0,
    },
)
```
In this mode, a randomly initialized linear layer maps embeddings of historical time series to the forecasting horizon. Hence, the forecasts are essentially random.
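To illustrate why an untrained head yields random forecasts, here is a minimal NumPy sketch of the idea — a toy stand-in, not MOMENT's actual head (the sizes `d_model` and `horizon` are illustrative):

```python
import numpy as np

# Toy stand-in for a randomly initialized forecasting head: a linear
# projection from an embedding vector to the forecast horizon. Until the
# weights are fine-tuned, they are just noise, so the projected "forecast"
# carries no signal about the input series.
rng = np.random.default_rng(0)
d_model, horizon = 1024, 64  # illustrative sizes

W = rng.normal(scale=d_model ** -0.5, size=(d_model, horizon))  # untrained weights
embedding = rng.normal(size=d_model)  # pretend encoder output for the history
forecast = embedding @ W              # essentially random values
```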
Here are two ways to use MOMENT for forecasting:

1. Fine-tune the randomly initialized forecasting head on your data.
2. Load the model in reconstruction mode (`task_name = 'reconstruction'`) and use masked reconstruction for forecasting: mask the last 1 or 2 patches, which correspond to forecast horizons of 8 and 16 time steps, respectively. You can pass a `mask` in the model's forward pass, as shown in the imputation notebook, to reconstruct the masked patches and thereby forecast future time steps.

Let us know if you have any more questions! And thanks again for your interest in MOMENT!
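For reference, building such a mask can be sketched as follows. This is a minimal illustration, assuming a patch length of 8 and the convention 1 = observed, 0 = masked; the helper name `build_forecast_mask` is hypothetical, and the exact forward-pass arguments are as in the imputation notebook:

```python
import numpy as np

# MOMENT tokenizes a 512-step input into patches of length 8. Masking the
# last 1 or 2 patches turns reconstruction into an 8- or 16-step forecast.
def build_forecast_mask(seq_len=512, patch_len=8, n_masked_patches=2):
    """Per-timestep mask: 1 for observed history, 0 for the steps to forecast."""
    mask = np.ones(seq_len, dtype=np.int64)
    mask[-n_masked_patches * patch_len:] = 0
    return mask

mask = build_forecast_mask()  # last 16 steps masked -> 16-step forecast
```

The masked positions are then filled in by the model's reconstruction, and those reconstructed values serve as the forecast.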
Thanks so much for the detailed illustration!
Hi,
Thanks for sharing this great work.
I am using MOMENT for zero-shot time series forecasting, but the results on a synthetic sine wave are really bad. I wonder if I've missed anything. Or is MOMENT intended to be used in a fine-tuned manner?
The model I used was:

```python
model = MOMENTPipeline.from_pretrained(
    'AutonLab/MOMENT-1-large',
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': prediction_length,
        'head_dropout': 0.1,
        'weight_decay': 0,
    },
)
```
The prediction horizon is 64, and the lookback window is 512.
Best, Pengrui