Closed · lxs211 closed this issue 2 months ago
Thanks for using neuralforecast!
If I understand correctly, your question is why you are observing forecast errors that do not match your expectation from reading the results in the Autoformer paper?
In general, we make sure the performance of the models in our libraries is close to or better than the results reported in the original papers. Hence, when you observe this kind of result, the most likely explanation is that something is incorrect in the training pipeline, as is the case here.
There are four issues. First, `max_steps` is not the number of epochs but the number of training iterations. Second, the `input_size` is incorrect; it should be 36, as in the paper. Third, the paper uses MSE as the loss function. Finally, the frequency for the dataset in your code is incorrect. I've made a number of changes to follow the original paper more closely. See below the changes you should make:

```python
from neuralforecast.losses.pytorch import MSE

models = [
    Autoformer(h=horizon,
               input_size=36,
               max_steps=1000,
               val_check_steps=100,
               early_stop_patience_steps=3,
               loss=MSE()
               ),
]
nf = NeuralForecast(
    models=models,
    freq='W')
```
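One subtlety worth spelling out: `max_steps` counts optimizer iterations, not epochs. A minimal sketch of converting a desired epoch count into an equivalent `max_steps` value (the training-set size and batch size below are hypothetical, not from the issue):

```python
import math

def epochs_to_max_steps(n_train_samples, batch_size, n_epochs):
    """Convert a desired number of epochs into the equivalent number
    of training iterations (which is what `max_steps` counts)."""
    steps_per_epoch = math.ceil(n_train_samples / batch_size)
    return n_epochs * steps_per_epoch

# Hypothetical numbers: 966 training windows, batch size 32, 10 epochs
print(epochs_to_max_steps(966, 32, 10))  # 310 iterations, not 10
```

So setting `max_steps=200` with a non-trivial dataset can mean far fewer than 200 epochs of training.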
This gives me the following results: MAE: 1.325; MSE: 3.680, which is close enough to the original paper.
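To double-check the `freq` argument against your data, you can let pandas infer it from the timestamp column. A small sketch, assuming a weekly series like ILI (the dates here are made up):

```python
import pandas as pd

# Hypothetical weekly timestamps, standing in for the 'ds' column
ds = pd.date_range('2020-01-06', periods=8, freq='W-MON')

# infer_freq returns a pandas offset alias you can pass as `freq`
print(pd.infer_freq(ds))  # 'W-MON' -- a weekly alias, not '15min'
```

If the inferred alias disagrees with the `freq` you passed to `NeuralForecast`, the windowing of the series will be wrong.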
Does this solve your issue?
Thank you very much for your answer.
What happened + What you expected to happen
Using the Autoformer implementation from this library, I get worse results than with the original authors' source code; the original implementation performs noticeably better.
Results with the neuralforecast library Autoformer: MAE: 1.599, MSE: 4.801
Versions / Dependencies
neuralforecast==1.7.0
Reproduction script
```python
import pandas as pd
import matplotlib.pyplot as plt

from datasetsforecast.long_horizon import LongHorizon
from neuralforecast.core import NeuralForecast
from neuralforecast.models import Autoformer, FEDformer, NLinear, PatchTST
from neuralforecast.losses.numpy import mae, mse

# Change this to your own data to try the model
Y_df, _, _ = LongHorizon.load(directory='./', group='ILI')
Y_df['ds'] = pd.to_datetime(Y_df['ds'])

n_time = len(Y_df.ds.unique())
val_size = int(.1 * n_time)
test_size = int(.2 * n_time)
Y_df.groupby('unique_id').head(2)

horizon = 24  # 24hrs = 4 * 15 min.
models = [
    Autoformer(h=horizon,
               input_size=horizon,
               max_steps=200,
               val_check_steps=100,
               early_stop_patience_steps=3),
]
nf = NeuralForecast(
    models=models,
    freq='15min')

Y_hat_df = nf.cross_validation(df=Y_df,
                               val_size=val_size,
                               test_size=test_size,
                               n_windows=None)
Y_hat_df.head()
Y_hat_df = Y_hat_df.reset_index()

Y_plot = Y_hat_df[Y_hat_df['unique_id'] == 'OT']  # OT dataset
cutoffs = Y_hat_df['cutoff'].unique()[::horizon]
Y_plot = Y_plot[Y_hat_df['cutoff'].isin(cutoffs)]

mae_autoformer = mae(Y_hat_df['y'], Y_hat_df['Autoformer'])
mse_autoformer = mse(Y_hat_df['y'], Y_hat_df['Autoformer'])
print(f'Autoformer: MAE: {mae_autoformer:.3f}, MSE: {mse_autoformer:.3f}')
```
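For reference, the numpy losses used at the end of the script follow the textbook definitions. A self-contained sketch (reimplementing them here for illustration, not the library's actual code) with a tiny hand-checkable example:

```python
import numpy as np

def mae_np(y, y_hat):
    # Mean absolute error: average of |y - y_hat|
    return np.mean(np.abs(np.asarray(y) - np.asarray(y_hat)))

def mse_np(y, y_hat):
    # Mean squared error: average of (y - y_hat)^2
    return np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)

y = [1.0, 2.0, 3.0]
y_hat = [1.5, 2.0, 2.0]
print(mae_np(y, y_hat))  # 0.5
print(mse_np(y, y_hat))  # ~0.4167
```

MSE penalizes large errors quadratically, which is why switching the training loss to MSE (as the paper does) changes which errors the model prioritizes.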
Issue Severity
Low: It annoys or frustrates me.