Kevin-Chen0 opened this issue 3 months ago
The Meta Linear Regression model sometimes gives erratic outputs; we need to look into it. The Meta WA model is the most consistent performer, so we should update the default model configuration to use Meta WA everywhere.
@SairamVenkatachalam, can you try passing this customized model request into the Streamlit app as a .yaml or .json file? It is taken from api_call_example_exovar.ipynb, and I want to see whether the Streamlit app's forecast results, using the custom model parameters, match those in the exovar example notebook.
```python
# Onboard NeuralProphet customized model request
np_external_model_request = {
    'type': 'neuralprophet',
    'metrics': ['rmse', 'mae'],
    # 'metrics': ['smape', 'mase'],
    'params': {
        'lagged_regressors': [
            {'index': 0},
            {'index': 1},
            {'index': 2},
            {'index': 3},
            {'index': 4},
            {'index': 5},
            {'index': 6},
            {'index': 7},
            {'index': 8},
        ],
        'epochs': 5,
    },
}
# np_external_model_request = None

# Customized model request
model_request = {
    'type': 'meta_wa',  # 'meta_naive', 'meta_wa'
    'scorers': ['mase', 'smape'],
    'params': {
        'preprocessors': [
            {'type': 'dartsimputer', 'params': {'fill': 'auto'}},
            {'type': 'simpleimputer', 'params': {'strategy': 'mean'}},
            {'type': 'minmaxscaler'},
        ],
        'base_models': [
            {'type': 'darts_naive'},
            {'type': 'darts_seasonalnaive'},
            {'type': 'darts_autotheta'},
            # {'type': 'stats_autotheta'},
            {'type': 'darts_autoets'},
            # {'type': 'stats_autoets'},
            {'type': 'darts_autoarima'},
            # {'type': 'stats_autoarima'},
            # {'type': 'darts_tbats'},
            # {'type': 'darts_linearregression'},
            {'type': 'darts_lightgbm',
             'params': {
                 'lags': 12,
                 'lags_future_covariates': [0, 1, 2],
                 'output_chunk_length': 6,
                 'verbose': -1,
             }},  # 'lags_past_covariates'
            {'type': 'darts_rnn',
             'params': {
                 'model': 'LSTM',
                 'hidden_dim': 10,
                 'n_rnn_layers': 3,
             }},
            {'type': 'neuralprophet',
             'external_params': np_external_model_request},  # Onboard NeuralProphet external service
        ],
    },
}
```
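To get the request into the Streamlit app as a file, one option is to serialize the dict to JSON. This is only a sketch: the file name and the trimmed request below are illustrative, and the exact schema the app's file upload expects is an assumption to verify against the app itself.

```python
import json

# Trimmed stand-in for the full customized model request above.
model_request = {
    'type': 'meta_wa',
    'scorers': ['mase', 'smape'],
    'params': {
        'base_models': [
            {'type': 'darts_naive'},
            {'type': 'darts_seasonalnaive'},
        ],
    },
}

# Write it out as JSON; a .yaml variant would use pyyaml's yaml.safe_dump.
with open('model_request.json', 'w') as f:
    json.dump(model_request, f, indent=2)
```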
Thanks.
Try inputting temp_anom_w_forcing.csv, split into train and test sets (see attached). Insert the train set into the Train Function with default model parameters, then the test set into the Forecast Functions.
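For the split, a minimal sketch that holds out the last 20% of rows as the test set while keeping time order (no shuffling). In practice you would read temp_anom_w_forcing.csv; the tiny frame below just stands in for it, and the 80/20 ratio is an assumption.

```python
import pandas as pd

# Stand-in for pd.read_csv('temp_anom_w_forcing.csv').
df = pd.DataFrame({'value': range(100)})

# Time-ordered split: first 80% train, last 20% test.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

train.to_csv('train.csv', index=False)
test.to_csv('test.csv', index=False)
```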
The sybil_forecast values are way off compared to the actual values:
Were you able to create customized model parameters that forecast this dataset more accurately than the defaults?
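To put a number on "way off", one option is to score the forecast with sMAPE, one of the scorers in the model request above. A minimal sketch with dummy arrays; the real comparison would use the sybil_forecast and the held-out test values.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return 100 * np.mean(np.abs(forecast - actual) / denom)

# Dummy values standing in for the test set and sybil_forecast.
actual = [1.0, 2.0, 3.0]
forecast = [1.1, 1.9, 3.3]
print(f"sMAPE: {smape(actual, forecast):.2f}%")
```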