Closed: dwolffram closed this issue 2 months ago.
Hi @dwolffram, a couple of things:

- make sure you are running a recent `xgboost` version;
- we'd recommend `LightGBMModel` or `CatBoostModel` over `XGBModel` for quantile regression, as they usually perform better (see the second and third images).

XGBModel output
Code (xgb trained to predict the next 6 months):

```python
import matplotlib.pyplot as plt

from darts import concatenate
from darts.datasets import AirPassengersDataset
from darts.models import XGBModel

series = AirPassengersDataset().load()

target_end = 60
validation_start = 60
QUANTILES = [0.025, 0.25, 0.5, 0.75, 0.975]

xgb = XGBModel(
    lags=12,
    output_chunk_length=6,
    use_static_covariates=True,
    likelihood="quantile",
    quantiles=QUANTILES,
)
xgb.fit(series)

# backtest: non-retrained 6-step forecasts every 6 steps from index 60 onward
hfc = xgb.historical_forecasts(
    series=series,
    start=validation_start,
    forecast_horizon=6,
    stride=6,
    last_points_only=False,
    retrain=False,
    verbose=True,
    num_samples=200,
)
hfc = concatenate(hfc, axis=0)

series.plot()
hfc.plot()
plt.show()
```
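For intuition on what `historical_forecasts` iterates over here: with `start=60`, `forecast_horizon=6`, `stride=6`, and `last_points_only=False`, it emits one 6-step forecast per window, and with stride equal to the horizon the windows tile the validation range without overlap. A minimal sketch of the window arithmetic (assuming `start` is treated as an integer index position and using the 144-point AirPassengers series; this is an illustration, not darts' actual implementation):

```python
def forecast_windows(n_points: int, start: int, horizon: int, stride: int):
    """Return (first_step, last_step_exclusive) index pairs, one per forecast."""
    windows = []
    t = start
    while t + horizon <= n_points:  # only full windows fit inside the series
        windows.append((t, t + horizon))
        t += stride
    return windows

# AirPassengers has 144 monthly points; start=60, horizon=6, stride=6
wins = forecast_windows(144, 60, 6, 6)
print(len(wins))          # 14 backtest forecasts
print(wins[0], wins[-1])  # (60, 66) (138, 144)
```

Because `stride == forecast_horizon`, the concatenation with `concatenate(hfc, axis=0)` yields one continuous series covering indices 60 through 143.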
LightGBMModel output
CatBoostModel output
Thanks a lot, I was indeed still using an older version of xgboost, and updating it solved the problem! There is no trend in my dataset, so that shouldn't be an issue, but thanks for the nice examples.
Hi there,

are there any known issues with forecasting multiple horizons and multiple quantiles with XGBoost? In my use case, I'm forecasting 1-4 weeks ahead, and somehow the forecasts are identical across all horizons.

![image](https://github.com/unit8co/darts/assets/25206417/669185de-b927-449d-8418-3fccf12f9026)
I saw this in the latest release notes; could it be related? Perhaps the quantiles are computed across all horizons?
Or is there something wrong with my code? From my understanding, there should be one estimator for each quantile-horizon combination, so it's very unlikely they would be exactly the same, right?
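The "one estimator per quantile-horizon combination" setup can be illustrated outside of darts. A minimal sketch using scikit-learn as an analogy (this is not darts' actual internals; the data and hyperparameters are made up): wrapping a quantile-loss regressor in `MultiOutputRegressor` fits one estimator per horizon step, and repeating that per quantile yields `len(quantiles) * horizon` distinct fitted models, so identical forecasts across horizons would indeed be surprising.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# 4 target columns, one per forecast horizon step
Y = rng.normal(size=(200, 4)) + X[:, :1]

quantiles = [0.025, 0.5, 0.975]
horizon = Y.shape[1]

models = {}
for q in quantiles:
    # one quantile-loss booster fitted per horizon step
    m = MultiOutputRegressor(
        GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=10)
    )
    m.fit(X, Y)
    models[q] = m

n_estimators = sum(len(m.estimators_) for m in models.values())
print(n_estimators)  # len(quantiles) * horizon = 12 distinct fitted estimators
```

Since each horizon step gets its own independently fitted estimator, their predictions generally differ, which is why exactly identical forecasts across horizons usually points at a bug (e.g. an outdated dependency) rather than expected behavior.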
Thanks a lot for any input!