Open borchero opened 3 years ago
One idea to address this: make `num_samples` default to `None` in `make_evaluation_predictions`, which results in `num_samples=None` being passed to `Predictor.predict`; in that case, the model produces its default number of samples.
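A minimal sketch of that proposed fallback, using a mock `predict` function (the name `DEFAULT_NUM_SAMPLES` and the default of 1 are assumptions for illustration, not actual GluonTS API):

```python
from typing import Optional

# Hypothetical per-model default: 1 for point forecasters like N-BEATS.
DEFAULT_NUM_SAMPLES = 1

def predict(num_samples: Optional[int] = None) -> int:
    """Mock of Predictor.predict: None means 'use the model's own default'."""
    return num_samples if num_samples is not None else DEFAULT_NUM_SAMPLES

print(predict())     # no explicit value: falls back to the model default, 1
print(predict(100))  # an explicit request is still honored
```

Callers that never set `num_samples` would then automatically get the cheap single-sample behavior for non-probabilistic models, while existing explicit usage is unaffected.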
Since multiple samples are essentially useless here, I would rather adopt the strategy of the `MQCNNEstimator`, for example: since it directly outputs quantiles, samples are not required and a warning is logged that `num_samples` is unused. We have a very similar situation for N-BEATS, only that we get point forecasts.
@borchero MQCNN predictors do that because they rely on the `QuantileForecastGenerator`. Depending on what loss N-BEATS optimizes, outputting a `QuantileForecast` object may or may not be a good idea: if MAPE is being optimized, then outputting the P50 prediction makes sense. If other losses are used (MASE and sMAPE are allowed, if I recall correctly) then the prediction does not represent the median or any other quantile, strictly speaking, so using `QuantileForecast` may be misleading.
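A quick numerical check (pure NumPy, no GluonTS involved) illustrates why the loss matters for interpreting a point forecast as a quantile: among constant predictions, the absolute-error minimizer is approximately the sample median, while the squared-error minimizer is approximately the sample mean, and the two differ on skewed data:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=2000)  # skewed targets

# Evaluate each candidate constant prediction under two losses.
candidates = np.linspace(0.01, 10, 500)
mae = np.abs(y[:, None] - candidates[None, :]).mean(axis=0)
mse = ((y[:, None] - candidates[None, :]) ** 2).mean(axis=0)

best_mae = candidates[mae.argmin()]  # close to the sample median
best_mse = candidates[mse.argmin()]  # close to the sample mean
assert abs(best_mae - np.median(y)) < 0.1
assert abs(best_mse - y.mean()) < 0.1
```

So a network trained under one loss produces a point prediction whose quantile interpretation (if any) depends on that loss.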
No, of course. It might return a `PointForecast` though (which might be a `SampleForecast` with a single sample).
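That degenerate case can be sketched with plain NumPy, mocking the `SampleForecast` sample layout of shape `(num_samples, prediction_length)` rather than using the real class:

```python
import numpy as np

# The network's deterministic output for one series.
point_prediction = np.array([10.2, 11.5, 9.8])

# Wrap it as a single "sample": shape (1, prediction_length).
samples = point_prediction[np.newaxis, :]

# Any quantile of a single sample collapses to the point forecast itself.
median = np.quantile(samples, 0.5, axis=0)
assert np.allclose(median, point_prediction)
```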
Description
When using `gluonts.evaluation.backtest.make_evaluation_predictions` with a predictor obtained from `NBEATSEstimator`, forecasts of type `SampleForecast` are returned (with as many samples as specified via `num_samples`, which defaults to 100). For N-BEATS, this results in `num_samples` samples from the network. Since N-BEATS inference is non-probabilistic, however, this only duplicates work and makes predictions `num_samples` times slower than they should be.

To mitigate this issue, the predictor obtained from `NBEATSEstimator` should fix the number of samples to 1.
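The duplicated work can be illustrated with a mock deterministic network (pure NumPy; `nbeats_forward` is a stand-in for illustration, not the real model):

```python
import numpy as np

def nbeats_forward(context: np.ndarray) -> np.ndarray:
    """Mock of a deterministic point forecaster: same input, same output."""
    return context[-3:] * 1.1

context = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Current behavior: num_samples identical forward passes.
num_samples = 100
samples = np.stack([nbeats_forward(context) for _ in range(num_samples)])

# Every row is identical, so 99 of the 100 passes are wasted work;
# a single pass carries exactly the same information.
assert all(np.array_equal(samples[0], row) for row in samples)
```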