This takes the times produced by the `cv_times` function and extracts the forecasts that correspond to those times from all the forecasts that were produced. Ideally we should make sure we don't produce windows that don't exist in the first place, which would also avoid wasting compute running inference on windows full of zeros.
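The extraction step can be sketched as an inner merge between the valid times and the produced forecasts. The frames and column names below are hypothetical, just to illustrate the alignment:

```python
import pandas as pd

# Valid windows as cv_times would compute them (hypothetical data):
# series "B" is short, so it only has one valid cutoff.
times = pd.DataFrame(
    {"unique_id": ["A", "A", "B"], "cutoff": [3, 4, 3], "ds": [4, 5, 4]}
)
# All forecasts that were produced, including a window for "B" at
# cutoff=4 that the original series can't actually have.
fcsts = pd.DataFrame(
    {
        "unique_id": ["A", "A", "B", "B"],
        "cutoff": [3, 4, 3, 4],
        "ds": [4, 5, 4, 5],
        "model": [1.0, 2.0, 3.0, 4.0],
    }
)
# Keeping only the forecasts whose times exist drops the spurious
# window instead of misaligning rows in a horizontal stack.
valid = times.merge(fcsts, on=["unique_id", "cutoff", "ds"], how="inner")
```

An inner merge on the time columns keeps the row order of the valid times and silently discards forecasts for non-existent windows.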
Also fixes some failing tests by increasing the tolerance, and reduces `max_steps` in the BiTCN model to cut down the CI time.
The `cross_validation` method always produces the same number of windows for each series, regardless of its size, so we may end up with times that the original series doesn't have:

https://github.com/Nixtla/neuralforecast/blob/0c1a7607ce31aae6db8f53a583c1238e56f821e9/neuralforecast/core.py#L861-L865

This conflicts with the definition of the `cv_times` function, which only keeps the windows that a series could actually produce during cross validation, i.e. if a series has 51 samples and we use `window_size=10, step_size=10`, then it can produce at most 5 windows (where the first window has a single training sample):

https://github.com/Nixtla/utilsforecast/blob/fe357c49a3b3007256eb54bf586656dd5f3de2f6/utilsforecast/processing.py#L489

So we could end up with dataframes that had a different number of rows and perform a horizontal stack:

https://github.com/Nixtla/neuralforecast/blob/0c1a7607ce31aae6db8f53a583c1238e56f821e9/neuralforecast/core.py#L890

which would produce a lot of rows with null values and place the forecasts in the wrong positions.