Closed (akepa closed this issue 1 month ago)
Hi @akepa, you have a 0 value in your data, which is most likely the issue here.
Thank you very much for the quick response. Indeed, the data was scaled with the default MinMaxScaler, and if I replace the zero value with a positive number, the problem disappears.
Is this case supposed to work? If not, should it be specified somewhere in the documentation?
According to the documentation, NaN and inf values are replaced by 0 when using MapeLoss; as soon as the model forecasts nan, the loss becomes equal to 0. This "zeroing" might also impact the back-propagation and cause some weights in the model to become nan, leading to nan predictions (to be confirmed).
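To illustrate the mechanism, here is a minimal sketch (not the actual darts implementation, and with made-up tensor values) of how a zero in the target interacts with a MAPE-style loss when inf/NaN values are zeroed:

```python
# Minimal sketch (not the darts implementation) of a MAPE-style loss
# evaluated against a target containing a 0, e.g. after MinMaxScaler.
import torch

tgt = torch.tensor([0.0, 0.5, 1.0])                 # target with a zero
inpt = torch.tensor([0.1, 0.4, 0.9], requires_grad=True)

ratio = torch.abs((tgt - inpt) / tgt)               # division by zero -> inf at index 0
loss = torch.nan_to_num(ratio, nan=0.0, posinf=0.0, neginf=0.0).mean()
print(loss)        # the inf term is silently zeroed, dragging the loss towards 0

loss.backward()
print(inpt.grad)   # the gradient at the zero-target position is non-finite,
                   # which can then poison the model weights
```

The exact replacement darts uses may differ, but the point is the same: with a zero target, both the loss value and the gradient become unreliable.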
It is expected that MAPE will not work with a dataset containing zeros (by definition). I don't think that adding a sentence to its docstring reminding users to avoid combining the MinMaxScaler with this loss is relevant, as the zeros can have various origins/causes.
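As a possible workaround (a sketch, not an official recommendation), and assuming the zeros indeed come from the default MinMaxScaler range of (0, 1), one can scale into a range that excludes 0:

```python
from sklearn.preprocessing import MinMaxScaler
from darts.dataprocessing.transformers import Scaler

# darts' Scaler wraps any sklearn-like scaler; (0.1, 1.0) is an arbitrary range
# chosen only to keep every scaled value strictly positive
scaler = Scaler(MinMaxScaler(feature_range=(0.1, 1.0)))
series_scaled = scaler.fit_transform(series)  # `series` is an existing TimeSeries
```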
Describe the bug
When MapeLoss is used as the loss function with a TFTModel (loss_fn parameter), the training output shows val_loss and train_loss equal to 0.
Then, when we try to get predictions with that model, the predict method returns an array of nan values.
There is no issue when any other loss function (e.g. MSELoss) is used.
To Reproduce
It can be reproduced with the following code; the dataset is also attached: input_example.csv
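The original snippet is not shown here, so the following is only a hypothetical reconstruction based on the description; the column name "value", the train/validation split, and the model hyperparameters are assumptions:

```python
import pandas as pd
from darts import TimeSeries
from darts.dataprocessing.transformers import Scaler
from darts.models import TFTModel
from darts.utils.losses import MapeLoss

df = pd.read_csv("input_example.csv")
series = TimeSeries.from_dataframe(df, value_cols="value")

# Default MinMaxScaler maps the series minimum to exactly 0
series_scaled = Scaler().fit_transform(series)
train, val = series_scaled.split_after(0.8)

model = TFTModel(
    input_chunk_length=24,
    output_chunk_length=12,
    n_epochs=5,
    loss_fn=MapeLoss(),       # training logs report train_loss = val_loss = 0
    add_relative_index=True,  # no future covariates are used in this sketch
)
model.fit(train, val_series=val)

print(model.predict(n=12).values())  # returns an array of nan values
```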
Expected behavior
The prediction output should be an array of float values, not an array of nans.
System:
Additional context
I've tried to understand where the nan values are coming from. I modified MapeLoss (https://github.com/unit8co/darts/blob/master/darts/utils/losses.py#L96) to print the values of its two parameters; a sketch of that change is shown below.
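The sketch below reproduces that debugging change as a subclass rather than an in-place edit of losses.py, assuming the forward signature is forward(self, inpt, tgt) as in the linked file:

```python
from darts.utils.losses import MapeLoss

class VerboseMapeLoss(MapeLoss):
    """Same loss, but prints both tensors before computing it."""

    def forward(self, inpt, tgt):
        print("inpt:", inpt)  # model output
        print("tgt:", tgt)    # target values
        return super().forward(inpt, tgt)
```

The subclass can be passed directly to the model (loss_fn=VerboseMapeLoss()) instead of editing the library source.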
It seems that from the second call onwards, the inpt parameter contains an array of nan values.