Nixtla / neuralforecast

Scalable and user friendly neural :brain: forecasting algorithms.
https://nixtlaverse.nixtla.io/neuralforecast
Apache License 2.0

[BUG] MacBook M2 Apple Silicon: PyTorch operator is not currently implemented for the MPS device. #620

Closed: kdgutier closed this issue 1 year ago

kdgutier commented 1 year ago

The new MacBooks have a Metal Performance Shaders (MPS) backend for GPU training acceleration. PyTorch does not yet fully support the hardware's capabilities, and some operators have no MPS implementation. A workaround is to set the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable so that any operator missing from the MPS backend is executed on the CPU instead.

from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.models import NHITS

# Y_train_df is a pandas DataFrame with columns ['unique_id', 'ds', 'y'].
dataset, *_ = TimeSeriesDataset.from_df(df=Y_train_df)
model = NHITS(h=24,                         # forecast horizon
              input_size=24 * 2,            # length of the input window
              max_steps=1,
              windows_batch_size=None,
              n_freq_downsample=[12, 4, 1],
              pooling_mode='MaxPool1d')
model.fit(dataset=dataset)
y_hat = model.predict(dataset=dataset)

NotImplementedError: The operator 'aten::upsample_linear1d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
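For reference, whether an MPS device is visible to PyTorch at all can be checked before training; a minimal sketch using the public torch.backends.mps API:

import torch

# True if this PyTorch build includes MPS support.
print("MPS built:    ", torch.backends.mps.is_built())
# True if the build supports MPS and the machine exposes an MPS device
# (Apple Silicon with a recent enough macOS).
print("MPS available:", torch.backends.mps.is_available())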

Running the following line in a terminal, with the respective neuralforecast environment activated, works around the issue:

(neuralforecast) user@path % export PYTORCH_ENABLE_MPS_FALLBACK=1
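Note that export only affects the current shell session; to make the setting persistent, the same line can be added to the shell profile (for example ~/.zshrc, since recent macOS versions default to zsh).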

In a Jupyter notebook, the same effect is achieved with the following lines:

import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
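The fallback flag appears to be read when PyTorch initializes, so in a notebook these lines should run before neuralforecast (and therefore torch) is imported; a minimal sketch of the required ordering:

import os
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"  # set before importing torch

# Only import PyTorch / neuralforecast after the variable is set,
# since the fallback is registered when PyTorch loads.
import torch
from neuralforecast.models import NHITS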
kdgutier commented 1 year ago

It is important to note that the numerical stability of MPS operators is not guaranteed, and training that converges on other backends can fail to converge on MPS.

For full training runs it is recommended to use a GPU on a Linux machine, and to use the MacBook M2 for debugging.
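As a sketch of a CPU-only route on the Mac itself, neuralforecast models forward extra keyword arguments to the PyTorch Lightning Trainer, so the accelerator can be pinned explicitly; the accelerator='cpu' argument below assumes that pass-through:

from neuralforecast.models import NHITS

# Pin training to the CPU to avoid MPS operators entirely.
# Assumption: NHITS forwards extra keyword arguments (here,
# accelerator) to pytorch_lightning.Trainer.
model = NHITS(h=24,
              input_size=24 * 2,
              max_steps=1,
              accelerator='cpu')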