Closed kkckk1110 closed 3 months ago
If you have CUDA installed, then neuralforecast will automatically leverage your GPU to train the models
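Since neuralforecast trains through PyTorch, a quick sanity check (a minimal sketch, assuming `torch` is installed) is to ask torch directly whether a CUDA device is visible before training:

```python
import torch

# If this prints False, neuralforecast will fall back to CPU training
print(torch.cuda.is_available())

# Number of GPUs pytorch lightning would see by default
print(torch.cuda.device_count())
```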
Thanks! But I came across an error when training.
@kkckk1110 do you have multiple GPUs? pytorch lightning will try to use all of them by default if you do. Can you try setting `devices=[0]` in your model constructor?
This issue has been automatically closed because it has been awaiting a response for too long. When you have time to work with the maintainers to resolve this issue, please post a new comment and it will be re-opened. If the issue has been locked for editing by the time you return to it, please open a new issue and reference this one.
Description
Hello, I am using TFT to train a forecast model. I wonder how I can use a GPU to speed up my training process? It seems unclear in the documentation.
```python
from neuralforecast import NeuralForecast
from neuralforecast.models import TFT
from neuralforecast.losses.pytorch import MQLoss, DistributionLoss, GMM, PMM
from neuralforecast.tsdataset import TimeSeriesDataset
from neuralforecast.utils import AirPassengers, AirPassengersPanel, AirPassengersStatic
import pandas as pd
import pytorch_lightning as pl
import matplotlib.pyplot as plt
```
```python
nf = NeuralForecast(
    models=[
        TFT(
            h=h,
            input_size=6,
            hidden_size=20,
            loss=DistributionLoss(distribution='StudentT', level=[80, 90]),
            learning_rate=0.005,
            stat_exog_list=['airline1'],
        )
    ],
)
nf.fit(df=train, val_size=12)
Y_hat_df = nf.predict(futr_df=Y_test_df)
```
Link
No response