amazon-science / unconditional-time-series-diffusion

Official PyTorch implementation of TSDiff models presented in the NeurIPS 2023 paper "Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting"
Apache License 2.0

GPU usage about tstr experiment #4

Closed zzkkzz closed 5 months ago

zzkkzz commented 7 months ago

In /bin/tstr_experiment.py, line 182, both DeepAREstimator and TransformerEstimator use the CPU as the default device. Maybe add a parameter like `trainer=Trainer(ctx=mx.gpu())` to accelerate training and evaluation on a GPU.
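The suggestion could be sketched roughly as below. `Trainer` and `DeepAREstimator` are real GluonTS (MXNet backend) classes; the helper function, its name, and the estimator arguments (`freq`, `prediction_length`) are illustrative assumptions, not the exact code in bin/tstr_experiment.py.

```python
# Hedged sketch: build Trainer kwargs that pick a GPU context when MXNet
# reports one, falling back gracefully when MXNet is unavailable.
# make_trainer_kwargs is a hypothetical helper, not part of the repo.

def make_trainer_kwargs(epochs: int = 100) -> dict:
    """Return kwargs for gluonts.mx.Trainer, preferring GPU if available."""
    try:
        import mxnet as mx
        ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
    except ImportError:
        ctx = None  # MXNet not installed; let GluonTS use its default ctx
    kwargs = {"epochs": epochs}
    if ctx is not None:
        kwargs["ctx"] = ctx
    return kwargs

# Usage (illustrative wiring, assumed estimator arguments):
# from gluonts.mx import Trainer
# from gluonts.mx.model.deepar import DeepAREstimator
# estimator = DeepAREstimator(
#     freq="H",
#     prediction_length=24,
#     trainer=Trainer(**make_trainer_kwargs()),
# )
```

The same `trainer=` argument would apply to TransformerEstimator, since both estimators accept a GluonTS `Trainer`.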

abdulfatir commented 7 months ago

@zzkkzz Indeed, one can use a GPU here, but in the experiment setups considered, the speed improvements from a GPU are marginal, if any. This is especially true for DeepAR.