
MOMENT: A Family of Open Time-series Foundation Models
https://moment-timeseries-foundation-model.github.io/
MIT License

GPU, single-node multi-GPU, multi-node multi-GPU support for forecast generation #24

Open ryuta-yoshimatsu opened 3 weeks ago

ryuta-yoshimatsu commented 3 weeks ago

It is not clear from the documentation or the sample code whether forecast generation can be performed on a single GPU, on multiple GPUs, or on multiple GPUs across multiple nodes. If this is supported, please add a description of how to achieve it.

mononitogoswami commented 3 weeks ago

Hi Ryuta, thanks for your interest in MOMENT! Depending on the batch size, which is typically in the range of 16--64 during fine-tuning, MOMENT can be fine-tuned on a single GPU. For reference, all the tutorials were run on a single NVIDIA A6000 GPU with 48 GB of memory. That said, we are about to release a tutorial on fine-tuning MOMENT with parameter-efficient fine-tuning to reduce GPU memory usage, which will also enable multi-GPU training. Stay tuned!
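
In the meantime, single-GPU fine-tuning only needs standard PyTorch device placement. Here is a rough sketch; the model name and forecast horizon follow the tutorials, while the data, loss, and hyperparameters below are placeholders you would replace with your own:

```python
import torch
from momentfm import MOMENTPipeline  # package used in the tutorials

# Load MOMENT for forecasting (horizon of 96 is just an example).
model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "forecasting", "forecast_horizon": 96},
)
model.init()

# Move the model to a single GPU and put it in training mode.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device).train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()

# Dummy batch in place of a real dataloader:
# context is (batch_size, n_channels, context_length).
context = torch.randn(32, 1, 512, device=device)
input_mask = torch.ones(32, 512, device=device)
target = torch.randn(32, 1, 96, device=device)

# One fine-tuning step: forward, loss on the forecast, backward, update.
output = model(context, input_mask=input_mask)
loss = criterion(output.forecast, target)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```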

I'll keep this issue open until we release the tutorial!

ryuta-yoshimatsu commented 3 weeks ago

Hi! Thanks for the prompt reply; I'm looking forward to the fine-tuning tutorial.

My question was more about inference than training, though. Is there a way to assign a GPU for inference (i.e., model(context, input_mask=input_mask)), or even to distribute inference across multiple GPUs?
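
For concreteness, here is roughly what I have in mind with standard PyTorch device placement. This is just a sketch assuming the momentfm API from the tutorials; the model name, shapes, and horizon are placeholders, and the multi-GPU part is plain data sharding rather than anything MOMENT-specific:

```python
import torch
from momentfm import MOMENTPipeline  # package used in the tutorials

def load_model(device: torch.device) -> torch.nn.Module:
    model = MOMENTPipeline.from_pretrained(
        "AutonLab/MOMENT-1-large",
        model_kwargs={"task_name": "forecasting", "forecast_horizon": 96},
    )
    model.init()
    return model.to(device).eval()

# Single-GPU inference: put the model and the batch on the same device.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = load_model(device)
context = torch.randn(64, 1, 512, device=device)  # (batch, channels, seq_len)
input_mask = torch.ones(64, 512, device=device)   # (batch, seq_len)
with torch.no_grad():
    forecast = model(context, input_mask=input_mask).forecast

# Multi-GPU inference: since no gradient sync is needed, one simple option
# is a model replica per device, each scoring a shard of the batch. Shown
# sequentially here for clarity; in practice each replica would run in its
# own process (e.g. torch.multiprocessing.spawn, or one process per node
# for the multi-node case).
if torch.cuda.device_count() > 1:
    shards = context.cpu().chunk(torch.cuda.device_count())  # split on batch dim
    forecasts = []
    for i, shard in enumerate(shards):
        dev = torch.device(f"cuda:{i}")
        replica = load_model(dev)
        mask = torch.ones(shard.shape[0], shard.shape[-1], device=dev)
        with torch.no_grad():
            forecasts.append(replica(shard.to(dev), input_mask=mask).forecast.cpu())
    forecast = torch.cat(forecasts)
```

Would something along these lines work, or is there a recommended way to do this with MOMENT?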

I greatly appreciate the work you are doing!