time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0
1.2k stars 146 forks

Finetune lag-llama using cpu #66

Closed mghiani93 closed 2 months ago

mghiani93 commented 4 months ago

Is it possible to finetune lag-llama using a CPU? If so, how can I do it? When I change the device to cpu I get this error, but I don't think it's device related... Thanks for the help and for the amazing work!

ashok-arjun commented 4 months ago

Hi, what error do you get?

mghiani93 commented 4 months ago

This is the error: `model must be a LightningModule or torch._dynamo.OptimizedModule, got LagLlamaLightningModule`

Thanks
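(For context: an error like this usually comes from an `isinstance` check inside the Lightning trainer, and a common trigger is that two different copies of the base class end up imported, e.g. via mismatched `gluonts` / `pytorch-lightning` versions. The class names below are stand-ins, not lag-llama code; this is only a minimal sketch of the mechanism:)

```python
# Hypothetical illustration: two same-named base classes from different
# package copies/versions are distinct types, so a subclass of one fails
# an isinstance() check against the other.

class LightningModuleV1:
    """Stands in for the base class one package version exposes."""

class LightningModuleV2:
    """Stands in for the same-named class from a different version/copy."""

class LagLlamaLightningModule(LightningModuleV1):
    """The model subclasses one particular copy of the base class."""

model = LagLlamaLightningModule()

# A trainer validating against the *other* copy rejects the model,
# even though the class names match:
print(isinstance(model, LightningModuleV1))  # True
print(isinstance(model, LightningModuleV2))  # False
```

This is why pinning a compatible dependency set (as the maintainers did below by updating the GluonTS version in the requirements file) can make the error disappear without any change to the user's code.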

ashok-arjun commented 3 months ago

Can you please make a reproducible Colab notebook with your error and share it here?

ashok-arjun commented 3 months ago

Hi, we just updated the requirements file with a different version of GluonTS. With this update, do you still get the error?

mghiani93 commented 2 months ago

Hi, this solves the error. Thanks!
