Closed mghiani93 closed 2 months ago
Hi, what error do you get?
This is the error:

```
model must be a LightningModule or torch._dynamo.OptimizedModule, got LagLlamaLightningModule
```

Thanks
Can you please make a reproducible Colab notebook with your error and share it here?
Hi, we just updated the requirements file with a different version of GluonTS. With this update, do you still get the error?
Hi, this solves the error. Thanks!
Is it possible to finetune lag-llama using the CPU? If so, how can I do it? When I change the device to cpu I get this error, but I don't think it is device related... Thanks for the help and for the amazing work.
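For what it's worth, a minimal sketch of CPU finetuning, assuming the `LagLlamaEstimator` constructor from the lag-llama repo accepts `trainer_kwargs` that are forwarded to PyTorch Lightning's `Trainer` (the import path, checkpoint filename, and argument names here are assumptions, not verified against your installed version):

```python
# Hedged sketch only: `LagLlamaEstimator`'s import path and constructor
# arguments are assumptions based on the lag-llama repository layout.
from lag_llama.gluon.estimator import LagLlamaEstimator

estimator = LagLlamaEstimator(
    ckpt_path="lag-llama.ckpt",   # pretrained checkpoint (assumed filename)
    prediction_length=24,
    context_length=32,
    # Forwarded to lightning.Trainer: "accelerator" selects CPU vs GPU,
    # so no GPU is required for finetuning (it will just be slower).
    trainer_kwargs={"accelerator": "cpu", "max_epochs": 5},
)

# training_data should be a GluonTS-compatible dataset.
predictor = estimator.train(training_data)
```

Note that `accelerator="cpu"` is the PyTorch Lightning way to select the device; if the error persists regardless of device, it is more likely a GluonTS/Lightning version mismatch, as the requirements update above suggests.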