time-series-foundation-models / lag-llama

Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

can't run in MacBook M3 env #13

Closed loveoftheai closed 2 months ago

loveoftheai commented 4 months ago
```python
ckpt = torch.load("lag-llama.ckpt", map_location=torch.device('mps'))
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]
```

The code above works, but running the following code raises an error:

```python
estimator = LagLlamaEstimator(
    ckpt_path="lag-llama.ckpt",
    prediction_length=prediction_length,
    context_length=context_length,

    # estimator args
    input_size=estimator_args["input_size"],
    n_layer=estimator_args["n_layer"],
    n_embd_per_head=estimator_args["n_embd_per_head"],
    n_head=estimator_args["n_head"],
    scaling=estimator_args["scaling"],
    time_feat=estimator_args["time_feat"],
)

lightning_module = estimator.create_lightning_module()
transformation = estimator.create_transformation()
predictor = estimator.create_predictor(transformation, lightning_module)
```

```
RuntimeError                              Traceback (most recent call last)
Cell In[31], line 15
      1 estimator = LagLlamaEstimator(
      2     ckpt_path="lag-llama.ckpt",
      3     prediction_length=prediction_length,
   (...)
     12     time_feat=estimator_args["time_feat"],
     13 )
---> 15 lightning_module = estimator.create_lightning_module()
     16 transformation = estimator.create_transformation()
     17 predictor = estimator.create_predictor(transformation, lightning_module)

File ~/private/time-series-analysis/lag-llama/lag_llama/gluon/estimator.py:280, in LagLlamaEstimator.create_lightning_module(self, use_kv_cache)
    264 model_kwargs = {
    265     "input_size": self.input_size,
    266     "context_length": self.context_length,
   (...)
    277     "dropout": self.dropout,
    278 }
    279 if self.ckpt_path is not None:
--> 280 return LagLlamaLightningModule.load_from_checkpoint(
    281     checkpoint_path=self.ckpt_path,
    282     loss=self.loss,
    283     lr=self.lr,
...
    262     'to map your storages to the CPU.')
    263 device_count = torch.cuda.device_count()
    264 if device >= device_count:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

I modified the code as the error message suggests, but the error persists:

```python
ckpt = torch.load("lag-llama.ckpt", map_location=torch.device('cpu'))
estimator_args = ckpt["hyper_parameters"]["model_kwargs"]
```

```
RuntimeError                              Traceback (most recent call last)
Cell In[33], line 15
      1 estimator = LagLlamaEstimator(
      2     ckpt_path="lag-llama.ckpt",
      3     prediction_length=prediction_length,
   (...)
     12     time_feat=estimator_args["time_feat"],
     13 )
---> 15 lightning_module = estimator.create_lightning_module()
     16 transformation = estimator.create_transformation()
     17 predictor = estimator.create_predictor(transformation, lightning_module)

File ~/private/time-series-analysis/lag-llama/lag_llama/gluon/estimator.py:280, in LagLlamaEstimator.create_lightning_module(self, use_kv_cache)
    264 model_kwargs = {
    265     "input_size": self.input_size,
    266     "context_length": self.context_length,
   (...)
    277     "dropout": self.dropout,
    278 }
    279 if self.ckpt_path is not None:
--> 280 return LagLlamaLightningModule.load_from_checkpoint(
    281     checkpoint_path=self.ckpt_path,
    282     loss=self.loss,
    283     lr=self.lr,
...
    262     'to map your storages to the CPU.')
    263 device_count = torch.cuda.device_count()
    264 if device >= device_count:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
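For what it's worth, the traceback shows the failure is inside `LagLlamaLightningModule.load_from_checkpoint`, which re-reads the checkpoint from `ckpt_path` on its own, so fixing the `map_location` in my own `torch.load` call never reaches that internal load. When `map_location` *is* passed, `torch.load` does remap storages onto the CPU; a minimal sketch with plain PyTorch (no lag-llama involved):

```python
import io

import torch

# Round-trip a checkpoint-like dict through torch.save / torch.load,
# forcing all storages onto the CPU via map_location, as the error
# message suggests.
buf = io.BytesIO()
torch.save({"weights": torch.ones(3)}, buf)
buf.seek(0)

state = torch.load(buf, map_location=torch.device("cpu"))
print(state["weights"].device)  # cpu
```

The internal `load_from_checkpoint` call would need the same `map_location` treatment (or a device argument) for a CPU-only machine.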

khaizaki-accesstel commented 4 months ago

Facing the same problem here. I cannot load the model on a CPU-only runtime. Reproducible on both an M1 MacBook and a Google Colab CPU runtime.

ashok-arjun commented 2 months ago

Hi, you can try it now. We now support passing the device to the estimator object: https://github.com/time-series-foundation-models/lag-llama/blob/1dbe107b6933332b2fbc9a46eda411c793573492/lag_llama/gluon/estimator.py#L144

You can use `device=torch.device("cpu")` for CPU.
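For Apple Silicon, one way to pick the device before handing it to the estimator is a small fallback helper. This is a sketch, not part of lag-llama: the `pick_device` function is hypothetical, and only the `device=` keyword comes from the estimator signature linked above.

```python
import torch


def pick_device() -> torch.device:
    """Prefer Apple's MPS backend when available, else fall back to CPU.

    Hypothetical helper; not part of lag-llama itself.
    """
    mps = getattr(torch.backends, "mps", None)  # absent on older torch builds
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


device = pick_device()
# Per the estimator signature linked above:
# estimator = LagLlamaEstimator(..., device=device)
```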