KimMeen / Time-LLM

[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
https://arxiv.org/abs/2310.01728
Apache License 2.0

Question about error #101

Closed c0syfeng closed 2 weeks ago

c0syfeng commented 1 month ago

My torch version is 1.12.1 and my CUDA version is 11.3. An exception occurred:

    RuntimeError: expected scalar type BFloat16 but found Float
      File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/layers/Embed.py", line 43, in forward
        x = self.tokenConv(x).transpose(1, 2)
      File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/layers/Embed.py", line 185, in forward
        x = self.value_embedding(x)
      File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/models/TimeLLM.py", line 244, in forecast
        enc_out, n_vars = self.patch_embedding(x_enc.to(torch.bfloat16))
      File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/models/TimeLLM.py", line 200, in forward
        dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
      File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/run_main.py", line 213, in
        outputs = model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
    RuntimeError: expected scalar type BFloat16 but found Float

There's an error. Has anyone else run into this?
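A minimal sketch (not from the repo) of how this kind of error arises: the traceback points at a `Conv1d` (`tokenConv` in `layers/Embed.py`) whose weights are still float32 while the input has been cast to bfloat16 in `TimeLLM.forecast`. The layer sizes below are made up for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for tokenConv: weights are created in float32.
conv = nn.Conv1d(in_channels=7, out_channels=32, kernel_size=3, padding=1)

# Input cast to bfloat16, as TimeLLM.forecast does with x_enc.
x = torch.randn(8, 7, 96).to(torch.bfloat16)

try:
    conv(x)  # float32 weights + bfloat16 input -> dtype mismatch
except RuntimeError as e:
    print("RuntimeError:", e)

# One way out: cast the layer to the input's dtype before the forward pass.
conv = conv.to(torch.bfloat16)
out = conv(x)
print(out.dtype)  # torch.bfloat16
```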

c0syfeng commented 1 month ago

I have confirmed that x's type is BFloat16.

kwuking commented 1 month ago

Hi, it seems that the issue is due to a mismatch between the BFloat16 data and the model type. Could you please let me know which script you were running when the problem occurred?

c0syfeng commented 1 month ago

> Hi, it seems that the issue is due to a mismatch between the BFloat16 data and the model type. Could you please let me know which script you were running when the problem occurred?

Thanks for your reply! The error occurred when I ran TimeLLM_ETTh1.sh. Specifically, the arg "LLM" is set to "GPT2".

kwuking commented 2 weeks ago

> Hi, it seems that the issue is due to a mismatch between the BFloat16 data and the model type. Could you please let me know which script you were running when the problem occurred?

> Thanks for your reply! The error occurred when I ran TimeLLM_ETTh1.sh. Specifically, the arg "LLM" is set to "GPT2".

It looks like the model dtype and the input dtype for GPT-2 are inconsistent. You may need to check whether you have set the model to use BFloat16 precision.
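One way to keep float32 weights and bfloat16 activations consistent without scattering manual `.to(torch.bfloat16)` calls is autocast. This is a generic sketch, not the repo's actual setup (which may handle precision elsewhere, e.g. through its launch configuration); the layer and shapes are made up.

```python
import torch
import torch.nn as nn

# Weights stay in float32; autocast downcasts eligible ops inside the context.
conv = nn.Conv1d(in_channels=7, out_channels=32, kernel_size=3, padding=1)
x = torch.randn(8, 7, 96)  # float32 input

# Use device_type="cuda" when running on GPU; "cpu" works for a quick check.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = conv(x)

print(out.dtype)  # torch.bfloat16
```

With autocast there is no need to cast the input tensor by hand, so mismatches like the one in the traceback cannot occur for ops that autocast covers.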