KimMeen / Time-LLM

[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
https://arxiv.org/abs/2310.01728
Apache License 2.0

Question about "dec_out" #120


tokaka22 commented 1 week ago
dec_out = self.output_projection(dec_out[:, :, :, -self.patch_nums:])
dec_out = dec_out.permute(0, 2, 1).contiguous()

Why do you slice only the last self.patch_nums entries along the last dimension of dec_out? Thanks for your attention!

kwuking commented 5 days ago

Thank you for your attention to our work. Please refer to issue #59.
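
For context: in the paper's prompt-as-prefix design, the prompt embeddings are concatenated in front of the patch embeddings along the token dimension, so after the LLM backbone only the last self.patch_nums positions correspond to the time series patches. A minimal shape sketch of that slicing, with illustrative names and sizes (not the repo's exact code):

    import torch

    # Illustrative sizes; in Time-LLM the prompt prefix precedes the patches.
    B, n_vars, d_ff = 4, 7, 32
    prompt_len, patch_nums = 20, 16

    prompt_tokens = torch.randn(B * n_vars, prompt_len, d_ff)  # prompt-as-prefix embeddings
    patch_tokens = torch.randn(B * n_vars, patch_nums, d_ff)   # reprogrammed patch embeddings
    llm_out = torch.cat([prompt_tokens, patch_tokens], dim=1)  # (B*n_vars, prompt_len+patch_nums, d_ff)

    dec_out = llm_out.reshape(B, n_vars, -1, d_ff).permute(0, 1, 3, 2)  # (B, n_vars, d_ff, tokens)
    # Slicing the last patch_nums entries drops the prompt prefix and keeps
    # only the positions that belong to the time series patches.
    patch_only = dec_out[:, :, :, -patch_nums:]
    print(patch_only.shape)  # torch.Size([4, 7, 32, 16])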

tokaka22 commented 5 days ago

Thank you for your reply! My previous work focused on model robustness, so I am not very familiar with time series forecasting.

  1. Under the --features M setting, why does __len__ multiply by self.enc_in, so that each index selects a single channel's window? That is, one index may read HUFL data to predict HUFL, while another reads OT data to predict OT. (See the sketch after this list.)
    def __len__(self):
        return (len(self.data_x) - self.seq_len - self.pred_len + 1) * self.enc_in
  2. In addition, my understanding is that in M mode, for ETTh1, both the input and the output should be 7-dimensional; in MS mode, for ETTh1, the input should be 7-dimensional while the output is OT only (a single dimension). Is this correct?
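
A minimal sketch of how this per-channel indexing plays out, simplified from the repo's data loader (the class name is illustrative; the real Dataset_ETT_hour also handles label_len, scaling, and time features):

    import numpy as np

    class ChannelIndependentDataset:
        # Each index addresses one (channel, window) pair, so a 7-variate
        # series yields 7x as many univariate samples.
        def __init__(self, data, seq_len, pred_len):
            self.data_x = data  # shape (T, enc_in), e.g. ETTh1 -> (T, 7)
            self.seq_len, self.pred_len = seq_len, pred_len
            self.enc_in = data.shape[1]
            self.tot_len = len(data) - seq_len - pred_len + 1  # windows per channel

        def __len__(self):
            return self.tot_len * self.enc_in

        def __getitem__(self, index):
            feat_id = index // self.tot_len   # which channel (e.g. HUFL or OT)
            s_begin = index % self.tot_len    # window start within that channel
            s_end = s_begin + self.seq_len
            seq_x = self.data_x[s_begin:s_end, feat_id:feat_id + 1]               # univariate input
            seq_y = self.data_x[s_end:s_end + self.pred_len, feat_id:feat_id + 1] # univariate target
            return seq_x, seq_y

    data = np.random.randn(100, 7)
    ds = ChannelIndependentDataset(data, seq_len=24, pred_len=12)
    print(len(ds))  # (100 - 24 - 12 + 1) * 7 = 455

So an index that lands in channel 0 reads HUFL to predict HUFL, and one that lands in the last channel reads OT to predict OT.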
kwuking commented 5 days ago

Your understanding is correct. However, to achieve efficient model transferability, Time-LLM adopts a channel-independence strategy, treating each channel of a multivariate time series as a separate univariate series. This also answers your first question: because every channel is modeled independently, __len__ is multiplied by enc_in so that each channel contributes its own (input, target) windows. If you are interested in multivariate time series, you can check out my work "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting".
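
To make the channel-independence strategy concrete, a hypothetical sketch of the usual trick: fold the channel dimension into the batch so the backbone only ever sees univariate series (the helper name is made up for illustration):

    import torch

    def channel_independent_forward(model, x):
        # x: (batch, seq_len, channels), e.g. ETTh1 -> channels = 7.
        B, T, C = x.shape
        x_uni = x.permute(0, 2, 1).reshape(B * C, T, 1)  # fold channels into batch
        y_uni = model(x_uni)                             # (B*C, pred_len, 1)
        pred_len = y_uni.shape[1]
        return y_uni.reshape(B, C, pred_len).permute(0, 2, 1)  # (B, pred_len, C)

    # Example with a trivial stand-in "model" that echoes the last 12 steps:
    x = torch.randn(2, 96, 7)
    dummy = lambda u: u[:, -12:, :]
    print(channel_independent_forward(dummy, x).shape)  # torch.Size([2, 12, 7])

Under --features M all 7 predicted channels are kept; under MS only the OT column of the output is evaluated.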