thuml / iTransformer

Official implementation for "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting" (ICLR 2024 Spotlight), https://openreview.net/forum?id=JePfAI8fah
https://arxiv.org/abs/2310.06625
MIT License

Encoder-Decoder Architecture Issues #88

Closed · YEKAI-2022 closed this issue 2 weeks ago

YEKAI-2022 commented 2 weeks ago

Hello, I am very interested in your work; it is a great contribution. I would like to ask why this paper uses only an encoder architecture, yet still outperforms encoder-decoder architectures. This is what confuses me. I look forward to your reply.

YEKAI-2022 commented 2 weeks ago

In theory, shouldn't the features extracted by an encoder-decoder architecture be better? I look forward to your reply. Thanks again.

WenWeiTHU commented 2 weeks ago

Hi, I think it depends on the problem. Maybe this paper would be helpful: https://arxiv.org/abs/2204.05832
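For readers who want to see the encoder-only design the question is about, here is a minimal sketch. It is illustrative only, not the repository's actual code: the class name, parameter names, and dimensions are made up. It shows the inverted idea from the paper: each variate's whole lookback series is embedded as one token, a plain Transformer encoder attends across variate tokens, and a linear head projects each token to the forecast horizon, so no decoder is needed.

```python
# Minimal sketch (hypothetical names; not the repo's implementation) of an
# encoder-only, inverted Transformer forecaster in the spirit of iTransformer.
import torch
import torch.nn as nn

class TinyInvertedEncoder(nn.Module):
    def __init__(self, seq_len: int, pred_len: int, d_model: int = 128,
                 n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Inverted embedding: the length-seq_len series of each variate
        # becomes a single token of size d_model.
        self.embed = nn.Linear(seq_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # No decoder: a linear head maps each variate token directly
        # to its pred_len-step forecast.
        self.head = nn.Linear(d_model, pred_len)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, seq_len, n_variates] -> tokens: [batch, n_variates, d_model]
        tokens = self.embed(x.transpose(1, 2))
        tokens = self.encoder(tokens)   # attention mixes variate tokens
        out = self.head(tokens)         # [batch, n_variates, pred_len]
        return out.transpose(1, 2)      # [batch, pred_len, n_variates]

# Usage: forecast 24 future steps from a 96-step lookback of 7 variates.
model = TinyInvertedEncoder(seq_len=96, pred_len=24)
y = model(torch.randn(32, 96, 7))
print(y.shape)  # torch.Size([32, 24, 7])
```

Under this framing, the decoder's usual job (autoregressively generating the target sequence) is replaced by a one-shot linear projection per variate token, which is one reason an encoder-only model can be sufficient for this forecasting setup.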

YEKAI-2022 commented 2 weeks ago

Okay, thank you very much
