Thinklab-SJTU / Crossformer

Official implementation of our ICLR 2023 paper "Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting"
Apache License 2.0

Question about SegMerging #11

Closed joeySJTU closed 1 year ago

joeySJTU commented 1 year ago

Code in cross_encoder.py:

seg_to_merge = []
for i in range(self.win_size):
    seg_to_merge.append(x[:, :, i::self.win_size, :])
x = torch.cat(seg_to_merge, -1)  # [B, ts_d, seg_num/win_size, win_size*d_model]

x = self.norm(x)
x = self.linear_trans(x)

I'd like to understand why x is passed through seg_to_merge, which rearranges the seg_num dimension into an ordering like [0, 2, 4, 1, 3, 5].

YunhaoZhang-Mars commented 1 year ago

In your example, seg_to_merge holds two slices, [0, 2, 4] and [1, 3, 5]; after x = torch.cat(seg_to_merge, -1), the two are fused into the form [01, 23, 45], i.e. each output position stacks two adjacent segments along the feature dimension; the linear layer then completes the merge.
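A minimal sketch (standalone toy shapes, not the repo's code) that traces where each segment ends up for win_size=2 and seg_num=6, confirming that the concatenation pairs adjacent segments:

```python
import torch

B, ts_d, seg_num, d_model = 1, 1, 6, 1
win_size = 2

# Label each segment with its index so we can track it through the merge.
x = torch.arange(seg_num, dtype=torch.float).reshape(B, ts_d, seg_num, d_model)

# Same slicing pattern as SegMerging: stride win_size over the seg_num dimension.
seg_to_merge = [x[:, :, i::win_size, :] for i in range(win_size)]
# seg_to_merge[0] holds segments [0, 2, 4]; seg_to_merge[1] holds [1, 3, 5].

merged = torch.cat(seg_to_merge, -1)  # [B, ts_d, seg_num/win_size, win_size*d_model]
print(merged.squeeze())
# tensor([[0., 1.],
#         [2., 3.],
#         [4., 5.]])
# Each row pairs two adjacent segments (0&1, 2&3, 4&5); the linear layer then
# projects win_size*d_model back to d_model to finish the merge.
```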

joeySJTU commented 1 year ago

Thanks!