gzerveas / mvts_transformer

Multivariate Time Series Transformer, public version
MIT License

Discussion about "Extracted representations" mentioned in the "FUTURE WORK" #39

Closed: Guanyunlph closed this issue 10 months ago

Guanyunlph commented 1 year ago

I am very interested in the "Extracted representations" mentioned in the "FUTURE WORK" section, but I still have a few points of confusion and would like to ask for your opinion.

  1. You mention that the aggregated representation of the Transformer can be used to evaluate the similarity of time series. Does this mean directly aggregating the representation Z of shape (w, d), where w is the number of time steps and d is the Transformer's output feature dimension, into a single (1, d) feature vector? If so, does each dimension then represent the similarity between this series and all other series? But the input consists of m time series (an m-dimensional input), and we only get d values, so it seems this cannot represent the similarity between the original series. Should the (w, d) representation Z first be passed through a fully connected layer so that its feature dimension matches the original input dimension m, and only then be aggregated? Or should we instead aggregate the pre-trained model's output x (which has m dimensions)? Is the similarity between the original m-dimensional time series only evaluated when d = m?

     Secondly, regarding the "clustering and visualization" part, I am not sure how to proceed. Should we visualize or cluster the aggregated (1, d) or (1, m) features, or operate directly on (w, d)? Do you have any specific suggestions, or any relevant works that could be referenced for clustering and visualization?
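To make my first question concrete, here is a minimal sketch of the aggregation I have in mind (the shapes, random data, and mean pooling are my own assumptions, not taken from the repository):

```python
import numpy as np

rng = np.random.default_rng(0)

w, d, n_series = 50, 64, 3              # w time steps, d model dim, 3 example series
Z = rng.normal(size=(n_series, w, d))   # per-timestep Transformer outputs

# Aggregate each (w, d) representation into a single (1, d) vector by
# mean pooling over the time axis (one simple choice; max pooling or a
# learned pooling would also be possible).
z = Z.mean(axis=1)                      # shape (n_series, d)

# Pairwise cosine similarity between the aggregated series-level vectors.
# Note the similarity is defined between whole series, not between the
# d feature dimensions themselves.
z_unit = z / np.linalg.norm(z, axis=1, keepdims=True)
sim = z_unit @ z_unit.T                 # (n_series, n_series) similarity matrix

print(sim.shape)
```

Under this reading, similarity comes from comparing the (1, d) vectors of different series, so d would not need to equal m. Is that the intended interpretation, and would clustering/visualization (e.g. k-means or t-SNE) then also operate on these (1, d) vectors?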

  2. The representation of each time step is processed independently, so we could assign greater weight to certain time steps. However, as we all know, the Transformer already learns the relationships between time points naturally. If we manually apply weights to certain time steps, does that amount to adding a prior constraint? And where should such a constraint be added: in the model, or in the data processing? Do you have any specific suggestions?
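For the second question, here is one place I imagine such a prior could be injected without touching the model at all: a weighted aggregation at the pooling stage instead of a uniform mean (the weighting scheme below is a hypothetical example of my own, not something from the paper):

```python
import numpy as np

w, d = 50, 64
Z = np.random.default_rng(1).normal(size=(w, d))  # per-timestep representations

# Hypothetical prior: up-weight the last 10 time steps (e.g. if recent
# behaviour is assumed to matter more). Normalised so the weights sum to 1.
weights = np.ones(w)
weights[-10:] = 3.0
weights /= weights.sum()

# Weighted aggregation generalises the uniform mean (mean pooling is the
# special case weights = 1/w). The prior enters only at pooling time,
# leaving the Transformer's learned attention untouched.
z = weights @ Z                       # shape (d,)

print(z.shape)
```

Would you consider this pooling-time weighting a reasonable way to add the prior, as opposed to modifying the model (e.g. biasing attention) or the data preprocessing?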

Thank you very much! I know my questions may be a bit long and complex, but I will try to learn how to ask better questions. Thanks again.