qianlima-lab / time-series-ptms

This is the official implementation for the paper "A Survey on Time-Series Pre-Trained Models" (TKDE-24).
https://arxiv.org/pdf/2305.10716v2

Some misunderstandings about SIGKDD'22 STEP in the paper #2

Closed zezhishao closed 1 year ago

zezhishao commented 1 year ago

Thanks for your wonderful work; it is very valuable to the field of time-series pre-trained models. I am honored that our SIGKDD'22 work STEP is included in the survey ([99]).

However, I found two misunderstandings in the paper's description of STEP. First, the pre-training model in STEP is based on the Transformer architecture, not a CNN. Second, the pre-trained model in STEP is trained with an unsupervised masked auto-encoder strategy, not a supervised forecasting task.
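
For concreteness, here is a minimal sketch of what Transformer-based masked auto-encoder pre-training on time series looks like: the series is split into patches, a fraction of the patch tokens is masked, and the encoder is trained to reconstruct the masked patches. All module and parameter names here are hypothetical and it assumes PyTorch; this is not STEP's actual code, only an illustration of the distinction from a supervised forecasting objective.

```python
# Minimal sketch of unsupervised masked auto-encoder pre-training for
# time series with a Transformer encoder (illustrative only, not STEP's code).
import torch
import torch.nn as nn

class MaskedTSAutoencoder(nn.Module):
    def __init__(self, patch_len=12, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)            # patch -> token
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)             # token -> reconstructed patch

    def forward(self, x, mask_ratio=0.75):
        # x: (batch, n_patches, patch_len), non-overlapping patches of a series
        tokens = self.embed(x)
        mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(tokens))
        # reconstruction loss is computed only on the masked patches
        return ((recon - x) ** 2)[mask].mean()

model = MaskedTSAutoencoder()
x = torch.randn(8, 24, 12)   # toy batch: 8 unlabeled series, 24 patches of length 12
loss = model(x)              # pre-train this way, then fine-tune the encoder downstream
loss.backward()
```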

It would be appreciated if these misunderstandings could be corrected in the next version of the paper. Thanks again for your wonderful work, which will greatly facilitate the development of time-series pre-trained models!

ZLiu21 commented 1 year ago

Thanks for pointing out these issues. We will fix these errors in the next version.