Closed zezhishao closed 1 year ago
Thanks for your wonderful work — it is very valuable to the field of time series pre-training models. I noticed that our SIGKDD'22 work, STEP, is honored to be included in the survey ([99]).
However, I found two inaccuracies in the paper's description of STEP. First, the pre-training model in STEP is based on the Transformer architecture, not a CNN. Second, the model is pre-trained with an unsupervised masked auto-encoding strategy, not a supervised forecasting task.
It would be appreciated if these could be corrected in the next edition of the paper. Thanks again for your wonderful work, which will greatly facilitate the development of time series pre-training models!
Thanks for pointing out this issue. We will fix this error in the next version.