Closed wyhzunzun123123 closed 4 months ago
Thank you for bringing this to our attention.
We are aware of the issue. This historical bug does not significantly affect the conclusions of the paper: the error it introduces is small relative to the performance differences between the methods being compared.
We will address this issue in the final version of the paper and in future work. However, this will take some time, and we appreciate your understanding.
Best regards, and thank you for your suggestion.
Dear authors:
I am impressed by this amazing work! Perhaps, in the experimental section of your final version, you could state that drop_last is set to false during testing and cite "TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods". That would make your work even more complete.
Thank you. We will cite it and update the final paper.
We have fixed this bug.
Dear authors:
I am impressed by this amazing work! However, while reading another ICLR 24' paper on time series forecasting (FITS: Modeling Time Series with Parameters), I noticed that its authors reported this bug in their repo (https://github.com/vewoxic/fits). The PVLDB 2024 benchmark paper "TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods" also pointed out that the drop_last operation should be disabled during testing, as it otherwise causes unfair comparisons.
In short, this bug, which originated in Informer (AAAI 21'), drops some test samples in the last test batch, resulting in inaccurately measured results. (Please refer to FITS' repo for a detailed description.) Could you please confirm whether your results are affected by this piece of code? If so, could you please fix it and correct the results in the tables of your paper?
Thanks!
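For readers following along, here is a minimal, framework-free sketch of the effect being discussed (the function names below are illustrative, not taken from any of the repos): with `drop_last=True`, any samples in an incomplete final batch are silently skipped, so the test metric is computed over fewer samples than the test set contains.

```python
# Illustrative sketch (not code from Informer/FITS/TFB): simulate how a
# data loader's drop_last flag changes how many test samples get evaluated.

def batches(n_samples, batch_size, drop_last):
    """Yield index batches; optionally drop the incomplete last batch."""
    full = n_samples // batch_size
    for b in range(full):
        yield list(range(b * batch_size, (b + 1) * batch_size))
    remainder = n_samples % batch_size
    if remainder and not drop_last:
        # The partial final batch is kept only when drop_last is False.
        yield list(range(full * batch_size, n_samples))

def evaluated_count(n_samples, batch_size, drop_last):
    """Total number of samples that actually contribute to the metric."""
    return sum(len(b) for b in batches(n_samples, batch_size, drop_last))

if __name__ == "__main__":
    # Example: 1000 test windows, batch size 32.
    print(evaluated_count(1000, 32, drop_last=True))   # 992 (8 windows dropped)
    print(evaluated_count(1000, 32, drop_last=False))  # 1000 (all windows kept)
```

The fix amounts to setting the test loader's drop_last flag to False, so every test window contributes to the reported metric.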