[Open] HowardZJU opened this issue 2 weeks ago
Hi there 👋,
Thank you so much for your attention to PyPOTS! You can follow me on GitHub to receive the latest news about PyPOTS. If you find our research helpful to your work, please star ⭐️ this repository. Your star is a form of recognition that helps more people notice PyPOTS and grows the PyPOTS community. It matters and is definitely a kind of contribution to the community.
I have received your message and will respond ASAP. Thank you for your patience! 😃
Best, Wenjie
Hi @HowardZJU, thanks for raising the discussion here.
You've asked good questions and provided helpful insights here. We appreciate that. If you also work with POTS (partially-observed time series) data, we sincerely invite you to join our community and build PyPOTS better together ;-)
Dear Authors,
Thank you for your invaluable contributions to this repository. I am currently exploring the field of time series imputation, and I have a few questions about the evaluation protocols that I believe merit further discussion.
Evaluation Comparisons: The evaluation process raises some questions about fairness and consistency across different methods. Consider, for instance, comparing a Transformer with a mean imputer. While the Transformer model is assessed on the test data, the approach for evaluating a mean imputer remains unclear. Should the mean imputer also have access to the test data, given that the non-missing values in the test set would also be available at model serving time? There are two options:
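To make the contrast concrete, here is a minimal NumPy sketch of the two protocols I have in mind: a mean imputer whose column means are fitted on the training split only, versus one whose means also incorporate the observed (non-missing) test values. All data, masking, and function names below are synthetic illustrations for discussion, not the PyPOTS API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy POTS setup: train/test matrices with missing entries marked as NaN.
train = rng.normal(size=(100, 5))
test_truth = rng.normal(size=(40, 5))
train[rng.random(train.shape) < 0.2] = np.nan

# Artificially mask 20% of test entries; these held-out values are scored.
test_mask = rng.random(test_truth.shape) < 0.2
test = test_truth.copy()
test[test_mask] = np.nan

# Option A: column means fitted on the training split only.
col_mean_train = np.nanmean(train, axis=0)

# Option B: column means also use the observed test values,
# which would be available at serving time.
col_mean_all = np.nanmean(np.vstack([train, test]), axis=0)

def mae(col_mean):
    """MAE over the artificially masked test entries."""
    imputed = np.where(np.isnan(test), col_mean, test)
    return np.abs(imputed[test_mask] - test_truth[test_mask]).mean()

print(f"Option A (train-only mean):  MAE = {mae(col_mean_train):.4f}")
print(f"Option B (train+test mean):  MAE = {mae(col_mean_all):.4f}")
```

The two options generally yield different scores, which is exactly the consistency concern: a deep model evaluated on masked test data implicitly conditions on the observed test values at inference, while a naive baseline fitted only on the training split does not.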
In view of these points, I suggest the following:
I look forward to your insights and any suggestions you might have on aligning the evaluation framework with real-world imputation tasks.
Best regards,
Hao