Conearth closed this issue 2 years ago
Thanks for your interest.
We have explicitly clarified this in the paper (Section 4, Implementation details).
Note that the adjustment operation is widely used in previous papers, so we adopt it for a fair comparison with other methods.
Thank you very much for your reply. In your experiments, do the reconstruction-based methods compared in the paper use the same adjustment strategy?
Sure, all the compared methods adopt this adjustment strategy for evaluation.
That's good. Thx.
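For readers unfamiliar with the detection adjustment (also called point adjustment) convention discussed above, here is a minimal sketch of the idea: if any time point inside a ground-truth anomaly segment is flagged, the whole segment is counted as detected. The function name, shapes, and example arrays below are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

def point_adjust(pred, gt):
    """Detection adjustment (point adjustment) sketch, not the repo's code:
    if any point inside a ground-truth anomaly segment is predicted as
    anomalous, mark the entire segment as detected before scoring."""
    pred = pred.astype(bool).copy()
    gt = gt.astype(bool)
    n = len(gt)
    i = 0
    while i < n:
        if gt[i]:
            # Find the end of this contiguous ground-truth anomaly segment.
            j = i
            while j < n and gt[j]:
                j += 1
            # If the segment was hit at least once, credit all of it.
            if pred[i:j].any():
                pred[i:j] = True
            i = j
        else:
            i += 1
    return pred.astype(int)

gt   = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0])
pred = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0])
print(point_adjust(pred, gt).tolist())  # → [0, 1, 1, 1, 0, 0, 0, 0, 0]
```

A single hit inside the first segment promotes the whole segment to "detected", while the second, entirely missed segment stays at 0; this is why point-wise precision/recall can collapse when the adjustment is disabled.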
Hi, this is amazing work. I've come across a small problem. On the MSL dataset, the model performs well; the output looks like:
======================TEST MODE======================
Threshold : 0.0017330783803481142
pred: (73700,) gt: (73700,)
pred: (73700,) gt: (73700,)
Accuracy : 0.9853, Precision : 0.9161, Recall : 0.9473, F-score : 0.9314
But after I commented out the "detection adjustment" code, the scores dropped sharply:
======================TEST MODE======================
Threshold : 0.0017330783803481142
pred: (73700,) gt: (73700,)
pred: (73700,) gt: (73700,)
Accuracy : 0.8866, Precision : 0.1120, Recall : 0.0109, F-score : 0.0199
And I'm sure that only the "detection adjustment" code was commented out.
Could you help me figure out this problem? Thanks.