Closed: jed-ho closed this issue 1 month ago
Hi @jed-ho - thank you for using Orion!

After running benchmark.py, you can use get_f1_scores in results.py to get the overview F1 scores, and write_results to obtain the leaderboard.xlsx.
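In case it helps, here is a minimal sketch of how those two helpers might be wired together; the import path, argument order, and file names are assumptions for illustration, so check results.py for the actual signatures:

```python
# Hypothetical usage sketch; the signatures below are assumptions,
# not the confirmed Orion API -- see results.py for the real ones.
import pandas as pd

from results import get_f1_scores, write_results  # module path is an assumption

# Load the raw scores that benchmark.py produced (file name is an assumption).
results = pd.read_csv('benchmark_results.csv')

# Summarize the raw scores into the overview F1 table.
f1_scores = get_f1_scores(results)
print(f1_scores)

# Export the summary workbook, analogous to the published leaderboard.xlsx.
write_results(results, 'leaderboard.xlsx')
```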
In Orion, we publish the benchmark with every release to help navigate the changes that happen due to external factors such as dependency changes and package updates. Your results should be consistent with the latest published benchmark results.
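As a quick sanity check, one way to diff a local run against the published file; the local file name, and the assumption that both files share the same row order, are mine rather than from this thread:

```python
# Hypothetical consistency check; file paths and row alignment are assumptions.
import pandas as pd

published = pd.read_csv('Orion/benchmark/results/0.6.0.csv')  # path mentioned in this thread
mine = pd.read_csv('benchmark_results.csv')                   # hypothetical local output

# Compare cell by cell over the shared columns; assumes both files
# list the same pipelines/datasets in the same row order.
shared = published.columns.intersection(mine.columns)
diff = published[shared].compare(mine[shared])
print('Consistent with the published benchmark.' if diff.empty else diff)
```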
Hope this answers your questions!
Description
I am attempting to reproduce the results of the research paper AER: Auto-Encoder with Regression for Time Series Anomaly Detection. I ran benchmark.py and obtained the results. However, the results show a significant discrepancy when compared to the F1 scores reported in the research paper. Could you please help investigate this discrepancy? Any guidance on whether I might be missing a step or misinterpreting the results would be greatly appreciated.

What I Did
make install-develop

I then ran benchmark.py and compared the output with Orion/benchmark/results/0.6.0.csv; the values in my results do match those in that file. I also compared the output with leaderboard.xlsx; there are minor differences between my results and the leaderboard.xlsx.
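For reference, the steps above amount to something like the following sketch; the benchmark entry point and its defaults are assumptions based on the repository layout rather than a confirmed API, so consult benchmark.py for the exact invocation:

```python
# Hypothetical reproduction sketch; the entry point and its arguments
# are assumptions, not the confirmed interface of benchmark.py.
from orion.benchmark import benchmark

# Run the benchmark suite (this can take a long time) and collect
# the per-signal scores into a DataFrame.
results = benchmark()

# Persist the scores for comparison against the published
# Orion/benchmark/results/0.6.0.csv and leaderboard.xlsx.
results.to_csv('benchmark_results.csv', index=False)
```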