Closed MansMeg closed 7 months ago
The only thing that concerns me is the drop in speaker identification in 2021+. The rest is about (or exactly) the same as 0.12. For the next release, I will implement more precise date handling, so the graphs should be a closer representation of reality.
> large drops in accuracy
What kind of threshold would you use for a unit test? Here is a drop of around 3% in one year.
Yes. That's what I'm concerned about. Great!
I think we might want to capture a drop of 1.5% in any of the years, and maybe in total? We could also check for an absolute difference in the ratio greater than 0.03? Does that make sense?
Hi!
It seems like there is a drop in both the accuracy of speaker identification and the number of MPs in the last release?
We should fix this, but also create a unit test that fails on this type of large drop in accuracy.