Why not use the variance of p? Cheating algorithms tend to provide a small range of predictions, so the variance would be small, too.
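For concreteness, a minimal sketch of that measure, assuming `p` is a NumPy array of one user's predicted retrievabilities (the function name and the toy data are illustrative, not the benchmark's actual API):

```python
import numpy as np

def prediction_spread(p: np.ndarray) -> float:
    """Standard deviation of the predicted retrievabilities for one user.

    An algorithm that always predicts values close to the user's average
    retention will have a spread near 0.
    """
    return float(np.std(p))

# Toy comparison: spread-out predictions vs. a near-constant "cheater".
honest = np.random.beta(5, 2, size=1000)                        # varied predictions
cheater = np.clip(0.9 + np.random.normal(0, 0.01, 1000), 0, 1)  # nearly constant
print(prediction_spread(honest), prediction_spread(cheater))
```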
I think my measure is more intuitive. You can add both if you want to; you'll have to re-run the benchmark anyway.
I found a typical cheating case:
When each algorithm is measured by itself, FSRS-4.5 is worse than DASH[ACT-R]. But when they are measured by each other, FSRS-4.5 is better than DASH[ACT-R]. Either way, the log loss is uncheatable.
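For reference, the log loss being discussed here is the standard per-review binary cross-entropy; a minimal self-contained sketch (variable names are illustrative):

```python
import numpy as np

def log_loss(y: np.ndarray, p: np.ndarray, eps: float = 1e-7) -> float:
    """Mean binary cross-entropy between actual recall (0/1) and predicted R."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

Because it is averaged over individual reviews without any binning, squeezing all predictions into a narrow range doesn't help by itself; the predictions only score well if they are actually close to the true probabilities.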
Here are their calibration graphs:
FSRS-4.5: [calibration graph]
DASH[ACT-R]: [calibration graph]
Yes, but we have to do this for all 20,000 collections and compare averages. We can't decide whether an algorithm is cheating or not based on one collection. I would suggest implementing both my original suggestion and your stdev of p.
The main problem is that we don't know the real distribution of retrievability. Both your idea and mine assume that the real distribution is flatter than the distribution predicted by a cheating algorithm.
True. Well, do you think DASH[ACT-R] should be included in the table now? EDIT: I think it's reasonable to assume that the true distribution is a beta distribution.
The thing is, a beta distribution can look like what we're seeing with ACT-R, depending on alpha and beta. We could also add UM (as a third metric), with FSRS-4.5 as the comparison, but that would be difficult to interpret.
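To illustrate that point (a hypothetical check, not part of the benchmark): depending on alpha and beta, a beta distribution can be nearly uniform or concentrated in a narrow band, so a peaked distribution of predicted R is not, by itself, proof of cheating.

```python
import numpy as np
from scipy import stats

# Fraction of probability mass inside a single 0.05-wide bin around the mode,
# for a few alpha/beta choices.
for a, b in [(1, 1), (2, 5), (50, 10)]:
    dist = stats.beta(a, b)
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else 0.5
    mass = dist.cdf(min(mode + 0.025, 1)) - dist.cdf(max(mode - 0.025, 0))
    print(f"alpha={a}, beta={b}: {mass:.1%} of mass in one 0.05-wide bin")
```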
Btw, don't forget about https://github.com/open-spaced-repetition/fsrs-benchmark/pull/55
> Well, do you think DASH[ACT-R] should be included in the table now?
If we still rank models by RMSE(bins), I'd prefer not to include it. If we rank models by log loss, I will include it.
Hmmm. Ok, let's sort by log-loss then.
I have an idea for how to measure the degree of "cheatiness" of an algorithm:
1) Do the same procedure that you do for plotting the calibration graph.
2) Record the number of values in the densest bin, aka the highest bar. Example:
3) Divide it by the total number of reviews. For a cheating algorithm, this will be 100%, since there is only one bin, so 100% of reviews fall into that bin.
4) Do this for every user for a given algorithm.
5) Calculate the (unweighted) average. A sketch is below.
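A minimal sketch of steps 2–5, assuming 20 equal-width bins like the calibration graph and a dict mapping each user to a NumPy array of their predicted R values (both the bin count and the data layout are assumptions, not the benchmark's actual format):

```python
import numpy as np

def densest_bin_fraction(p: np.ndarray, n_bins: int = 20) -> float:
    """Fraction of one user's reviews that fall into the single densest bin of predicted R."""
    counts, _ = np.histogram(p, bins=n_bins, range=(0.0, 1.0))
    return counts.max() / len(p)

def cheatiness(predictions_by_user: dict[int, np.ndarray]) -> float:
    """Unweighted average of the densest-bin fraction across users.

    1.0 means every user's predictions land in a single bin (a constant-output
    "cheater"); values near 1/n_bins mean the predictions are spread out.
    """
    return float(np.mean([densest_bin_fraction(p) for p in predictions_by_user.values()]))
```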
From a theoretical point of view, the issue is that the cutoff will be arbitrary. If the average is 90%, meaning that on average 90% of predicted R values fall within the same bin, is it cheating? What about 80%? Or 50%?

From a practical point of view, this will require re-running every single algorithm, since this information cannot be obtained from .json result files right now. At the very least, you will have to re-run FSRS-4.5, ACT-R and DASH[ACT-R], since we are sure that FSRS-4.5 isn't cheating, and the ACT-R algorithms are the main suspects. But of course, to get a better idea of what values of this metric are good and what values are bad, you need to re-run the entire benchmark.
Also, this is not intended to be included in the readme. It's for our internal testing.