I think that DR above 0.96 doesn't make much sense because of diminishing returns.
In this image, note the drastic increase in slope above DR = 0.96. (There are occasional high slopes before DR = 0.96 as well, but above DR = 0.96 most of the values are excessively high.)
Data: https://github.com/open-spaced-repetition/fsrs4anki/issues/686#issuecomment-2336648397
The data is based on default parameters.
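To make the "slope" point concrete, here's a minimal sketch of the kind of calculation I mean: take the workload/knowledge values at each DR, compute the finite-difference slope between consecutive DR steps, and look at where it blows up. The numbers below are made-up placeholders roughly shaped like the curve, not the data from the linked comment.

```python
import numpy as np

# Placeholder values only, NOT the data from the linked comment;
# they just mimic the general shape of the simulated curve.
dr = np.linspace(0.80, 0.98, 19)           # candidate desired-retention values
workload_per_knowledge = np.array([
    1.00, 1.02, 1.05, 1.08, 1.12, 1.17, 1.23, 1.31, 1.40, 1.51,
    1.65, 1.83, 2.06, 2.37, 2.80, 3.45, 4.50, 6.30, 9.80,
])

# Finite-difference slope between consecutive DR points:
# how much extra cost each additional 0.01 of retention buys you.
slope = np.diff(workload_per_knowledge) / np.diff(dr)
for r, s in zip(dr[1:], slope):
    print(f"DR = {r:.2f}: slope ≈ {s:.0f}")
```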
Well, it is OK if you think that some people can have such a high value of MRR, but I am not really convinced. Nonetheless, let's forget about this for now; we can always decrease the value later if someone complains.
We could change it slightly, to 0.97 instead of 0.98.
@L-M-Sherlock do you think we should revert this (or set R_MAX to 0.96 or 0.97)? It seems like the idea that the previous values were underestimates was based on the wrong data.

Btw, here's the most recent graph, but with workload divided by retention. This is with loss_aversion = 1.
https://github.com/open-spaced-repetition/fsrs4anki/issues/686#issuecomment-2335482240

It appears that our previous estimates of optimal retention were underestimates, so I think it makes sense to expand the range. Note that here it says "workload", but it's not just workload; it's workload/knowledge.
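For reference, this is roughly what "workload divided by knowledge" means in that graph, as I understand it: the simulated cost at each candidate DR, divided by retention (used as a proxy for knowledge), with the optimum being the DR that minimizes that ratio. The helper below is a hypothetical sketch, not the simulator's actual API, and the usage numbers are made up.

```python
from typing import Sequence

def optimal_dr(dr_values: Sequence[float], workload: Sequence[float]) -> float:
    """Hypothetical helper: pick the DR that minimizes workload per unit of knowledge.

    `workload` is the simulated cost (e.g. reviews or seconds per day) at each
    candidate DR; retention itself is used as a rough proxy for knowledge retained.
    """
    ratio = [w / r for w, r in zip(workload, dr_values)]
    return dr_values[min(range(len(ratio)), key=ratio.__getitem__)]

# Usage with made-up numbers (not simulation output): the ratio dips and then
# rises again as DR approaches 1, and the argmin is the "optimal retention".
print(optimal_dr([0.85, 0.90, 0.95, 0.98], [30.0, 29.0, 33.0, 55.0]))  # -> 0.9
```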