I've noticed that my cards very quickly reach the maximum Difficulty score of 10. This might be partly due to my specific parameters, but I suspect others experience a similar pattern, perhaps to a lesser extent.
When I mark a card as Again or Hard, it increases the Difficulty score significantly. However, when I mark a card as Good, it barely changes. While this is likely intentional, after getting a card wrong about half a dozen times, its Difficulty score becomes stuck at 10. Even if I subsequently get the card right 100 times, getting it wrong just once pushes it back to the maximum Difficulty.
The issue with this is that sorting cards by Difficulty Ascending becomes less useful because many cards are tied at $D = 10$. As a result, I lose the granularity that this sorting method could provide. I think this could be addressed by using an asymptote instead of a hard cap at 10, while keeping the original equation for Difficulty ($D$) the same.
Currently, when $D$ exceeds 10, I believe the algorithm uses something like:
$$D' = \min(D', 10)$$
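To make concrete what the cap throws away, here's a toy sketch (not the actual FSRS code; the function name is made up):

```python
# Toy illustration of the current hard cap (not the actual FSRS implementation).
def clamp_difficulty(d_raw: float) -> float:
    # Whatever the raw Difficulty update produces, everything above 10 is discarded.
    return min(d_raw, 10.0)

print(clamp_difficulty(10.4))  # 10.0
print(clamp_difficulty(14.8))  # 10.0 -- a much "harder" card, yet indistinguishable after the cap
```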
This means any information beyond 10 is lost each time it's capped. Instead, we could use an asymptote with a damping factor. For example:
1. Linear Damping Factor
Let:
$$\Delta D = D' - D$$
where $D'$ is the raw (uncapped) result of the current Difficulty equation. Then compute the new $D'$ as:
$$D' = D + \Delta D \cdot \left(1 - \frac{D}{10}\right)$$
In this equation, as $D$ approaches 10, the factor $1 - \frac{D}{10}$ shrinks toward 0, scaling down the applied increment and causing $D'$ to approach 10 asymptotically.
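A minimal sketch of how the linear version could look (illustrative only; `delta_d` stands for whatever increment the current Difficulty equation produces, and the fixed +2 per lapse is a made-up value):

```python
# Sketch of the linear damping idea (illustrative, not FSRS source code).
def update_difficulty_linear(d: float, delta_d: float) -> float:
    # Apply the raw increment, scaled down as D approaches the ceiling of 10.
    return d + delta_d * (1 - d / 10)

# Repeatedly failing a card (modelled here as a fixed raw increment of +2)
# keeps increasing D, but it never reaches 10, so cards stay distinguishable.
d = 5.0
for _ in range(6):
    d = update_difficulty_linear(d, 2.0)
    print(round(d, 3))  # 6.0, 6.8, 7.44, 7.952, 8.362, 8.689 -> approaches 10 asymptotically
```

As long as the raw increment stays below 10, the update can never overshoot the ceiling, because it only moves $D$ a fraction of the remaining distance to 10.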
2. Exponential Damping Factor
Alternatively, you could parameterize the damping factor similar to how it's done in the Stability equation:
$$D' = D + \Delta D \cdot \left(1 - e^{-w (10 - D)}\right)$$
Here, $w$ would be a new weight: a positive constant that controls the rate of damping. As $D$ approaches 10, the factor $1 - e^{-w (10 - D)}$ approaches 0, so the applied increment vanishes and $D'$ approaches 10 asymptotically.
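And a corresponding sketch for the exponential version (again illustrative; the value of `w` is an arbitrary example, not a fitted weight):

```python
import math

# Sketch of the exponential damping idea (illustrative, not FSRS source code).
def update_difficulty_exp(d: float, delta_d: float, w: float = 0.3) -> float:
    # Scale the raw increment by 1 - e^(-w * (10 - D)), which vanishes as D -> 10.
    return d + delta_d * (1 - math.exp(-w * (10 - d)))

d = 5.0
for _ in range(6):
    d = update_difficulty_exp(d, 2.0)
    print(round(d, 3))  # climbs toward 10 but never reaches it
```

Compared with the linear version, $w$ lets you control where the damping kicks in: a small $w$ damps increments even for easy cards, while a large $w$ only damps them close to the ceiling.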
Implementing an asymptote at 10 could preserve the usefulness of sorting by Difficulty and prevent the loss of information due to capping. It might be worth exploring this adjustment in the next version of FSRS to see if it improves the algorithm.
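To make the sorting argument concrete, here's a toy comparison between the current cap and the linear damping above (the fixed +2 increment per lapse is again just an illustrative stand-in for the real update):

```python
# Two cards with different lapse counts: tied under the cap, still ordered under damping.
def capped(d: float, lapses: int, delta: float = 2.0) -> float:
    for _ in range(lapses):
        d = min(d + delta, 10.0)
    return d

def damped(d: float, lapses: int, delta: float = 2.0) -> float:
    for _ in range(lapses):
        d += delta * (1 - d / 10)
    return d

for lapses in (3, 6):
    print(lapses, capped(5.0, lapses), round(damped(5.0, lapses), 3))
# 3 lapses: cap -> 10.0, damped -> 7.44
# 6 lapses: cap -> 10.0, damped -> 8.689 (the cap ties them; damping keeps them sortable)
```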