Closed · scottlaurent closed this 4 months ago
It looks like we just forgot to update the weights in our unit test to the 4.5 defaults.
We should probably fix that :)
The unit test is just there to check that the function still works as expected after an update.
I think LM is right. The tests pass, which confirms the code works, and the weights are the same default starters for 4.5. The difference is just that it doesn't handle 4.5 interval adjustments the same for 'again' (1). This can be confirmed by pasting the same starting params into the Hugging Face previewer and cycling through the same scoring sets.
> it doesn't handle 4.5 interval adjustments the same for 'again' (1)
The previewer treats every review as a long-term review; that is what causes the difference. The two zeros in the unit test mean the card is in the relearning stage.
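Here is a minimal sketch of that in code, assuming the same py-fsrs API the unit test uses (`FSRS`, `Card`, `Rating`, `State`, `f.repeat`). A '1' on a card in the Review state moves it into Relearning with a 0-day interval, which is where the zeros come from:

```python
from datetime import datetime, timezone
from fsrs import FSRS, Card, Rating, State

f = FSRS()  # default parameters; the state transitions don't depend on the weights
card = Card()
now = datetime(2022, 11, 29, 12, 30, 0, 0, timezone.utc)

# A few Good (3) reviews to move the card into the long-term Review state.
for _ in range(3):
    card = f.repeat(card, now)[Rating.Good].card
    now = card.due

# Rating Again (1) now schedules a short-term relearning step, not a new interval.
card = f.repeat(card, now)[Rating.Again].card
print(card.state)           # State.Relearning
print(card.scheduled_days)  # 0 -- the step is minutes long, so it shows as a 0-day interval
```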
Yes, but even if it is showing a relearning step, the intervals that follow the relearning diverge quite a bit, by as much as 50%. I was really curious what the numbers are supposed to be, and I still can't determine the 'correct' intervals for ratings that come after one or two 'agains'. We have been using this library in a small personal project and were trying to write tests for it, using Hugging Face to generate the expected responses for the unit tests. In cycles where there are a few 1's in the middle, the intervals following the 1's are very different (not even close), and it got us wondering what the correct values should be.
This is a super helpful library, by the way.
If you use the same parameters and extract the long-term reviews from the unit test, you will get a consistent result from the previewer.
parameters:
1.14, 1.01, 5.44, 14.67, 5.3024, 1.5662, 1.2503, 0.0028, 1.5489, 0.1763, 0.9953, 2.7473, 0.0179, 0.3105, 0.3976, 0.0, 2.0902
ratings:
3,3,3,3,3,1,3,3,3,3,3
result:
interval history: 0d,5d,16d,1.4m,3.5m,7.9m,12d,25d,1.6m,2.8m,4.9m,8.1m
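To make that concrete, here is a sketch of the same check in code, assuming the py-fsrs API used in the unit test (`FSRS`, `Card`, `Rating`, `f.repeat`) and the weights above. The full history reproduces the unit test, and dropping the 0-day short-term steps leaves the long-term intervals the previewer reports (the previewer collapses the two relearning steps into the single 'again' in its rating sequence):

```python
from datetime import datetime, timezone
from fsrs import FSRS, Card, Rating

f = FSRS()
# Same parameters as above (the unit test's weights).
f.p.w = (1.14, 1.01, 5.44, 14.67, 5.3024, 1.5662, 1.2503, 0.0028, 1.5489,
         0.1763, 0.9953, 2.7473, 0.0179, 0.3105, 0.3976, 0.0, 2.0902)

card = Card()
now = datetime(2022, 11, 29, 12, 30, 0, 0, timezone.utc)
# The unit test's rating sequence: 3,3,3,3,3,3,1,1,3,3,3,3,3
ratings = (Rating.Good,) * 6 + (Rating.Again,) * 2 + (Rating.Good,) * 5

ivl_history = []
for rating in ratings:
    card = f.repeat(card, now)[rating].card
    ivl_history.append(card.scheduled_days)
    now = card.due

print(ivl_history)
# [0, 5, 16, 43, 106, 236, 0, 0, 12, 25, 47, 85, 147]

# Keep only the long-term reviews (nonzero intervals); these line up with
# the previewer's 5d, 16d, 1.4m, 3.5m, 7.9m, 12d, 25d, 1.6m, 2.8m, 4.9m
print([d for d in ivl_history if d > 0])
# [5, 16, 43, 106, 236, 12, 25, 47, 85, 147]
```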
@scottlaurent are we good to close this issue?
Yes. Thanks a lot for the clarity.
https://huggingface.co/spaces/open-spaced-repetition/fsrs4anki_previewer
using these ratings: 3,3,3,3,3,3,1,1,3,3,3,3,3
returns this: interval history: 0d,4d,15d,1.6m,4.9m,1.1y,2.7y,18d,3d,7d,15d,1.0m,2.0m,3.8m
However, your unit test shows: 0, 5, 16, 43, 106, 236, 0, 0, 12, 25, 47, 85, 147
This appears to be due to the 1 causing an immediate 5-minute repeat, whereas on Hugging Face a 1 appears to cause a massive decrease in the interval instead. I can't actually determine which one is in error, as I am unclear on what the expected behavior is.
So this may not be an error in your library, but theirs. Any clarification would be super helpful here.
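The '5 minute repeat' part is easy to check directly; here is a sketch, again assuming the py-fsrs API from the unit test (`FSRS`, `Card`, `Rating`, `f.repeat`):

```python
from datetime import datetime, timezone
from fsrs import FSRS, Card, Rating

f = FSRS()
card = Card()
now = datetime(2022, 11, 29, 12, 30, 0, 0, timezone.utc)

# A few Good (3) reviews to reach the Review state, then a single Again (1).
for _ in range(3):
    card = f.repeat(card, now)[Rating.Good].card
    now = card.due

again = f.repeat(card, now)[Rating.Again].card
print(again.due - now)  # 0:05:00 -- an immediate 5-minute repeat, not a shrunken interval
```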