Closed GraesonB closed 1 year ago
> changed the API to schedule the card after a rating has been assigned instead of scheduling all possible outcomes and having the API user select from the calculated outcomes.

But many spaced repetition apps show the intervals for all ratings before the user presses a rating button.
Oh I see, that makes sense. That would still be supported here, the user of the API would just need to do the following:
```python
from fsrs import FSRS
from models import Card, Rating

fsrs = FSRS()
card = Card()

# Schedule every possible outcome from the same card
card_again = fsrs.review(card, Rating.Again)
card_hard = fsrs.review(card, Rating.Hard)
card_good = fsrs.review(card, Rating.Good)
card_easy = fsrs.review(card, Rating.Easy)
```
Then figure out whatever logic is required to hook the results up to the buttons.
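For instance, the four scheduled cards could be collected into a mapping and turned into button labels. This is only a sketch: `ScheduledCard` and its `scheduled_days` field are stand-ins for illustration, not the library's real types.

```python
from dataclasses import dataclass

# Minimal stand-in for the library's scheduled card; the field name
# `scheduled_days` is an assumption for illustration only.
@dataclass
class ScheduledCard:
    scheduled_days: int

def button_labels(outcomes: dict[str, ScheduledCard]) -> dict[str, str]:
    """Map each rating name to a human-readable interval for its button."""
    return {rating: f"{card.scheduled_days}d" for rating, card in outcomes.items()}

outcomes = {
    "Again": ScheduledCard(0),
    "Hard": ScheduledCard(1),
    "Good": ScheduledCard(3),
    "Easy": ScheduledCard(7),
}
print(button_labels(outcomes))  # {'Again': '0d', 'Hard': '1d', 'Good': '3d', 'Easy': '7d'}
```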
Could you implement this test for state in the review logs?
Alright, I've made the changes you requested. The `review` method has been updated to return the card and a log as a tuple:

```python
card, log = fsrs.review(card, Rating.Good)
```
Both tests are passing.
Thanks for your contribution. One other question: how can I run the test script?
Ah right! I was using pytest to run the tests, but that should probably be added to the dev dependencies for the project. I'll add it now.
If you're not familiar with pytest, you can run the following from the root of the project:

```shell
pip3 install pytest
python3 -m pytest
```
and it will look for any files that start with "test" in the project and run any functions that start with "test". I reckon you already know that though, because that's how the original test was set up.
Okay, I've added it to the dev dependencies. To install the dev dependencies you can run:
```shell
pip3 install '.[test]'
```
This will install any dependencies that belong to the test group of optional dependencies.
Then you can run:
```shell
python3 -m pytest
```
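In pyproject-based projects, that optional group conventionally looks something like this (a sketch of the packaging convention, not necessarily this repo's exact file):

```toml
[project.optional-dependencies]
test = ["pytest"]
```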
I updated the test and it couldn't pass `test_state`.
Good stuff :-). Is this something you want to merge? Or do you still want to keep a consistent API across all of your FSRS libs? I'm working on a Rust version of this API too.
Oh, my fault. The current version couldn't pass the test. Could you fix it? The log doesn't behave as expected.
> Or do you still want to keep a consistent API across all of your FSRS libs?
At least users of this lib could use the refactored version to implement the same feature.
> Oh, my fault. The current version couldn't pass the test. Could you fix it? The log doesn't behave as expected.
Yeah, no problem. It was passing on my machine originally though. What would be the expected behaviour for the log? It is saving the state after the review; do you want it to save the state from before the review?
> It is saving the state after the review; do you want it to save the state from before the review?
It should save the state of the card before the review, as Anki does.
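A minimal sketch of that contract, with a hypothetical `ReviewLog` that snapshots the card before it is mutated (the names and fields here are illustrative, not the library's actual API):

```python
import copy
from dataclasses import dataclass

@dataclass
class Card:
    state: str = "New"
    reps: int = 0

@dataclass
class ReviewLog:
    # Snapshot of the card as it was *before* the review was applied
    card_before: Card

def review(card: Card) -> tuple[Card, ReviewLog]:
    log = ReviewLog(card_before=copy.deepcopy(card))  # capture pre-review state first
    card.state = "Review"
    card.reps += 1
    return card, log

card = Card()
card, log = review(card)
print(log.card_before.state, card.state)  # New Review
```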
Wait. I recall the reason why the lib scheduled all possible outcomes: it has to ensure the easy interval is larger than the good interval, and the good interval is larger than the hard interval. Besides, I plan to implement the fuzz feature, which also needs to schedule all possible outcomes. Sorry for wasting your time.
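The ordering constraint could be enforced with a clamp over the three candidate intervals, which is why the scheduler needs all outcomes at once (a sketch of the idea, not the library's actual scheduler):

```python
def enforce_ordering(hard: int, good: int, easy: int) -> tuple[int, int, int]:
    """Clamp candidate intervals (in days) so hard < good < easy always holds."""
    hard = min(hard, good)       # hard may never exceed good
    good = max(good, hard + 1)   # good must be strictly above hard
    easy = max(easy, good + 1)   # easy must be strictly above good
    return hard, good, easy

print(enforce_ordering(5, 4, 4))  # (4, 5, 6)
```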
No worries at all! So there are scenarios where the algorithm outputs a shorter interval for Easy than for Hard? How can I recreate an instance like that?
What exactly is the fuzz feature?
Is it implemented in a different library? Maybe I'm misunderstanding, but doesn't the fuzz feature have more to do with knowledge of other cards and not knowledge of other possible ratings?
The fuzz feature is not related to other cards; it is used to randomize the interval. It should keep the same proportion and direction of randomness across all ratings. This pull request is a good example:
By the way, I have thought about this a lot. Many developers don't have much knowledge of spaced repetition, so I think it is our responsibility to implement the fuzz feature.
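The key point, sketched below, is that a single random factor is drawn once and applied to every rating's interval, so the proportion and direction of the fuzz match across all of them. This is illustrative only; the real fuzz ranges vary with interval length.

```python
import random

def fuzz_intervals(intervals: dict[str, int], rng: random.Random) -> dict[str, int]:
    """Apply one shared fuzz factor (here +/-5%, an illustrative range)
    to every candidate interval, in days."""
    factor = rng.uniform(0.95, 1.05)  # drawn once, shared by all ratings
    return {rating: max(1, round(days * factor)) for rating, days in intervals.items()}

fuzzed = fuzz_intervals({"Hard": 10, "Good": 20, "Easy": 40}, random.Random(42))
print(fuzzed)  # e.g. {'Hard': 10, 'Good': 20, 'Easy': 41} — ordering is preserved
```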
I've restructured the code to be a little more conventional and changed the API to schedule the card after a rating has been assigned, instead of scheduling all possible outcomes and having the API user select from the calculated outcomes. So use of the API now looks like this:
Things I'd like to add in the future:
Tests are passing, please let me know your thoughts :-)