thiswillbeyourgithub / AnnA_Anki_neuronal_Appendix

Using machine learning on your anki collection to enhance the scheduling via semantic clustering and semantic similarity
GNU General Public License v3.0

Optimize new queue? #14

Closed: ghost closed this issue 2 years ago

ghost commented 2 years ago

Hey! Hope you're doing fine! :) I was wondering if it's possible to optimize new cards by their content, based on what one has already learned (is:review). If one learns sentences and they're all pretty similar in content, one ends up learning too much of the same... Just an idea! Have a great day

thiswillbeyourgithub commented 2 years ago

Hi!

That's not a bad idea! I've had it in the past but can't exactly remember why I didn't pursue it :/

Would you mind trying and reporting back?

Btw the tone of your message made it really nice to read and made me happy, have a great day too!

ghost commented 2 years ago

Cool! While editing the file I noticed this: stopwords_lang=["swedish", "english", "french"],

I'm currently learning Chinese and Swedish; do I need to edit this line for those languages, or something else? Also, where do I put the deck:"my_deck" part? This is how my file looks atm:


                 deckname=None,
                 reference_order="order_added",  # any of "lowest_interval", "relative overdueness", "order_added"
                 task="filter_review_cards", # any of "filter_review_cards", "bury_excess_review_cards", "bury_excess_learning_cards"
                 target_deck_size="80%",  # format: 80%, 0.8, "all"
                 stopwords_lang=["swedish", "english", "french"],
                 rated_last_X_days=4,
                 score_adjustment_factor=(1, 0.5),
                 field_mappings="field_mappings.py",
                 acronym_file="acronym_file.py",
                 acronym_list=None,

                 # others:
                 minimum_due=15,
                 highjack_due_query=True,
                 highjack_rated_query=True,
                 log_level=2,  # 0, 1, 2
                 replace_greek=True,
                 keep_OCR=True,
                 tags_to_ignore=None,
                 tags_separator="::",
                 fdeckname_template=None,
                 show_banner=True,
                 skip_print_similar=False,
thiswillbeyourgithub commented 2 years ago

Hi!
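
stopwords_lang just tells AnnA which languages' most common words to ignore when vectorizing, so listing the languages your cards actually contain should be enough. A sketch (I left Chinese out because whether a Chinese stopword list is available depends on the stopword source):

    stopwords_lang=["swedish", "english"],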

ghost commented 2 years ago

Cool, I understand stopwords now! What would I need to set the highjack values to? I read the README but it doesn't list the possible values.

thiswillbeyourgithub commented 2 years ago

Open Anki's browser and look for cards using a search query, for example deck:"my_deck" is:due -rated:14 flag:1.

A query like this is how you ask Anki to find cards.

The highjack arguments are set to None by default, which disables them, but they can contain the same kind of query as a string:
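
For example, something like this (same placeholder deck name as above; adapt the queries to your collection):

    highjack_due_query='deck:"my_deck" is:due',
    highjack_rated_query='deck:"my_deck" rated:14',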

Tell me if that's clearer, in which case I'll link to this issue from the README.

ghost commented 2 years ago

Oh! Perfectly understood now... hahaha, I didn't get it at first. They're now like this:

                 highjack_due_query='deck:"Swedish" is:new',
                 highjack_rated_query='deck:"Swedish" is:review',
thiswillbeyourgithub commented 2 years ago

You might want to add something like rated:14 to the rated query, depending on the size of your deck.

Don't forget to tell me if it works :) I also suggest lowering the score adjustment factor to (1, 0.1) to see if that's better.
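
Concretely, something like this (adapting the lines you pasted; untested on my end):

    highjack_rated_query='deck:"Swedish" rated:14',  # only cards rated in the last 14 days
    score_adjustment_factor=(1, 0.1),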

ghost commented 2 years ago

I've got approximately 15,000 sentence cards which I use to mine the language, so I'll try these out and report back! Thanks so much for your time :)

ghost commented 2 years ago

Working!! Swedish worked flawlessly :) Will report back with my Chinese deck.

ghost commented 2 years ago

Hey again! So this error pops up when running the script on my Chinese deck. I'm probably running out of memory because my notebook only has 4 GB of RAM. I googled it and it seems to be a Python problem and not your script's. Anyway, maybe this'll happen to other people, so maybe you need to implement something here?


Vectorizing text using TFIDF: 100%|███████████████████████████████████████████████████████████████████████████| 23366/23366 [00:03<00:00, 6952.34it/s]

Reducing dimensions to 100 using SVD... Explained variance ratio after SVD on Tf_idf: 98.2%

Computing distance matrix on all available cores...
Killed
thiswillbeyourgithub commented 2 years ago

Hi,

I implemented the argument "low_power_mode". If you set it to True, the tokenizer will use unigrams instead of n-grams of length 1 to 5.

This should considerably reduce the amount of computation.
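
To see why, here is a rough standalone illustration of how the feature count explodes with the n-gram range (sklearn here is just for illustration, not necessarily what AnnA does internally):

    from sklearn.feature_extraction.text import TfidfVectorizer

    # toy "cards" standing in for real collection text
    docs = ["jag läser en bok", "jag skriver ett brev", "han läser en tidning"]

    for ngram_range in [(1, 1), (1, 5)]:  # unigrams vs n-grams of length 1 to 5
        vectorizer = TfidfVectorizer(analyzer="word", ngram_range=ngram_range)
        features = vectorizer.fit_transform(docs)
        print(ngram_range, "->", features.shape[1], "features")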

It's currently only in the dev branch; if you test it and it works, I'll merge it into main.

Another thing you might want to test afterwards is lowering TFIDF_dim: currently 100 dimensions are enough for 98.2% of the variance, which means you are wayyy overdoing it.
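
Also, about the Killed line: the pairwise distance matrix alone grows quadratically with the number of cards, so 23k cards is tight on 4 GB of RAM no matter what. Back-of-the-envelope, assuming float64 entries:

    # rough size of a full 23366 x 23366 pairwise distance matrix
    n_cards = 23366
    bytes_per_entry = 8  # float64; halve for float32
    print(round(n_cards ** 2 * bytes_per_entry / 1024 ** 3, 1), "GiB")  # ~4.1 GiB

That single matrix already exceeds your machine's total RAM.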

ghost commented 2 years ago

Reporting back! Working splendidly after allocating more swap to the computer :) Low power mode with TFIDF_dim=60 resulted in python3 not being killed when analyzing a subdeck with 5k cards. Trying TFIDF_dim=100 or 60, with or without low power mode, on my main deck of 23k cards always caused a kill; it never works. Thanks so much for your help!!

thiswillbeyourgithub commented 2 years ago

Muchas gracias por tu mensaje! ("Many thanks for your message"; btw, the name of this software comes from an Argentinian person :) )

I think it's better to use low_power_mode than to reduce the number of dimensions drastically.

That being said, the number of dimensions can and should be reduced anyway if you see it's keeping much more than, say, 70% of the variance, IMO.
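
If you want to eyeball where that cutoff sits without running the whole pipeline, the idea is just this (a standalone sketch with sklearn and a toy corpus, not AnnA's actual code):

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    # toy corpus standing in for card text
    docs = [
        "the cat sat down",
        "the dog ran away",
        "a bird flew past",
        "the cat ran away",
    ]
    X = TfidfVectorizer().fit_transform(docs)

    # keep the smallest dimension count whose printed ratio stays above your threshold
    for dim in (3, 2, 1):
        svd = TruncatedSVD(n_components=dim).fit(X)
        print(dim, round(svd.explained_variance_ratio_.sum(), 3))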

ghost commented 2 years ago

Oh! That's so cool :) Okay, I'll write down 70%... luckily it's working, still explaining no less than 95% of the variance with dim 60, and it's speeding up the process a lot :D