Generate interactive flashcards from your notes using models from OpenAI (ChatGPT), Google (Gemini), Ollama (local LLMs), and more. Or manually create your own to use with the quiz UI.
Hi, first of all: great plugin, thank you!
However, I am currently studying for an AI certification, and it seems that in a large number of cases the answer provided in the quiz file is correct, but it is marked as wrong while taking the quiz.
The quiz I am taking is based on 233 files in 47 folders.
Example:
For the question above I chose "True" (which is correct), but my answer was marked as wrong.
The text in the notes is as follows (two different notes):
Tree Pruning:
To prevent overfitting, Decision Tree models can be pruned by removing branches or nodes that do not significantly improve the model's performance.
Pruning helps to simplify the tree structure and improve the model's generalization to new, unseen data.
Pruning: For decision trees, pruning can limit the depth of the tree to prevent overfitting by removing overly specific branches.
When I look into the quiz file, the answer is correctly shown as True:
[!question] Deep Learning models require large amounts of data to find complex patterns.
[!success]- Answer
True
Example:
For the question above I chose "True" (which is correct), but my answer was marked as wrong.
The related text in the notes is as follows:
Large Language Models (LLMs) are Deep Learning Foundation Models, a class of artificial intelligence models that have been [[Pre-training|trained]] on vast amounts of text data, enabling them to understand and generate human-like language. These models are characterized by their large size, typically containing hundreds of billions of features, which allows them to capture complex patterns and relationships within language.
When I look into the quiz file, the answer is correctly shown as True:
[!question] Deep Learning models require large amounts of data to find complex patterns.
[!success]- Answer
True
Example:
For the question above I chose "True" (which is correct), but my answer was marked as wrong.
The text in the notes is as follows (this actually appears in many notes):
RAG stands for "Retrieval Augmented Generation",
When I look into the quiz file, the answer is correctly shown as True:
[!question] RAG stands for Retrieval Augmented Generation.
[!success]- Answer
True
So even correcting the quiz does not help, as the quiz file is already correct. So far I have only seen this problem with True/False questions.
What am I doing wrong here?
Thanks for your help!
Great catch! You're not doing anything wrong, I just forgot to account for capitalization when checking the answer. I've fixed the problem and will put out a new release later today.
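The fix described here likely amounts to normalizing case before comparing the user's choice against the answer stored in the quiz file. A minimal sketch of the idea (the function names and signatures below are hypothetical illustrations, not the plugin's actual source):

```typescript
// Hypothetical sketch of a True/False answer check.
// A strict string comparison fails when the quiz file stores "True"
// but the UI records the user's choice as "true" (or vice versa).
function isCorrectAnswerStrict(userChoice: string, storedAnswer: string): boolean {
  return userChoice === storedAnswer; // buggy: case-sensitive
}

// Fixed version: normalize case and surrounding whitespace first,
// so "True", "true", and " TRUE " all match.
function isCorrectAnswer(userChoice: string, storedAnswer: string): boolean {
  return userChoice.trim().toLowerCase() === storedAnswer.trim().toLowerCase();
}
```

With the strict comparison, a quiz file containing `True` would mark a `true` selection as wrong, which matches the behavior reported above.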