Closed: Wei940127 closed this issue 9 months ago.
I think it did this because there was only 1 item left in the set. Wei, please attach the data to the issue. @MegaGeese @JRustyHaner Does the learning session terminate when it would need to repeat the same item back to back, i.e., when only 1 item is left? We need to keep it picking until there are none left...
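For reference, a minimal sketch of that intended behavior (hypothetical helper, not MoFaCTS internals): the session ends only when no items remain available, with a back-to-back repeat allowed when it is the only item left.

```js
// Hypothetical selection sketch: keep picking until nothing is available,
// allowing a back-to-back repeat only when it is the sole remaining item.
function nextItem(items, lastIndex) {
  const available = items.filter(it => it.available !== 'false');
  if (available.length === 0) return null; // session is genuinely finished
  // prefer an item other than the one just presented
  const spaced = available.filter(it => it.clusterIndex !== lastIndex);
  const pool = spaced.length > 0 ? spaced : available; // fall back to the repeat
  return pool[Math.floor(Math.random() * pool.length)];
}
```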
Here is the data for the learning session; the item with this issue is {text:chicken - }. Modelunit.txt
Here is the .zip file for the learning unit. HMRT2T1model.zip
@MegaGeese I analyzed this carefully, and it appears it stops because it would have to pick the same item twice... Chicken-farmer is in the condition that needs to have 3 correct outcomes before being marked available = false.
I just tried it and the results show that the model does not seem to be working. When “forceSpacing” is set to True, the issue with the last high-initial-learning item (requiring three consecutive correct answers) is still there. When “forceSpacing” is set to False, it seems that the items are not randomly selected: some high-initial-learning items are presented consecutively. Below are the presentation sequences for the last two items; {text: rose-} was presented 2 times and {text: clown-} was presented 4 times, both consecutively, which is not ideal for enhancing the initial learning degree. Please see the image below for reference. Also attached are the zip files in case you need them. ModelSpacingFalse.zip ModelSpacingTrue.zip
True was not the right setting. That could not be expected to work. I'm not sure why you would try it.
I will investigate this issue, but it seems to me that you're complaining partly about the algorithm that you put in the system, which results in a lot of massed practice. You may need to rethink that algorithm that I suggested to you. I was never presenting it as the right algorithm; I was presenting it as an option that matched many of your requirements.
Basically @Wei940127 this is not a statistically unlikely event given the algorithm... Has it done this twice or 3 times?
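As a quick sanity check on the "not statistically unlikely" point (plain JavaScript, not MoFaCTS code): if each trial picks uniformly at random among n available items, a back-to-back repeat occurs with probability 1/n per trial, so repeats like the one reported are expected over a session.

```js
// Count back-to-back repeats under uniform random selection.
function countRepeats(nItems, nTrials) {
  let repeats = 0;
  let prev = -1;
  for (let t = 0; t < nTrials; t++) {
    const pick = Math.floor(Math.random() * nItems);
    if (pick === prev) repeats++;
    prev = pick;
  }
  return repeats;
}

console.log(countRepeats(6, 1000)); // expect roughly 1000 / 6 ≈ 167
```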
OR, are you saying the real bug is that "{text: clown-} was presented 4 times" and it was correct all 4 times? That can't happen, so is that the real problem (which you do not specifically mention)?
@Wei940127 Send the whole data trace
@Wei940127 Strangely, I'm reading the top issue and don't recognize it... We never finished this issue, and I haven't even been aware of it. The work Rusty did was not for this issue; it was for a different issue. Not sure how they were confused.
Here is the data: ID 0228Modeltest1 is the test with "forceSpacing" set to False, and ID 0228Modeltest2 is the test with "forceSpacing" set to True. HMRT2T1model-data.txt
I do see instances where p.available is getting set to false but the system is still able to choose that item, so there do seem to be some bugs in this part of mofacts.
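If that's what's happening, one guard would be to filter on availability before the selection step. A minimal sketch with hypothetical names (chooseItem and the params shape are stand-ins, not MoFaCTS's actual code):

```js
// Exclude items whose availability was set to 'false' before choosing.
function chooseItem(params) {
  const candidates = params.filter(p => p.available !== 'false');
  if (candidates.length === 0) return null; // unit complete
  // pick the available candidate with the highest assigned probability
  return candidates.reduce((best, p) => (p.probability > best.probability ? p : best));
}

const params = [
  { clusterIndex: 0, available: 'false', probability: 0.9 }, // stale probability
  { clusterIndex: 1, available: 'drill', probability: 0.4 },
];
console.log(chooseItem(params).clusterIndex); // 1 -- never the unavailable item
```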
I think I've found the root cause of the issue. It looks like when a question is answered, the p values aren't updated in time before the probability function runs.
I had answered a stimulus twice, but only [1] would show in p.clusterOutcomeHistory for that cluster.
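If that's the root cause, the fix is an ordering one: the outcome has to land in p.clusterOutcomeHistory before the probability script reads it. A self-contained sketch of that ordering (all names here are stand-ins, not actual MoFaCTS internals):

```js
// Stand-in for the unit's calculateProbability script.
function calculateProbability(p) {
  const last3 = p.clusterOutcomeHistory.slice(-3);
  if (last3.length === 3 && last3.every(o => o === 1)) p.available = 'false';
  if (p.available !== 'false') p.probability = Math.random();
  return p;
}

// Record the outcome FIRST, then evaluate availability for the next pick.
function onAnswer(p, wasCorrect) {
  p.clusterOutcomeHistory.push(wasCorrect ? 1 : 0);
  return calculateProbability(p);
}

const p = { available: 'drill', clusterOutcomeHistory: [1, 1] };
console.log(onAnswer(p, true).available); // 'false' after the third correct in a row
```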
For the last presented item assigned to the 3-consecutive-correct condition, the model failed to distinguish between Correct and Incorrect responses, resulting in only three repetitions of that item regardless of its Outcome value. Below is the code.
"calculateProbability": "p.available = (p.stimStudyTrialCount <= 1) ? 'study' : 'drill';\n \n switch (true) {\ncase (p.clusterIndex >= 0 && p.clusterIndex <= 5):\nif (p.stimStudyTrialCount >= 2) p.available = 'false';\nbreak;\ncase (p.clusterIndex >= 6 && p.clusterIndex <= 11):\nif (p.stimSuccessCount >= 1) p.available = 'false';\nbreak;\ncase (p.clusterIndex >= 12 && p.clusterIndex <= 17):\n if (pFunc.arrSum(p.clusterOutcomeHistory.slice(Math.max(p.clusterOutcomeHistory.length-3,0), p.clusterOutcomeHistory.length))==3) p.available = 'false';\nbreak;\n}\n \nif (p.available !== 'false') {\np.probability = Math.random();\n } \n return p;\n"