coglabuzh / online-exps

Unleash the full potential of JavaScript and supercharge your web experiments with this library for experimental programming - based on jsPsych - work in progress! 🎯

Feature request: WM and LTM test #6

Open hannahdames opened 9 months ago

hannahdames commented 9 months ago

Stimuli (object–color conjunctions or word pairs) are presented sequentially in groups of e.g. 5, followed by a test. After repeating this n times (depending on the number of LTM stimuli needed), there is a final LTM test.

ajit283 commented 8 months ago

Working on this

LeaBartsch commented 7 months ago

Hi Ajit, how far along are you with this? Do you need more input on the actual stimuli (e.g. colored objects) that could be included here?

Further, would it be possible to add the functionality to say that some of the items are only shown once in the entire learning and others repeat N-times (but across different batches of 5 items)?

ajit283 commented 7 months ago

Hi Lea,

Currently, a basic version of the experiment is implemented: in the WM phase, words are presented one after another in groups of n (customizable in the settings) with the instruction to memorize them. Between groups there is a placeholder for a distractor task, which still needs to be determined. In the LTM phase, n of the previously presented words are tested in a two-choice test (one option is an entirely new word, the other the previously shown word).
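The grouping and two-choice logic described above could be sketched roughly like this (the function and type names here are illustrative, not the library's actual API):

```typescript
// Hypothetical sketch of the WM batching and LTM two-choice test construction.

// Split the stimulus list into presentation groups of size n.
function buildWmGroups<T>(stimuli: T[], groupSize: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < stimuli.length; i += groupSize) {
    groups.push(stimuli.slice(i, i + groupSize));
  }
  return groups;
}

interface TwoChoiceTrial {
  old: string;  // previously studied word
  lure: string; // entirely new word
}

// Pair each studied word with a new lure for the final LTM test.
function buildLtmTrials(studied: string[], lures: string[]): TwoChoiceTrial[] {
  return studied.map((word, i) => ({ old: word, lure: lures[i] }));
}
```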

The experiment is currently in the examples folder; however, we will soon move the experiments from there to a new repository, where it should be easier to run them.

I would appreciate input on further stimulus types and whether the current experiment setup is useful for you.

> Further, would it be possible to add the functionality to say that some of the items are only shown once in the entire learning and others repeat N-times (but across different batches of 5 items)?

Yes, so you would want to specify the frequency per word/item, right?

Best, Ajit

LeaBartsch commented 7 months ago

Great, thank you!

Another stimulus type to use would be objects colored in the CIELAB color space, with recall done by presenting the object in grey and having people click on a color wheel placed around it. Here is a link to the stimuli: https://bradylab.ucsd.edu/stimuli.html --> the 540 objects designed to be color-rotated in CIELAB color space
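The color-wheel response described here boils down to mapping a click position back to an angle on the wheel; a minimal sketch (clickToAngle is a hypothetical helper, not part of jsPsych):

```typescript
// Hypothetical: convert a click on a color wheel drawn around the grey object
// into a response angle in degrees, measured from the wheel's center.

function clickToAngle(
  clickX: number,
  clickY: number,
  centerX: number,
  centerY: number
): number {
  // Math.atan2 returns radians in (-PI, PI]; normalize to degrees in [0, 360).
  const deg = (Math.atan2(clickY - centerY, clickX - centerX) * 180) / Math.PI;
  return (deg + 360) % 360;
}
```

The resulting angle could then be compared against the studied color's rotation angle to score response error.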


LeaBartsch commented 7 months ago

And yes, specifying the frequency would be exactly what I want!

ajit283 commented 7 months ago

Understood, working on this now! I'll upload a first version during the next week.

ajit283 commented 7 months ago

Hi Lea, I implemented the ability to specify distinct frequencies. The feature to use colored objects is taking a bit longer, since I want to make it as flexible as possible for different stimulus types. There is now a section in the wiki on how to download and use the experiments in the examples folder, so you can try it out. The frequencies can be specified in the settings.ts file. The colored objects feature should be ready before Christmas.
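A per-item frequency specification along these lines could look roughly as follows (the key name FREQUENCY_MAP matches the setting mentioned later in this thread, but the exact shape in settings.ts may differ, and expandByFrequency is a hypothetical helper):

```typescript
// Hypothetical sketch: map each item to its number of presentations, then
// expand the map into a flat list of presentations that can later be
// distributed across different batches.

const FREQUENCY_MAP: Record<string, number> = {
  apple: 1, // shown only once in the entire learning phase
  table: 3, // repeated three times, across different batches
};

function expandByFrequency(freq: Record<string, number>): string[] {
  return Object.entries(freq).flatMap(([item, n]) =>
    Array(n).fill(item) as string[]
  );
}
```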

LeaBartsch commented 6 months ago

Thank you! Looks great.

ajit283 commented 6 months ago

Hi Lea, I just pushed the experiment with support for colored images to the dev branch. To switch between words and objects, the keys OBJECTS_TYPE and FREQUENCY_MAP need to be adjusted; I have provided examples in the settings.ts file. All the images are in the assets folder, and the rotation angles are defined in coloredObjectsList.ts, which is similar to wordlist.ts in that it provides all the possible stimuli the program can choose from.

One issue is the color rotation, which is different from how it is done on the UCSD page: rotating in LAB is somewhat difficult in JS. Using a more common color format such as HSL would make that easier, and then one could also use the color picker in the LTM phase. I am not sure what the requirements for the color space are, though.
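One way to rotate directly in CIELAB is to rotate the (a, b) chroma plane while keeping L fixed, which is what makes the rotation perceptually uniform; a minimal sketch (the Lab interface and rotateLabHue are hypothetical, and converting the result back to sRGB would still need a LAB → XYZ → RGB pipeline):

```typescript
// Hypothetical sketch: rotate a CIELAB color's hue by rotating the (a, b)
// plane by the given angle, leaving lightness L unchanged.

interface Lab {
  L: number; // lightness, unchanged by the rotation
  a: number; // green–red axis
  b: number; // blue–yellow axis
}

function rotateLabHue({ L, a, b }: Lab, degrees: number): Lab {
  const rad = (degrees * Math.PI) / 180;
  return {
    L,
    a: a * Math.cos(rad) - b * Math.sin(rad),
    b: a * Math.sin(rad) + b * Math.cos(rad),
  };
}
```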

LeaBartsch commented 6 months ago

Hi Ajit, Thanks a lot, I will have a more detailed look later this week. Unfortunately, we do need CIELAB space, as it is perceptually uniform.