Closed. cc-willchang closed this issue 10 months ago.
Thank you for using my plugin!
I added a new demonstration. Hope this is helpful to you.
If any questions remain, please don't hesitate to contact me.
Best regards, Daiichiro
Hi @kurokida!
Thank you so much for the reply and demonstration! Unfortunately, I still have some follow-up questions, as it doesn't seem to work quite right in my scenario :(
In my experiment, I need to present the auditory stimulus first (with no visual stimulus at that moment). This seems to differ from your example, in which the visual stimulus appears at the very beginning of the trial.
Next, when the auditory sentence reaches the target word somewhere in the middle (specifically, at the offset of the target word), the visual stimulus should appear right at that moment. As you can see, I use several jsPsych.timelineVariable('offset_time', true) calls to control the precise timing that I have pre-defined in a separate .js file.
When participants see the visual stimulus, they need to press the "s" or "l" key on the keyboard for the lexical decision response. Upon their response, the visual stimulus should disappear, but the auditory stimulus should continue. I also want the auditory sentence to end by itself and proceed to the comprehension question part (the second half of my "token" timeline).
Based on your code and my needs, I made some changes (highlighted with four stars ****) at the end of the first part of the "token" timeline, as follows:
/* initialize jsPsych */
const jsPsych = initJsPsych({
  default_iti: 1000,
  on_finish: function() {
    jsPsych.data.displayData();
  }
});

console.log(`jsPsych Version ${jsPsych.version()}`);

var timeline = [];

var welcome = {
  type: jsPsychHtmlKeyboardResponse,
  stimulus: "Welcome to the experiment. Press any key to begin."
};
timeline.push(welcome);

const audioStimulus = {
  type: jsPsychAudioKeyboardResponse,
  stimulus: function(){
    return `${jsPsych.timelineVariable('file', true)}`;
  },
  choices: "NO_KEYS",
  response_ends_trial: false,
};

const preload = {
  type: jsPsychPreload,
  audio: audioStimulus,
};

const token = {
  timeline: [
    {
      type: jsPsychPsychophysics,
      response_start_time: function(){
        return `${jsPsych.timelineVariable('offset_time', true)}`;
      }, // To prevent participants from responding before the visual target is presented
      trial_duration: function(){
        return `${jsPsych.timelineVariable('file_dur', true) + 3000}`;
      },
      stimuli: [
        {
          obj_type: 'sound',
          file: function(){
            return `${jsPsych.timelineVariable('file', true)}`;
          },
          show_start_time: 0, // from the trial start (ms)
          choices: "NO_KEYS",
          response_ends_trial: false
        },
        {
          obj_type: 'text',
          content: function(){
            return `${jsPsych.timelineVariable('test_word', true)}`;
          },
          show_start_time: function(){
            return `${jsPsych.timelineVariable('offset_time', true)}`;
          }, // from the trial start (ms)
          choices: ['s', 'l'], // The participant can respond to the stimuli using the 's' or 'l' key.
          response_ends_trial: true
        },
      ],
      response_type: 'key',
      // choices: ['s', 'l'], // The participant can respond to the stimuli using the 's' or 'l' key.
      prompt: 'Press the "s" key (word) or "l" key (non-word) to respond.',
      canvas_height: 500,
      ****key_down_func: function(event){
        if (event.key === 's' || event.key === 'l'){
          const stim = jsPsych.getCurrentTrial().stim_array;
          stim[1].show_end_time = 0; ****
        }
      },
    },
    {
      type: jsPsychPsychophysics,
      response_start_time: 0,
      trial_duration: 5000,
      stimuli: [
        {
          obj_type: 'text',
          content: function(){
            return `${jsPsych.timelineVariable('probe', true)}`;
          },
          show_start_time: 0,
          choices: ['s', 'l'], // The participant can respond to the stimuli using the 's' or 'l' key.
          response_ends_trial: true
        },
      ],
      // choices: ['s', 'l'],
      prompt: 'Press the "s" key (yes) or "l" key (no) to respond.',
      canvas_height: 500
    },
  ],
  background_color: '#FFFFFF',
  timeline_variables: stimuli_list,
  randomize_order: true
};
timeline.push(token);

/* start the experiment */
jsPsych.run(timeline);
I've changed the arguments in event.key; also, I assume that jsPsych.getCurrentTrial().stim_array will contain the two stimuli of the current trial (the first being the audio and the second being the visual text), so I use stim[1].show_end_time = 0.
But the problem remains: upon a keyboard response, the audio also ends.
I'm wondering if you have any further insights on this. I feel like the SOA setting between the audio and the visual text makes things a bit complicated.
This is the cognition.run link of my experiment, which may help to illustrate the status of the current experiment: https://ux4ovwszyg.cognition.run/
Thank you so much again for your time! Really appreciate your help!
Best regards, Will
Thank you for your explanation.
To solve the problem, it is necessary to make a clear distinction between properties that can be specified on a stimulus object and those that can be specified on the plugin/trial object.
You can't specify choices and response_ends_trial in a stimulus object (sound or text). These properties can only be specified on the plugin/trial object.
{
  type: jsPsychPsychophysics,
  response_start_time: function(){
    return `${jsPsych.timelineVariable('offset_time', true)}`;
  }, // To prevent participants from responding before the visual target is presented
  trial_duration: function(){
    return `${jsPsych.timelineVariable('file_dur', true) + 3000}`;
  },
  stimuli: [
    {
      obj_type: 'sound',
      file: function(){
        return `${jsPsych.timelineVariable('file', true)}`;
      },
      show_start_time: 0, // from the trial start (ms)
    },
    {
      obj_type: 'text',
      content: function(){
        return `${jsPsych.timelineVariable('test_word', true)}`;
      },
      show_start_time: function(){
        return `${jsPsych.timelineVariable('offset_time', true)}`;
      }, // from the trial start (ms)
    },
  ],
  response_type: 'key',
  choices: "NO_KEYS", // You shouldn't specify "s" or "l" here.
  prompt: 'Press the "s" key (word) or "l" key (non-word) to respond.',
  // You probably don't need to specify the response_ends_trial property.
  canvas_height: 500,
  key_down_func: function(event){
    if (event.key === 's' || event.key === 'l'){
      const stim = jsPsych.getCurrentTrial().stim_array;
      stim[1].show_end_time = 0; // This is correct!
    }
  },
},
Note that key_down_func can ignore the choices setting ("NO_KEYS") and receive key input. In addition, key_down_func doesn't terminate the trial.
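To illustrate this behavior, here is a standalone sketch in plain JavaScript (no jsPsych; the stimulus objects are simplified stand-ins for the plugin's stim_array):

```javascript
// Sketch: key_down_func receives raw keydown events even when choices is
// "NO_KEYS", so it can filter keys itself and hide the text stimulus
// without ending the trial.
let resp_key = null;
const stim_array = [
  { obj_type: 'sound' },                     // stim_array[0]: the audio keeps playing
  { obj_type: 'text', show_end_time: null }, // stim_array[1]: the visual target
];

function key_down_func(event) {
  if (event.key !== 's' && event.key !== 'l') return; // ignore other keys
  resp_key = event.key;
  stim_array[1].show_end_time = 0; // hide the text immediately; the trial continues
}

key_down_func({ key: 'a' }); // ignored: not a response key
key_down_func({ key: 's' }); // recorded, and the text stimulus is flagged to disappear
console.log(resp_key, stim_array[1].show_end_time);
```

Only the sound object is untouched by the handler, which is why the audio keeps playing.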
Best, Daiichiro
Hi @kurokida:
Thanks so much for your elaboration and modification; I think it is getting much closer to my goal now! The stimulus problem is fixed (the audio is no longer terminated upon a keyboard response to the visual stimulus!). However, the new problem is that the program is not recording the RT and the "s"/"l" key response for the visual stimulus (the lexical decision task).
I think this is because we set choices to "NO_KEYS". But just as you said, this is necessary to achieve my aforementioned goal for the stimuli.
I've tried (1) moving key_down_func before choices, and (2) checking the "key_press" column in the output data, but the data is still missing.
I would appreciate your suggestion on this very much, and I'll keep trying in the meantime!
Thanks!
Best regards, Will
I'm glad the problem is being resolved.
About recording responses, see this change log.
If any questions remain, please don't hesitate to contact me.
Best regards, Daiichiro
Hi @kurokida:
Thank you so much for the solution! Now the responses can be recorded. However, I tried to change the referenced variable stim_onset in your code to fit my needs, as I would like the RT to reflect the keyboard response time after the visual stimulus onset, rather than the audio stimulus onset.
Here's my modified code in key_down_func:
key_down_func: function(event){
  if (stim_onset === null) return;
  if (event.key === 's' || event.key === 'l'){
    visual_onset = jsPsych.timelineVariable('offset_time');
    RT = performance.now() - visual_onset;
    if (event.key === 's') resp_key = 's';
    if (event.key === 'l') resp_key = 'l';
    const stim = jsPsych.getCurrentTrial().stim_array;
    stim[1].show_end_time = 0;
  }
},
I tried to use jsPsych.timelineVariable('offset_time') to query the visual stimulus onset time from my predefined .js file, but it seems the program did not capture the values and returned null RTs.
For now, I get the RT from visual stimulus onset to keyboard response in a more post-hoc way by manipulating the output data, with something like data.lexical_decision_RT = RT - jsPsych.timelineVariable('offset_time') in the on_finish function.
But I'm still wondering if there's a way to use something like performance.now() to capture the time at the visual stimulus onset and return the RT right in the moment.
The other thing is about the setting I made to only permit participants to respond after the visual stimulus onset:
response_start_time: function(){
  return `${jsPsych.timelineVariable('offset_time', true)}`
},
This setting seems to be overridden now, which allows participants to respond even from the start of the audio file. I'm not sure whether it could be due to the use of the on_start property. In this case, if a participant presses the 's' or 'l' key before the predefined show_start_time of the visual stimulus, the visual stimulus will not show up at all in the trial.
I'm attaching the full code below just in case I have unintentionally omitted some important details in my description:
const priming = {
  type: jsPsychPsychophysics,
  response_start_time: function(){
    return `${jsPsych.timelineVariable('offset_time', true)}`;
  }, // To prevent participants from responding before the visual target is presented
  trial_duration: function(){
    return `${jsPsych.timelineVariable('file_dur', true) + 3000}`;
  },
  stimuli: [
    {
      obj_type: 'sound',
      file: function(){
        return `${jsPsych.timelineVariable('file', true)}`;
      },
      show_start_time: 0, // from the trial start (ms)
    },
    {
      obj_type: 'text',
      content: function(){
        return `${jsPsych.timelineVariable('test_word', true)}`;
      },
      show_start_time: function(){
        return `${jsPsych.timelineVariable('offset_time', true)}`;
      }, // from the trial start (ms)
    },
  ],
  response_type: 'key',
  choices: 'NO_KEYS', // Key input is handled in key_down_func instead.
  prompt: 'Press the "s" key (word) or "l" key (non-word) to respond.',
  canvas_height: 500,
  key_down_func: function(event){
    if (stim_onset === null) return;
    if (event.key === 's' || event.key === 'l'){
      RT = performance.now() - stim_onset;
      if (event.key === 's') resp_key = 's';
      if (event.key === 'l') resp_key = 'l';
      const stim = jsPsych.getCurrentTrial().stim_array;
      stim[1].show_end_time = 0;
    }
  },
  on_start: function(priming){
    stim_onset = performance.now(); // Time the trial started
  },
  on_finish: function(data){
    data.lexical_decision_RT = RT - jsPsych.timelineVariable('offset_time');
    data.lexical_decision_response = resp_key;
    data.type = jsPsych.timelineVariable('type');
    data.context = jsPsych.timelineVariable('context');
    data.priming = jsPsych.timelineVariable('priming');
    // initialize for the next trial
    stim_onset = null;
    resp_key = null;
    RT = null;
  }
};
Really appreciate it!
Best regards, Will
I think I have solved the problem.
The two key points are as follows:
Point 1
const visual_onset = jsPsych.timelineVariable('offset_time', true); // "true" is needed.
RT2 = performance.now() - visual_onset; // What you want.
Point 2
key_down_func: function(event){ // Note that this function ignores most plugin settings (e.g., choices and response_start_time).
  if (plugin_start_time === null) return;
  if (performance.now() - plugin_start_time < jsPsych.timelineVariable('offset_time', true)) return;
I attach the complete program file. It will work properly in the psychophysics-demos folder. I have renamed the variable stim_onset to plugin_start_time so that its meaning is appropriate in this file.
While investigating the problem, I noticed that the first trial slowed down the playback of the audio file. Presumably this is due to preloading of the relatively long audio file. Unfortunately, I am unable to resolve this issue. The problem probably does not occur after the second trial, so practice trials would work around it.
Another thing I noticed, although I have not checked it strictly, is that the audio files don't stop at exactly the expected time; that is, the audio may take slightly extra time to stop. This might be because this program forces the audio file to stop in the middle.
Best regards, Daiichiro
Hi @kurokida:
Thank you very much for the solutions! I think point 2 is now resolved and works properly.
Regarding point 1, I found that RT2 was recorded but the values are a bit strange (something like 22047.943, in ms?). I was wondering if it could be due to different time units used by performance.now() and jsPsych.timelineVariable('offset_time', true) (which is in ms). I tried console.log(performance.now()) to check and found that value very large as well.
However, if this is the case, it's also confusing to me why if (performance.now() - plugin_start_time < jsPsych.timelineVariable('offset_time', true)) return; works properly, but RT2 = performance.now() - visual_onset; does not.
Thanks!
Hi @cc-willchang ,
Thank you for your suggestions. I noticed my mistake.
// RT2 = performance.now() - visual_onset; // Sorry, I mistook.
RT2 = performance.now() - (visual_onset + plugin_start_time); // This is correct.
As you said, performance.now() and jsPsych.timelineVariable('offset_time', true) use different time references.
Note that plugin_start_time is in the former unit (a timestamp since page load), not the latter (milliseconds relative to the trial start).
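A small numeric illustration of the two time references (the values are made up):

```javascript
// performance.now() counts milliseconds since the page loaded, whereas
// offset_time is milliseconds relative to the trial (plugin) start.
const plugin_start_time = 20000; // performance.now() timestamp when the trial began
const visual_onset = 1500;       // offset_time: the target appears 1500 ms into the trial
const key_press = 22047.943;     // performance.now() timestamp of the key press

// Wrong: subtracts a trial-relative offset from a page-load timestamp,
// so the "RT" stays inflated by plugin_start_time (~20547.9 here).
const wrongRT = key_press - visual_onset;

// Correct: first express the visual onset on the performance.now() clock (~547.9 here).
const RT2 = key_press - (visual_onset + plugin_start_time);

console.log(wrongRT, RT2);
```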
Could you check one more time?
Best regards, Daiichiro
Hi @kurokida:
Thank you so much for your explanation! I have revised the code as you suggested, and now RT2 correctly records the values I need.
I really appreciate your kind and timely help along the way; it has helped me pick up many more details of jsPsych and your wonderful plugin.
Thanks again!
Best regards, Will
Hi @kurokida! I am trying to create a cross-modal lexical decision task and host it on cognition.run. In each trial, participants will hear a sentence (the auditory stimulus). Somewhere in the middle of the sentence (aligned with the offset of specific target words), a string of letters will pop up on screen for them to make a lexical decision (the visual stimulus). It is really nice to use jspsych-psychophysics to control the SOA between the two stimuli.
However, as my title suggests, I want to end the visual stimulus when participants make their lexical decision response by keyboard, but not to terminate the auditory sentence, as I want them to listen to the rest of it.
For now, when I make a keyboard response, the whole trial ends and proceeds to the next trial. I've tried to set "response_ends_trial" to false for the auditory stimulus and to true for the visual stimulus, but it did not work. I'm wondering if there's any way to set this up using parameters in jspsych-psychophysics.
Here's my current code:
Thank you very much!