The target order should be determined once per participant by the experimental code, or it should use a randomization that always produces the exact same targets for the same participant.
We found out because we hit 'continue run' a few times: it restarted at the same trial but with a different target each time. When that happens we may not end up with all targets in a block, which will affect the statistics and results.
This is how I do it:
set the seed based on the participant ID and some fixed string (the experiment name works well):
seed(sum([ord(c) for c in 'illusory cursor tracking']) + (cfg['id'] * 9999))
Beware, though: this only works if cfg['id'] is always a number. If it can be a string, first concatenate it with the experiment name and then apply ord() to each character of the combined string.
generate target / trial order for the whole experiment (based on specs)
use that order throughout...
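The three steps above can be sketched as follows. This is a minimal illustration, not the actual experiment code: the target set, block count, and function name are hypothetical placeholders for the real specs, and the ID is converted to a string so string IDs work too.

```python
import random

def make_trial_order(participant_id, experiment_name='illusory cursor tracking',
                     targets=(1, 2, 3, 4), n_blocks=8):
    """Deterministic trial order: the same participant always gets the same order.

    `targets` and `n_blocks` are hypothetical stand-ins for the real design.
    """
    # Step 1: seed from the experiment name plus participant ID. Converting
    # the ID to a string first avoids assuming cfg['id'] is a number.
    seed_str = experiment_name + str(participant_id)
    random.seed(sum(ord(c) for c in seed_str))

    # Step 2: generate the full target order for the whole experiment,
    # shuffling within each block so every target appears once per block.
    order = []
    for _ in range(n_blocks):
        block = list(targets)
        random.shuffle(block)
        order.extend(block)
    return order

# Step 3: use that order throughout. Because the seed is fixed per
# participant, 'continue run' can rebuild the identical order and
# resume at the correct trial:
assert make_trial_order(7) == make_trial_order(7)
```

The key property is that the order is a pure function of the participant ID, so restarting the session can never change which targets remain in a block.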