Closed — milan-andrejevic closed this issue 1 year ago
Hi Milan - since you posted this issue, we have been in contact, but before I close it, I'll give a brief public answer.
Your strategy is exactly right: simulate in order to find out whether your paradigm generates enough information to estimate all parameters of interest to the required precision. This can be hard to achieve with binary responses because each such response contains at most one bit of information, and precisely estimating, say, 4 parameters with 60 or even 200 bits of information is a big ask, whatever model you're using. The general recommendation is therefore to use responses on a continuous scale (reaction time, location, quantity, etc.).
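To make the "one bit per response" point concrete, here is a small, self-contained Python sketch (standard statistics, not TAPAS code): the Shannon entropy of a single binary response is at most 1 bit, and even for estimating a single probability, 60 or 200 binary trials leave substantial uncertainty.

```python
import math

def bernoulli_entropy_bits(p):
    """Shannon entropy of one binary response, in bits (maximal, 1 bit, at p = 0.5)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def binomial_se(p, n):
    """Standard error of a simple probability estimate from n binary trials."""
    return math.sqrt(p * (1 - p) / n)

# One binary response carries at most 1 bit of information:
print(bernoulli_entropy_bits(0.5))  # → 1.0
print(bernoulli_entropy_bits(0.8))  # ≈ 0.722

# Even a single probability is only loosely pinned down by short sequences:
print(binomial_se(0.5, 60))   # ≈ 0.065
print(binomial_se(0.5, 200))  # ≈ 0.035
```

The standard error shrinks only with the square root of the trial count, which is one way to see why several HGF parameters are hard to recover precisely from a short binary sequence.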
Dear Chris & Team,
I am designing a new paradigm and I would like to test how HGF parameters account for differences in response outcomes across experimental conditions, as well as potentially look into individual differences down the track. My goal is to ensure that the paradigm I am designing, and the learning sequence I am presenting, result in recoverable parameters.
I noticed that there are striking differences across the literature in the number of trials used, ranging from 50 (Siegel et al., 2018, Nat Hum Behav) through 200 (Diaconescu et al., 2014, PLoS Comput Biol) up to 300 and 600 in your introductory HGF papers.
I am wondering whether you have any broad recommendations for achieving greater design efficiency, i.e., sufficiently precise parameter recovery with fewer trials?
For instance, what design features allowed Siegel et al. (2018) to reliably capture effects (effect sizes ranging from r=.35-.62) on $\omega_2$ with only 50 trials?
Do some models (e.g. binary outcome models) require more trials as compared to others (e.g. continuous outcome models)?
Does the number of levels in the HGF also matter (e.g., are models with fewer parameters more easily recoverable)?
Do some features of sequences (e.g. outcome probability schedules for binary outcomes) drive efficiency?
I've been running simulations with the tapas_hgf_binary and tapas_unitsq_sgm models for a while now (generating data and then fitting it using the default priors), and I am struggling to identify feasible sequences of binary outcomes (up to 200 trials) that give sufficiently precise recovery to detect the kinds of effects reported in Siegel et al. (2018). Any advice on how to continue my search would be very helpful! Thanks in advance!
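For readers unfamiliar with the simulate-then-fit workflow described above, here is a minimal Python sketch of the same idea using a toy Rescorla-Wagner learner instead of the HGF (all function names, the probability schedule, and the grid-search fit are made up for illustration; the actual TAPAS workflow uses tapas_simModel and tapas_fitModel in MATLAB):

```python
import math
import random

def simulate(alpha, outcomes, seed=0):
    """Simulate binary choices from a toy learner: value v tracks the outcome
    probability with learning rate alpha; choices probability-match v."""
    rng = random.Random(seed)
    v, choices = 0.5, []
    for u in outcomes:
        choices.append(1 if rng.random() < v else 0)
        v += alpha * (u - v)
    return choices

def neg_log_lik(alpha, outcomes, choices):
    """Negative log-likelihood of the observed choices under learning rate alpha."""
    v, nll = 0.5, 0.0
    for u, y in zip(outcomes, choices):
        p = min(max(v, 1e-6), 1 - 1e-6)  # clamp to avoid log(0)
        nll -= math.log(p if y == 1 else 1 - p)
        v += alpha * (u - v)
    return nll

def recover(outcomes, choices, grid):
    """Grid-search maximum-likelihood estimate of the learning rate."""
    return min(grid, key=lambda a: neg_log_lik(a, outcomes, choices))

# Hypothetical probability schedule: four 50-trial blocks alternating p = 0.8 / 0.2.
rng = random.Random(42)
outcomes = [1 if rng.random() < p else 0
            for p in [0.8] * 50 + [0.2] * 50 + [0.8] * 50 + [0.2] * 50]

choices = simulate(0.3, outcomes, seed=1)     # generate data with a known parameter
grid = [i / 20 for i in range(1, 20)]         # candidate learning rates 0.05 .. 0.95
alpha_hat = recover(outcomes, choices, grid)  # refit and compare to the true 0.3
print(alpha_hat)
```

Repeating this loop over many seeds and sequences, and inspecting the spread of the recovered parameter around its true value, is the generic recipe for judging whether a candidate sequence supports recovery; with the HGF one would do the same but with more parameters and the TAPAS fitting routines.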
Kind regards, Milan
--
Milan Andrejevic
Cognition and Philosophy Lab
Monash University