Thanks @jwdegee! Can you add tests? Also, was it necessary to recompile the cython file (doesn't hurt, just curious)?
Hey @twiecki, how does one go about adding tests? It was not necessary to recompile the cython file (I installed the repo with "python setup.py develop", made the changes, and that worked).
It's probably fine, I'll merge this. Thanks for the contribution!
It would be very interesting to see how this recovers parameters compared to the Bayesian method implemented previously in HDDM. As far as I can tell, the logic is the same for both methods: assume that RTs on successful no-go trials and misses on go trials are missing data. I made some progress with this a year or two ago but life got in the way of completing it.
Hey Sam, indeed it would. I'm definitely a fan of the hierarchical approach. From the discussion group I got the sense that quite a bit of progress was made here, but that no one was 100% confident about the results. Perhaps comparing the hierarchical implementation to this "flat" quantile optimization approach might help. I could look into this at some point.
Here's a super short description of this implementation: we fitted the model based on RT quantiles, using the so-called G-square method. The RT distributions for yes- and no-choices were represented by the 0.1, 0.3, 0.5, 0.7 and 0.9 quantiles, which, along with the associated response proportions, contributed to G-square. In the go/no-go task, a single bin containing the number of no-go choices contributed to G-square (Ratcliff et al., 2016).
So, indeed, RTs for no-go trials are treated as missing data, but the fraction of no-go trials is still used to constrain the fit.
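To make that a bit more concrete, here's a minimal sketch of how G-square could be computed from those bins for one subject. The helper and the numbers below are made up for illustration and are not the actual HDDM code:

```python
import numpy as np

def g_square(observed_counts, predicted_props):
    """Likelihood-ratio statistic: G^2 = 2 * sum(O * ln(O / E)) over bins."""
    observed = np.asarray(observed_counts, dtype=float)
    expected = np.asarray(predicted_props, dtype=float) * observed.sum()
    mask = observed > 0                      # empty bins contribute nothing
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Six RT bins for go responses (edges at the .1/.3/.5/.7/.9 quantiles of the
# observed go RTs) plus one bin holding all no-go responses, 200 trials total.
observed = [15, 30, 30, 30, 30, 15, 50]             # observed counts per bin
predicted = [.08, .16, .15, .14, .13, .09, .25]     # model-predicted proportions
print(g_square(observed, predicted))
```

Minimizing this quantity over the DDM parameters gives the quantile-optimization fit; the last bin is what lets the fraction of no-go responses constrain the parameters even though those trials have no RTs.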
For posterity, here is the thread we are talking about (although the figures seem to be gone!).
IIRC, the Bayesian method also knows about the proportions of the various response types.
I think the thing to do would be to simulate a large amount of regular 2AFC data per set of DDM parameters, then recover the parameters using the G-square and Bayesian methods. Then, filter the same simulated data so that it conforms to go/no-go, and recover again. Critically, the two G-square recoveries should be similar to each other, as should the two Bayesian ones. It is less critical but still informative to determine how similar all of them are to one another, and to the true parameter values; a rough sketch of this procedure follows below.
I would like to do this myself since I have a couple of papers I want to write on this topic, but unfortunately I think it's going to be a long time until I get round to it.
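In case it helps whoever picks this up, here's a rough outline of that simulate-and-recover loop. It assumes HDDM's `hddm.generate.gen_rand_data()`, `optimize('gsquare')` and `sample()` interfaces behave as documented; how the censored go/no-go data need to be coded will depend on this PR's implementation, so that step is only indicated:

```python
import hddm

true_params = {'v': 0.7, 'a': 1.5, 't': 0.3}

# 1) Simulate regular 2AFC data from known DDM parameters.
data, _ = hddm.generate.gen_rand_data(true_params, size=2000)

# 2) Recover on the full data with both methods.
gsq_full = hddm.HDDM(data).optimize('gsquare')   # quantile / G-square fit
bayes_full = hddm.HDDM(data)
bayes_full.sample(2000, burn=500)                # Bayesian fit

# 3) Censor the same data to mimic go/no-go: drop the RTs (but not the
#    counts) of one response type. The exact coding expected by the fitting
#    routines depends on the implementation in this PR.
gonogo = data.copy()
gonogo.loc[gonogo.response == 0, 'rt'] = float('nan')  # placeholder coding

# 4) Re-fit both methods on `gonogo` and compare the recovered parameters
#    to `gsq_full`, to the Bayesian estimates, and to `true_params`.
```

Treat this as an untested outline rather than a working script; the point is just that the full-data and censored-data recoveries come from the same simulated trials, so any divergence is attributable to the missing RTs.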
Method originally described in: Ratcliff, R., Huang-Pollock, C., & McKoon, G. (2016). Modeling individual differences in the go/no-go task with a diffusion model. Decision, 5(1), 42-62 (http://psycnet.apa.org/record/2016-39470-001).