
TAPAS - Translational Algorithms for Psychiatry-Advancing Science
https://translationalneuromodeling.github.io/tapas/
GNU General Public License v3.0

Computing learning rate #262

Open · a-yur opened 8 months ago

a-yur commented 8 months ago

I would like to analyse learning rates as was done by Lawson et al. (2017, Nature Neuroscience). In the paper, the learning rate alpha2 is defined as

alpha2(t) = (muhat(t,1) - muhat(t-1,1)) / da(t,1),

where

muhat(t,1) = s(mu(t-1,2)).
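A minimal numerical sketch of this alpha2 definition, using made-up trajectory values (the array names `muhat1`, `mu2`, `da1` and the binary inputs `u` are illustrative assumptions mirroring the HGF toolbox trajectory fields, not toolbox output):

```python
import numpy as np

def s(x):
    """Logistic sigmoid, the s(.) in muhat(t,1) = s(mu(t-1,2))."""
    return 1.0 / (1.0 + np.exp(-x))

mu2 = np.array([0.0, 0.4, 0.9, 0.7])     # mu(t,2): posterior mean at level 2 (made up)
muhat1 = s(np.roll(mu2, 1))              # muhat(t,1) = s(mu(t-1,2))
muhat1[0] = np.nan                       # no prediction exists on the first trial

u = np.array([1.0, 1.0, 0.0, 1.0])       # binary inputs (made up)
da1 = u - muhat1                         # da(t,1): level-1 prediction error

# alpha2(t) = (muhat(t,1) - muhat(t-1,1)) / da(t,1)
alpha2 = (muhat1 - np.roll(muhat1, 1)) / da1
alpha2[:2] = np.nan                      # needs two consecutive predictions
```

With this convention, alpha2 on trial t compares the current and previous level-1 predictions and scales by the current level-1 prediction error.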

Based on that definition, I would assume that the learning rate alpha3 would be computed as

alpha3(t) = (muhat(t,2) - muhat(t-1,2)) / da(t,2),

where muhat is computed as follows, based on the general definition of muhat from Mathys et al. (2014, Frontiers in Human Neuroscience):

muhat(t,2) = mu(t-1,2)

However, I see that in Lawson et al. (2017, Nature Neuroscience), alpha3 is computed as

alpha3(t) = (mu(t,3) - mu(t-1,3)) / da(t,2).

What is the difference between the two versions of alpha3? Why, in the paper, is alpha2 computed using muhat at the first level, while alpha3 is computed using mu at the third level?
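To make the question concrete, here is a sketch computing both candidate versions of alpha3 on the same made-up trajectories (the names `mu2`, `mu3`, `da2`, `muhat2` are illustrative assumptions mirroring the toolbox fields mu(:,2), mu(:,3), da(:,2); the numbers are arbitrary):

```python
import numpy as np

mu2 = np.array([0.0, 0.4, 0.9, 0.7])        # mu(t,2) (made up)
mu3 = np.array([1.0, 1.1, 1.3, 1.2])        # mu(t,3) (made up)
da2 = np.array([np.nan, 0.5, 0.8, -0.3])    # da(t,2), undefined on trial 0

muhat2 = np.roll(mu2, 1)                    # muhat(t,2) = mu(t-1,2)
muhat2[0] = np.nan

# Version A, by analogy with alpha2: (muhat(t,2) - muhat(t-1,2)) / da(t,2)
alpha3_a = (muhat2 - np.roll(muhat2, 1)) / da2
alpha3_a[:2] = np.nan

# Version B, as in Lawson et al. (2017): (mu(t,3) - mu(t-1,3)) / da(t,2)
alpha3_b = (mu3 - np.roll(mu3, 1)) / da2
alpha3_b[0] = np.nan

# Since muhat(t,2) = mu(t-1,2), version A reduces to
# (mu(t-1,2) - mu(t-2,2)) / da(t,2): it tracks level-2 updates with a
# one-trial lag, whereas version B tracks the current level-3 update,
# so the two generally give different values.
```

On these numbers, version A gives (0.4 - 0.0)/0.8 = 0.5 on trial 2, while version B gives (1.3 - 1.1)/0.8 = 0.25, illustrating that the two formulas are not equivalent.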