Heechberri opened this issue 1 year ago
Hi Vae - just very quickly, scanning your problem, I suspect that at some point you're taking the log (or square root) of a negative number, which gives you a complex result. This probably happens because you're using RTs instead of log-RTs. When you use a Gaussian model with RTs, negative values are possible according to that model, which doesn't make sense. The solution is to use either a log-Gaussian model with RTs or a Gaussian model with log-RTs.
Hi @chmathys
Thanks for the reply!
I have generated another set of parameters using log-RTs, and all the parameters look fine :) Thank you for your help.
Another issue is that I would like to include the subjects' actual response (categorical) alongside the RT and stimulus train when using HGF. I came across a poster from your lab at CCN this year discussing this exact application. Is the script for this already available in tapas?
Lastly, I still can't figure out how to get the equations for expected uncertainty and unexpected uncertainty in the logrt_linear_whatworld response model. Could you give me some hints? Below are the equations that I am referring to:
```matlab
% mu1 contains the actually occurring transition -> multiply with
% mu1hat to get probability of that transition (other elements are
% zero)
otp = mu1.*mu1hat; % observed transition probabilities (3-dim)
otps3 = sum(otp, 3, 'omitnan'); % sum over 3rd dim
otps23 = sum(otps3, 2, 'omitnan'); % sum over 2nd dim
surp = -log(otps23); % by Shannon's formula, the information contained in a datum D is -log P(D)
surp(r.irr) = [];

% Expected uncertainty
% ~~~~~~~~~~~~~~~~~~~~
euo = mu1.*sa2; % expected uncertainty of observed transition (3-dim)
euos3 = sum(euo, 3, 'omitnan'); % sum over 3rd dim
euos23 = sum(euos3, 2, 'omitnan'); % sum over 2nd dim
to = mu1.*mu2; % tendency of observed transition (3-dim)
tos3 = sum(to, 3, 'omitnan'); % sum over 3rd dim
tos23 = sum(tos3, 2, 'omitnan'); % sum over 2nd dim
eu = tapas_sgm(tos23,1).*(1-tapas_sgm(tos23,1)).*euos23; % transform down to 1st level
eu(r.irr) = [];

% Unexpected uncertainty
% ~~~~~~~~~~~~~~~~~~~~~~
ueu = tapas_sgm(tos23,1).*(1-tapas_sgm(tos23,1)).*exp(mu3); % transform down to 1st level
ueu(r.irr) = [];
```
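(For readers following along: the masking-and-summing logic of the surprise computation above can be sketched in NumPy for illustration. `observed_surprise` is a hypothetical name; the arrays mirror the 3-dim `mu1`/`mu1hat` arrays in the MATLAB snippet.)

```python
import numpy as np

def observed_surprise(mu1, mu1hat):
    """Shannon surprise of the observed transition.

    mu1    : one-hot indicator of the transition that actually occurred
    mu1hat : predicted transition probabilities (same shape)
    Both are (trials, n, n) arrays, mirroring the 3-dim arrays in
    the tapas_logrt_linear_whatworld snippet.
    """
    otp = mu1 * mu1hat                    # keep only the observed transition
    p = np.nansum(otp, axis=(1, 2))       # collapse dims 2 and 3 -> P(D) per trial
    return -np.log(p)                     # information content -log P(D)

# toy check: one trial, 2x2 transition matrix
mu1 = np.array([[[0.0, 1.0], [0.0, 0.0]]])        # observed transition (one-hot)
mu1hat = np.array([[[0.5, 0.25], [0.1, 0.15]]])   # predicted probabilities
surp = observed_surprise(mu1, mu1hat)             # -log(0.25)
```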
Thank you so much!!!
Regards, Vae
Hi Vae - @alexjhess might be able to help you with multimodal responses.
Regarding the uncertainty calculations, much of that is about finding the right values in multidimensional arrays. I suggest using the debugger to look at how this unfolds step by step. The only substantive thing that happens is that once we have the desired values (euos23 and exp(mu3)), we need to transform them down to the 1st level so they are on the same scale as the other predictors in the linear model. An explanation of this transformation is in the supplementary material to Iglesias et al. (2013) https://doi.org/10.1016/j.neuron.2013.09.009.
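(Editor's illustration of the transformation chmathys describes, as a minimal NumPy sketch. It assumes `tapas_sgm(x, 1)` is the logistic sigmoid, so that with x1 = s(x2) a small change dx2 maps to dx1 = s(x2)(1 - s(x2)) dx2; the function name `to_first_level` is hypothetical.)

```python
import numpy as np

def sgm(x, a=1.0):
    # logistic sigmoid, analogous to tapas_sgm(x, 1)
    return a / (1.0 + np.exp(-x))

def to_first_level(quantity, mu2):
    """Scale a higher-level quantity down to the 1st level.

    Because x1 = s(x2), the chain rule gives
    dx1 = s'(x2) dx2 = s(x2) * (1 - s(x2)) * dx2,
    so quantities are multiplied by the sigmoid derivative at mu2.
    """
    return sgm(mu2) * (1.0 - sgm(mu2)) * quantity

mu2 = np.array([0.0, 1.5])    # 2nd-level means
sa2 = np.array([0.4, 0.4])    # 2nd-level variances (expected uncertainty)
mu3 = np.array([-1.0, 0.2])   # 3rd-level means (unexpected uncertainty ~ exp(mu3))

eu = to_first_level(sa2, mu2)           # expected uncertainty at level 1
ueu = to_first_level(np.exp(mu3), mu2)  # unexpected uncertainty at level 1
```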
Hi @Heechberri,
All the functionality you need to fit your models to multiple response data modalities is already implemented in tapas. The most straightforward way to build a customized response model for your application is to create a new obs model where you essentially sum the log likelihood of separate response data modalities. There is no concrete example model implemented in the toolbox yet, maybe we can include one in the next release (@chmathys).
Anyway, we hope to be able to upload a preprint of the work you were referring to, including example code and models, by the end of this month. Feel free to contact me via hess@biomed.ee.ethz.ch in case of delays or if you're in desperate need of example code in the meantime. Hope this is helpful.
All the best, Alex
@chmathys
Thank you for referring me to the Iglesias et al. supplementary material, it really helped me understand more of the transforms. Since $dx_1=\frac{dx_1}{dx_2}\cdot dx_2$ only applies to the relationship between $x_1$ and $x_2$, $s(x)(1-s(x))dx_2$ would only apply to transforms for level 2, right? For level 3 transforms it will be $$dx_1=\frac{dx_1}{dx_2}\cdot\frac{dx_2}{dx_3}\cdot dx_3$$ and that will be $$s(x)(1-s(x))e^{x_3}dx_3$$ since the relationship between $x_2$ and $x_3$ is exponential?
How about those variables that are calculated from two different levels? If I wanted to transform the precision weight at level 3 (which is $\frac{\pi_2}{\pi_3}$) down to level 1, would I apply `tapas_sgm(tos23,1).*(1-tapas_sgm(tos23,1)).*exp(mu3).*psi3`, or do I have to do it separately for $\pi_2$ (transform from level 2 to 1) and $\pi_3$ (transform from level 3 to 2) and then calculate the precision weight at level 3 from these transformed values?
@alexjhess Thanks for your response! I see... do you mean editing fitModel.m to 1) accept two response models, 2) sum the logLl from both response models, and 3) sum the negLogJoint from both logLl values and their priors as well?
Here are the code snippets from fitModel.m that I am talking about.
```matlab
trialLogLls = obs_fun(r, infStates, ptrans_obs);
logLl = sum(trialLogLls, 'omitnan');
negLogLl = -logLl;

logObsPriors = -1/2.*log(8*atan(1).*r.c_obs.priorsas(obs_idx)) - 1/2.*(ptrans_obs(obs_idx) - r.c_obs.priormus(obs_idx)).^2./r.c_obs.priorsas(obs_idx);
logObsPrior = sum(logObsPriors);

negLogJoint = -(logLl + logPrcPrior + logObsPrior);
```
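(Side note on the prior line above: `8*atan(1)` is just `2*pi`, so `logObsPriors` is the ordinary Gaussian log-density of each parameter under its prior, with `priorsas` a variance. A minimal Python sketch, with the hypothetical name `gaussian_log_prior`:)

```python
import numpy as np

def gaussian_log_prior(x, mu, sa):
    """Per-parameter Gaussian log-density, mirroring the fitModel.m line.

    sa is a *variance* here; note that 8*atan(1) == 2*pi, so the first
    term is -0.5 * log(2*pi*sa).
    """
    return -0.5 * np.log(2.0 * np.pi * sa) - 0.5 * (x - mu) ** 2 / sa

# sanity check: 8*atan(1) really is 2*pi
two_pi_check = 8.0 * np.arctan(1.0)
lp = gaussian_log_prior(0.0, 0.0, 1.0)  # standard normal log-density at 0
```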
Hahaha... as I am a novice coder, I am not confident in coding these by myself so I think I will still email you for the sample code soon!
Thank you sooo much for your help! looking forward to the publication!!
Regards, Vae
@Heechberri
No, I was thinking of something even simpler than modifying the fitModel.m function, namely creating a new response model that consists of a combination of several response models for different data streams. Your function could look something like this:
```matlab
function [logp, yhat, res, logp_split] = comb_obs(r, infStates, ptrans)

% part of obs model for binary choices
[logp_bin, yhat_bin, res_bin] = tapas_unitsq_sgm(r, infStates, ptrans);

% part of obs model for continuous response data modality (e.g. RTs)
[logp_rt, yhat_rt, res_rt] = tapas_logrt_linear_whatworld(r, infStates, ptrans);

% calculate log data likelihood (assuming independence)
logp = logp_bin + logp_rt;
yhat = [yhat_bin yhat_rt];
res = [res_bin res_rt];
logp_split = [logp_bin logp_rt];

end
```
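(Editor's note: the key assumption in the combination above is conditional independence of the two response streams given the perceptual states, so the joint log-likelihood is just the sum of the per-modality trial-wise log-likelihoods. A minimal NumPy sketch, with the hypothetical name `combined_loglik` and NaNs standing in for irregular trials:)

```python
import numpy as np

def combined_loglik(logp_bin, logp_rt):
    """Joint log-likelihood of two conditionally independent response
    streams: sum the trial-wise log-likelihoods of each modality,
    ignoring NaN entries (irregular trials)."""
    return np.nansum(logp_bin) + np.nansum(logp_rt)

logp_bin = np.array([-0.7, -0.3, np.nan])  # e.g. binary-choice trial log-likelihoods
logp_rt = np.array([-1.2, np.nan, -0.9])   # e.g. RT trial log-likelihoods
total = combined_loglik(logp_bin, logp_rt)
```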
Of course you would need to add the corresponding files '_config.m', '_transp.m', '_namep.m' and '_sim.m' that are used in the fitModel and simModel routines of the HGF Toolbox.
Hope this helps, otherwise shoot me an e-mail. :)
Hi @Heechberri,
I just wanted to let you know that we have uploaded a preprint introducing some example HGF response models that simultaneously model binary choices and continuous RTs. You can check it out at https://www.biorxiv.org/content/10.1101/2024.02.19.581001v1 and there are also links to code and data included.
Cheers!
Hi! Thanks! This is awesome!
Regards, Vae
Hi HGF experts,
I am a newbie in coding and computational neuroscience, so please be patient with me ;D
I am using tapas_hgf_whatworld, tapas_logrt_linear_whatworld (edited; edits explained below), and tapas_quasinewton_optim to model the Bayesian inference parameters from the reaction times of a pSRTT task (for experiment details see below; adapted from Marshall et al., 2016).
About half of the model outputs contain complex numbers. Here is a sample of the output:
Changes made to the response model logrt_linear_whatworld (attached).
1. I am using raw RT values (milliseconds/100) instead of log-RT. The RT in milliseconds is divided by 100 to scale the reaction times down closer to the scale of the Bayesian parameters, which are around 10^1 to 10^-2.
2. I changed the response model to RT ~ be0 + be1*(prediction error at level 1) + be2*(precision weight at level 2) + be3*(precision-weighted prediction error at level 3) + be4*(mean of level 3) + be5*(post-error slowing) + ze.
There are two things I couldn't understand about the original script:
1. Why were the trajectories not used directly in the response model instead of the infStates?
I assumed that Marshall et al., 2016 used the logrt_linear_whatworld script; however, the variables are different from what is reported in Figure 3 of the paper. From Figure 3, I assumed that the variables of interest are the first-level PE, the third-level precision-weighted prediction error, the third-level mean, and post-error slowing. These perceptual parameters can already be found in the traj variable. I tried using the actual traj variables (attached as the whatworld3 family of scripts) and the equations from Mathys et al., 2011/2014 to derive the variables I needed (attached as the whatworld2 family of scripts), and both produced complex numbers.
From my understanding, the first equation in logrt_linear_whatworld calculates Shannon surprise, i.e. -log(probability of seeing that stimulus), but I don't quite understand the other two equations, which brings me to my next question:
2. Why were the calculated response variables "transformed to 1st level" and how are they transformed to the first level?
I am well aware that my edits could have caused the weird problems, and I have spent quite some time trying to troubleshoot myself to no avail... Thank you for all the patience and understanding and help!
Vae
tapas_rt_linear_whatworld2.txt tapas_rt_linear_whatworld3.txt tapas_hgf_whatworld2.txt tapas_hgf_whatworld3.txt tapas_hgf_whatworld3_config.txt tapas_rt_linear_whatworld3_config.txt tapas_rt_linear_whatworld2_config.txt tapas_hgf_whatworld2_config.txt
I also turned on verbose (in tapas_quasinewton_optim_config) for one of the subjects just to see what is going on, and here is the output if it helps. After about 9 iterations the improvements were very small (less than 1); I am not sure why the program is still optimizing.