translationalneuromodeling / tapas

TAPAS - Translational Algorithms for Psychiatry-Advancing Science
https://translationalneuromodeling.github.io/tapas/
GNU General Public License v3.0

2-level HGF for binary inputs #35

Closed: ghost closed this issue 3 years ago

ghost commented 5 years ago

Dear Dr. Mathys,

I am trying to use the 2-level version of the HGF for binary inputs. What I am doing is to modify [c.mu_0mu; c.logsa_0mu; c.logkamu; c.ommu] in tapas_hgf_binary_config.m to fix the third level to 0. Is that correct? Should I do something else? Or do you have a sample script for the 2-level HGF? I am just wondering whether what I am doing is correct or not.

Thank you very much in advance,

Best, Bin

chmathys commented 5 years ago

Dear Bin,

If you start with the default configuration file tapas_hgf_binary_config.m as it is when you download TAPAS, then all you need to do is something like

c.logkamu = [log(1), -Inf];
c.logkasa = [     0,    0];

c.ommu = [NaN,  -3,  -6];
c.omsa = [NaN, 4^2,   0];
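If helpful for context: these overrides go into a copy of tapas_hgf_binary_config.m, and they must come before the point near the bottom of that file where the full prior vectors are assembled from the individual fields. A sketch (the fitting call assumes the standard tapas_fitModel workflow; y and u are placeholder response and input vectors):

```matlab
% --- in your copy of tapas_hgf_binary_config.m ---

% Kappas: fix kappa_2 to exp(-Inf) = 0, cutting the third level off
c.logkamu = [log(1), -Inf];
c.logkasa = [     0,    0];

% Omegas: fix omega_3 (prior variance 0); its value then doesn't matter
c.ommu = [NaN, -3, -6];
c.omsa = [NaN, 4^2, 0];

% ... the shipped config assembles the priors below these fields:
% c.priormus = [c.mu_0mu, c.logsa_0mu, c.rhomu, c.logkamu, c.ommu];
% c.priorsas = [c.mu_0sa, c.logsa_0sa, c.rhosa, c.logkasa, c.omsa];

% --- fitting, in your analysis script ---
est = tapas_fitModel(y, u, ...
    'tapas_hgf_binary_config', ...
    'tapas_unitsq_sgm_config', ...
    'tapas_quasinewton_optim_config');
```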

Two conditions need to be met:

- The coupling kappa_2 between the second and third levels must be fixed to zero (prior mean of log(kappa_2) set to -Inf, prior variance 0), which cuts the third level off from the information flow.
- All parameters at the third level need to be fixed (i.e., their prior variances need to be 0). This tells the optimization algorithm not to optimize them, which wouldn't make sense since they are cut off from any information flow, so their values don't matter.

Best wishes, Christoph


ghost commented 5 years ago

Dear Dr. Mathys,

Thank you very much for your quick reply. Everything is clear now. I fixed the third level to 0 as follows:

% Mus and sigmas
c.mu_0mu    = [NaN,        0, 0];
c.mu_0sa    = [NaN,        0, 0];
c.logsa_0mu = [NaN, log(0.1), 0];
c.logsa_0sa = [NaN,        0, 0];

% Rhos
c.rhomu = [NaN, 0, 0];
c.rhosa = [NaN, 0, 0];

% Kappas
c.logkamu = [log(1), -Inf];
c.logkasa = [     0,    0];

% Omegas
c.ommu = [NaN,  -3, 0];
c.omsa = [NaN, 4^2, 0];

Is everything alright?

Best, Bin

chmathys commented 5 years ago

That looks good.

Best wishes, Christoph


paulsowman commented 5 years ago

Hi, apologies for reopening an old thread.

I have been trying to do the same as above, i.e. instantiate a 2-level model by cutting off the 3rd level of the binary HGF model. However, I find that with those settings (suggested by Bin), any binary sequence longer than 369 elements returns:

Error using tapas_hgf_binary (line 223)
Variational approximation invalid. Parameters are in a region where model assumptions are violated.

According to the help, this error can be alleviated by reducing the omegas or kappas, i.e.:

% - If you get an error saying that the prior means are in a region where model
%   assumptions are violated, lower the prior means of the omegas, starting with
%   the highest level and proceeding downwards.
%
% - Alternatives are lowering the prior means of the kappas, if they are not
%   fixed, or adjusting the values of the kappas or omegas, if any of them
%   are fixed.

This approach works with the 3-level model; I can get longer sequences (up to 2000 elements) by changing to:

c.ommu = [NaN, -6, -9];

If the 3rd-level omega is set to 0, however, this doesn't work.

The Stefanics et al. JNeurosci paper (DOI: https://doi.org/10.1523/JNEUROSCI.3365-17.2018) suggests this should work, so I must be making an error somewhere.

Many thanks, Paul

ianthe00 commented 5 years ago

Hi paulsowman,

This thread relates to another thread about the HGF not working for longer sequences of trials: https://github.com/translationalneuromodeling/tapas/issues/54#issuecomment-479309151

In that thread we suggested changing the priors on omega to lower values (for both omega2 and omega3). So instead of

c.ommu = [NaN, -3, -6];

we changed it to these values a few months back (for 400 trials):

c.ommu = [NaN, -4, -7];

It seems that for 2000 trials this works (see the other thread):

c.ommu = [NaN, -6, -9];

Your issue now is that for a 2-level HGF model you tried to set omega3 = 0. However, that is not necessary: what you need to change, in my understanding, is the variance of omega3 to 0, so that omega3 is effectively fixed and not allowed to vary. Omega3 itself does not need to be 0. This means the model assumes that participants are not updating their beliefs about environmental volatility (level 3).

So, technically, if for 2000 trials you keep

c.ommu = [NaN, -6, -9];

but set

c.omsa = [NaN, 4^2, 0];

and change kappa as C. Mathys suggested:

c.logkamu = [log(1), -Inf];
c.logkasa = [     0,    0];

then you effectively have a 2-level HGF that works for 2000 trials. This is what we did for our 400-trial, 2-level HGF to compare it to the 3-level HGF, and it works.

Quoting Mathys: "- All parameters at the third level need to be fixed (i.e., their prior variances need to be 0). This tells the optimization algorithm not to optimize them, which wouldn’t make sense since they are cut off from any information flow, so their values don’t matter."
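Putting the thread's suggestions together, the full set of overrides for a 2-level binary HGF that works over long sequences would then look something like this (values as discussed above; applied in tapas_hgf_binary_config.m before the prior vectors are assembled):

```matlab
% Kappas: fix kappa_2 = exp(-Inf) = 0 to cut off the third level
c.logkamu = [log(1), -Inf];
c.logkasa = [     0,    0];

% Omegas: lower omega_2's prior mean for long sequences (2000 trials),
% and fix omega_3 by setting its prior variance to 0; its mean value is
% then irrelevant, since no information reaches the third level
c.ommu = [NaN, -6, -9];
c.omsa = [NaN, 4^2, 0];
```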

Hope this helps,

Best, ianthe

paulsowman commented 5 years ago

Thanks again for your help. This is great. Much appreciated. Paul

milan-andrejevic commented 2 years ago

Hi,

I am trying to run simulations on this two-level binary model. I have constrained log-kappa (c.logkamu) to -Inf, fixed all the third-level parameters in line with the instructions above, and I manage to find Bayes-optimal parameter values with that setup that show a reasonable belief trajectory.

However, when I try to simulate beliefs using these optimal parameter values (using tapas_simModel function and the same binary perceptual model 'tapas_hgf_binary'), I get an error:

Error using tapas_hgf_binary (line 215)
Variational approximation invalid. Parameters are in a region where model assumptions are violated.

Error in tapas_simModel (line 164)
[r.traj, infStates] = prc_fun(r, r.p_prc.p);

I've tried shifting these optimal parameters around and lowering omegas and kappas, to no avail. I also tried using the eHGF, which throws a similar error:

Error using tapas_simModel (line 170)
NaNs in infStates (muhat). Probably due to numerical problems when taking logarithms close to 1.

The only thing that fixes the problem is changing c.logkamu to any value other than -Inf.

It's pretty strange for optimal parameters to be in a region where model assumptions are violated, so I reckon it has something to do with forcing a two-level structure in this way. I'm just wondering if you have any advice on how to solve this or work around it?

Thanks heaps!

Best, Milan

milan-andrejevic commented 2 years ago

Hi,

I think I just figured out the answer to my own question: it looks like the tapas_simModel function takes parameters in native (linear) space, not in log space (i.e., exp(-Inf) = 0, so kappa should be passed as 0 rather than -Inf).
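For anyone hitting the same issue, a sketch of the corrected simulation call. The native-space parameter vector layout [mu_0, sa_0, rho, ka, om] follows the HGF demo; the input sequence u and the response-model parameter are placeholders:

```matlab
% Placeholder binary input sequence
u = [0 1 1 0 1 0 1 1]';

% Perceptual parameters for tapas_hgf_binary in NATIVE (linear) space:
% [mu_0 (x3), sa_0 (x3), rho (x3), ka (x2), om (x3)]
% Note ka(2) = 0 here, i.e. exp(-Inf): passing -Inf itself would be the
% log-space value and triggers the "model assumptions violated" error.
prc_params = [NaN 0 0, NaN 0.1 0, NaN 0 0, 1 0, NaN -3 0];

% Simulate responses with the unit-square sigmoid model (zeta = 5)
sim = tapas_simModel(u, 'tapas_hgf_binary', prc_params, ...
                     'tapas_unitsq_sgm', 5);
```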

Best, Milan