translationalneuromodeling / tapas

TAPAS - Translational Algorithms for Psychiatry-Advancing Science
https://translationalneuromodeling.github.io/tapas/
GNU General Public License v3.0

Sensory uncertainty in Gaussian jumping estimation #68

Closed: HuSuyi closed this issue 3 years ago

HuSuyi commented 5 years ago

Dear Prof. Dr. Mathys,

Currently, I am working on my Ph.D. thesis with the hierarchical Gaussian filter, which I find fascinating. I have a question about the update of the sensory uncertainty in the file "tapas_hgf_jget.m":

piuhat(k) = 1/exp(kau*mua(k-1,1)+omu);

Since in this case the precision of the input u is updated after each time step, depending on the difference between mux and the input u, I would like to ask why piuhat is not also updated using piuhat from the previous time step. I would expect something like:

piuhat(k) = 1/(1/piuhat(k-1) + exp(kau*mua(k-1,1)+omu));
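(For concreteness, here is a minimal runnable sketch of the two forms side by side; the values of kau, omu, mua_prev and piuhat_prev below are made-up placeholders, not values taken from the toolbox:)

% Illustrative comparison of the two candidate precision updates (placeholder values)
kau = 1; omu = -2;            % example kappa_u and omega_u
mua_prev = 0.5;               % example mu_alpha at trial k-1
piuhat_prev = 4;              % example piuhat(k-1), used only by the second form
piuhat_file = 1/exp(kau*mua_prev + omu);                    % form quoted above from tapas_hgf_jget.m
piuhat_alt  = 1/(1/piuhat_prev + exp(kau*mua_prev + omu));  % form I would have expected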

I would be very grateful if you could help me. Thank you very much for your time.

Best regards, Suyi Hu

chmathys commented 5 years ago

Dear Suyi,

The quick answer is that these are the update equations that you get when you invert the generative model according to the method described in the original HGF paper (Mathys et al., 2011).

The deeper reason for the nature of this update is that u does not depend on its own previous value in the generative model (in the model graph, it’s a diamond, not a hexagon). That’s why the previous value of the precision of the prediction (piuhat) is not relevant for its current value.
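(To illustrate, here is a minimal forward simulation of the input level, written as a sketch rather than the toolbox code; the parameter values and state trajectories are made up, and the names kau, omu and mux follow your snippets above:)

% Sketch: each input u(k) is generated from the current state and alpha only
n = 100;
kau = 1; omu = -2;                      % example parameters
alpha = randn(n,1);                     % hypothetical trajectory of alpha
mux   = cumsum(0.1*randn(n,1));         % hypothetical trajectory of the hidden mean x
u     = mux + sqrt(exp(kau*alpha + omu)).*randn(n,1);
% u(k-1) never appears on the right-hand side, so the precision of the
% prediction of u(k) carries no memory of piuhat(k-1).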

Best wishes, Chris


HuSuyi commented 5 years ago

Dear Prof. Dr. Mathys,

Thank you very much for your quick response. Your explanation makes a lot of sense. I have another question about the inversion of the generative model in this case: does that mean that for p(u(k) | alpha(k), ka, om) I don't have to calculate:

p(u(k) | alpha(k), ka, om) = p(u(k) | u(k-1), alpha(k), ka, om) * q(u(k-1));

p(u(k) | alpha(k), ka, om) = N(u(k); u(k-1), exp(ka*alpha(k) + om)) * N(u(k-1); u(k-1), sigma_u(k-1));

Since u(k) is not dependent on u(k-1), p(u(k) | alpha(k), ka, om) should be:

p(u(k) | alpha(k), ka, om) = N(u(k); u(k), exp(ka * alpha(k) + om))
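(As a tiny numerical sketch of evaluating a density of this form; all values below are placeholders, and the mean is written generically as mu_k to stand for whatever mean the derivation puts there:)

% Placeholder values, purely for illustration
u_k = 0.3; mu_k = 0; ka = 1; om = -2; alpha_k = 0.5;
v   = exp(ka*alpha_k + om);                        % variance implied by the alpha level
lik = exp(-(u_k - mu_k)^2/(2*v)) / sqrt(2*pi*v);   % N(u_k; mu_k, v)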

Please correct me if I'm wrong.

Thank you very much for your help.

Best Regards

Suyi

chmathys commented 5 years ago

Dear Suyi,

The update equations as we derived them are in the code (in the file tapas_hgf_jget.m). There should be no need to change them.

Best wishes, Chris


HuSuyi commented 5 years ago

Dear Prof. Dr. Mathys,

Thank you very much for your help.

Best Regards

Suyi