lnccbrown / HSSM

Development of HSSM package

Custom Group-Level Priors Not Propagating to Individual-Level Priors When Hierarchical=True #431

Open gingjehli opened 1 month ago

gingjehli commented 1 month ago

When `hierarchical=True` is set and custom priors are specified, the group-level priors do not automatically propagate to the individual-level priors. Instead, the individual-level priors fall back to their predefined defaults. It would be good to fix this and/or mention it in the documentation. For now, it seems that priors for all levels need to be set individually (and `hierarchical` has to be set to `False`) if one wants to fully leverage self-specified priors.

Here is an example:

```python
sm1_full2 = hssm.HSSM(
    data=data,
    model="angle",
    hierarchical=True,
    prior_settings=None,
    loglik_kind="approx_differentiable",
    include=[
        {
            "name": "v",
            "formula": "v ~ 1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 + (1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 | participant_id)",
            "prior": {
                "Intercept": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG": {"name": "Normal", "mu": 0.0, "sigma": 2.5},  # Using a "large" prior for covariates
                "sub_cond_averse": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
            },
            "link": "identity",
            "bounds": (-10.0, 10.0),
        },
    ],
)
sm1_full2
```

Below is a snippet of the corresponding output - the participant-level priors are still the default priors. This is misleading:

```
Formula: v ~ 1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 + (1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 | participant_id)
Priors:
    v_Intercept ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_rewardLOG ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_averse ~ Normal(mu: 0.0, sigma: 2.5)
    v_conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_rewardLOG:conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_averse:conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
    v_condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_rewardLOG:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_averse:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_rewardLOG:conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_sub_cond_averse:conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
    v_1|participant_id ~ Normal(mu: 0.0, sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_rewardLOG|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_averse|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_conflictDomain|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_rewardLOG:conflictDomain|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_averse:conflictDomain|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_rewardLOG:condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_averse:condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_conflictDomain:condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_rewardLOG:conflictDomain:condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
    v_sub_cond_averse:conflictDomain:condition2|participant_id ~ Normal(mu: Normal(mu: 0.0, sigma: 0.25), sigma: Weibull(alpha: 1.5, beta: 0.30000001192092896))
Link: identity
Explicit bounds: (-10.0, 10.0)
```

So, to fully propagate the self-defined priors, `hierarchical` had to be set to `False`, as shown below:

```python
sm1_full1 = hssm.HSSM(
    data=data,
    model="angle",
    hierarchical=False,
    prior_settings=None,
    loglik_kind="approx_differentiable",
    include=[
        {
            "name": "v",
            "formula": "v ~ 1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 + (1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 | participant_id)",
            "prior": {
                "Intercept": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG": {"name": "Normal", "mu": 0.0, "sigma": 2.5},  # Using a "large" prior for covariates
                "sub_cond_averse": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_rewardLOG:conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "sub_cond_averse:conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "conflictDomain:condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "conflictDomain": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
                "condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
            },
            "link": "identity",
            "bounds": (-10.0, 10.0),
        },
    ],
)
sm1_full1
```

In this case, the priors are now correctly propagated to the participant layer :)

Here is an output snippet:

```
v:
    Formula: v ~ 1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 + (1 + (sub_cond_rewardLOG + sub_cond_averse)*conflictDomain*condition2 | participant_id)
    Priors:
        v_Intercept ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_rewardLOG ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_averse ~ Normal(mu: 0.0, sigma: 2.5)
        v_conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_rewardLOG:conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_averse:conflictDomain ~ Normal(mu: 0.0, sigma: 2.5)
        v_condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_rewardLOG:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_averse:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_rewardLOG:conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_sub_cond_averse:conflictDomain:condition2 ~ Normal(mu: 0.0, sigma: 2.5)
        v_1|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 2.5))
        v_sub_cond_rewardLOG|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 2.5373001098632812))
        v_sub_cond_averse|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 2.486599922180176))
        v_conflictDomain|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 5.008999824523926))
        v_sub_cond_rewardLOG:conflictDomain|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 4.127600193023682))
        v_sub_cond_averse:conflictDomain|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 3.574899911880493))
        v_condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 5.0))
        v_sub_cond_rewardLOG:condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 3.5920000076293945))
        v_sub_cond_averse:condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 3.4658000469207764))
        v_conflictDomain:condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 5.855800151824951))
        v_sub_cond_rewardLOG:conflictDomain:condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 5.573299884796143))
        v_sub_cond_averse:conflictDomain:condition2|participant_id ~ Normal(mu: 0.0, sigma: HalfNormal(sigma: 4.914400100708008))
    Link: identity
    Explicit bounds: (-10.0, 10.0)
```

Though, you can see that the scaling of the sigmas at the individual level isn't done automatically. It would be good to fix this and/or mention all of the above in the documentation.
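One workaround sketch (abbreviated to a single covariate; it assumes the `prior` dictionary also accepts group-specific keys such as `"1|participant_id"` and nested dicts for hyperpriors, which is not verified in this thread) would be to spell out the participant-level priors next to the common ones:

```python
import hssm

# Sketch only: the group-specific keys ("1|participant_id",
# "sub_cond_rewardLOG|participant_id") and the nested hyperprior dicts are
# assumptions about the prior syntax, not confirmed by this issue.
explicit_prior = {
    "Intercept": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
    "sub_cond_rewardLOG": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
    "1|participant_id": {
        "name": "Normal",
        "mu": 0.0,
        "sigma": {"name": "HalfNormal", "sigma": 2.5},
    },
    "sub_cond_rewardLOG|participant_id": {
        "name": "Normal",
        "mu": 0.0,
        "sigma": {"name": "HalfNormal", "sigma": 2.5},
    },
}

sm1_explicit = hssm.HSSM(
    data=data,  # same dataframe as in the models above
    model="angle",
    hierarchical=False,
    prior_settings=None,
    loglik_kind="approx_differentiable",
    include=[
        {
            "name": "v",
            "formula": "v ~ 1 + sub_cond_rewardLOG + (1 + sub_cond_rewardLOG | participant_id)",
            "prior": explicit_prior,
            "link": "identity",
            "bounds": (-10.0, 10.0),
        },
    ],
)
```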

frankmj commented 1 month ago

I agree this needs to be clearer in the docs - `hierarchical=True` is actually only applied to the intercepts. My original request for this argument was to have one setting that makes the entire model hierarchical without having to spell out the full notation with all the corresponding `| participant_id` terms (so that it would default to HDDM-style behavior, where everything is hierarchical). But it turned out that doing so in a way that is fully general for all situations was not trivial, so the implementation started with just making the intercepts hierarchical, not all regression terms. In your case you also specified the regression terms to be hierarchical (explicitly with `| participant_id`), so those were added, but then the priors aren't propagated to them.

Michael


gingjehli commented 1 month ago

To clarify: are you saying that setting `hierarchical=True` only defines intercepts at the individual level? If so, it seems that any additional regressors in the equation are only estimated at the group level. This implies that activating `hierarchical=True` doesn't truly establish a hierarchical model (unless the model is limited strictly to intercepts in your regression specifications).

Given that `hierarchical=True` seems more complicated to implement, may I suggest two things? First, it would already be super useful to be able to specify priors for classes of coefficients (rather than for each coefficient separately); I've submitted a separate feature request for this. Second, a warning message would be great when not all priors for the coefficients in a specified regression are set.
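In the meantime, one way to approximate class-level priors with the current interface is to build the per-coefficient prior dictionary programmatically. Below is a minimal sketch using the coefficient names from the example above:

```python
# Build one prior dict for a whole "class" of coefficients instead of
# repeating the same entry by hand (coefficient names taken from the
# example above).
wide_normal = {"name": "Normal", "mu": 0.0, "sigma": 2.5}

covariates = [
    "sub_cond_rewardLOG",
    "sub_cond_averse",
    "conflictDomain",
    "condition2",
    "sub_cond_rewardLOG:conflictDomain",
    "sub_cond_averse:conflictDomain",
    "sub_cond_rewardLOG:condition2",
    "sub_cond_averse:condition2",
    "conflictDomain:condition2",
    "sub_cond_rewardLOG:conflictDomain:condition2",
    "sub_cond_averse:conflictDomain:condition2",
]

v_prior = {"Intercept": wide_normal, **{term: wide_normal for term in covariates}}
```

The resulting dictionary can then be passed as the `"prior"` entry of the `include` item, exactly as in the models above.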


frankmj commented 1 month ago

Yes, exactly - `hierarchical=True` for now does not actually implement a fully hierarchical model when there are regressors; it was just the first step, to make this possible for a simple model without regressors. It will be fixed, revised, or removed altogether if it turns out that everything needs to be spelled out for each model by the user (in which case we will just make sure we have very good documentation showing how to do that).

I also agree with your other suggestions.


digicosmos86 commented 1 month ago

There is an added layer to this - when `hierarchical` is `True` and `prior_settings` is not specified, `prior_settings` will automatically be set to `"safe"`, and that might explain why individual-level priors are not propagated correctly. I'll look into why the default priors are not overridden by user-specified priors.

Also, to clarify: right now the only other effect of turning on `hierarchical=True` is that any parameter that does not have a formula set automatically gets `{param_name} ~ 1 + (1|participant_id)`. The documentation reflects exactly this. I have seen the `hierarchical` setting misused a lot, simply because the name can be misleading. Alex and I will discuss this in more depth and see what the best way forward might be.
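As a minimal sketch of what that amounts to in practice (the explicit form below is just an illustration of the behavior described above, with `data` standing for the user's dataframe from the report; default priors and bounds are filled in by HSSM):

```python
import hssm

# With hierarchical=True, prior_settings falls back to "safe" when not given,
# and any parameter without a formula gets an intercept-only group term.
m_implicit = hssm.HSSM(data=data, model="angle", hierarchical=True)

# Roughly the same thing spelled out explicitly for one parameter, e.g. "a"
# (illustrative only):
m_explicit = hssm.HSSM(
    data=data,
    model="angle",
    prior_settings="safe",
    include=[{"name": "a", "formula": "a ~ 1 + (1 | participant_id)"}],
)
```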

Purely from a UX perspective, I think it might be a good idea to have an option to set a global formula, which would allow people to set one formula for all parameters. We could then also allow global priors, so that the same priors are propagated to every parameter.
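As a rough illustration of what such a global formula/prior option could replace, one can already build the same `include` entry for several parameters in a loop today (a sketch only; the parameter names, the shared formula, and the shared prior are illustrative, and per-parameter links/bounds would still need care):

```python
import hssm

# Sketch: emulate a "global" formula and prior by constructing one include
# entry per parameter (parameter names and formula are illustrative).
shared_rhs = "1 + condition2 + (1 + condition2 | participant_id)"
shared_prior = {
    "Intercept": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
    "condition2": {"name": "Normal", "mu": 0.0, "sigma": 2.5},
}

include = [
    {"name": param, "formula": f"{param} ~ {shared_rhs}", "prior": shared_prior}
    for param in ["v", "a", "t"]
]

model = hssm.HSSM(
    data=data,  # the user's dataframe from the report above
    model="angle",
    hierarchical=False,
    prior_settings=None,
    loglik_kind="approx_differentiable",
    include=include,
)
```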

AlexanderFengler commented 1 month ago


While the global formula/prior option will create new pain points to make it work nicely, I think it is quite a good idea :).