Closed: SaschaFroelich closed this issue 3 months ago.
Hi Sascha,
Can you also include a printout of the model? You can do `print(model_reg_v_angle_hier)` and `print(model_reg_v_angle_hier.model)`. It would be helpful to know if HSSM built the model correctly.
One other thing you can try is to specify `C(jokercondition)` in the formulas to indicate that `jokercondition` is a categorical variable. I am not 100% clear on how the `categorical` parameter works in bambi, so explicitly using the `C()` notation to mark categorical variables provides one additional guarantee.
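As a minimal sketch of what that suggestion looks like in practice (only the formula strings are shown; the surrounding `hssm.HSSM(...)` model construction from this thread is assumed and omitted):

```python
# Sketch only: formula strings with explicit categorical coding via C().
# "v", "z", and "jokercondition" are the names used in this thread.
v_formula = "v ~ 1 + C(jokercondition)"
z_formula = "z ~ 1 + C(jokercondition)"

def marks_categorical(formula: str, column: str) -> bool:
    """Return True if the column appears wrapped in C() in the formula."""
    return f"C({column})" in formula

print(marks_categorical(v_formula, "jokercondition"))  # True
```

These strings would then be passed where the thread's original `v ~ 1 + jokercondition` and `z ~ 1 + jokercondition` formulas were supplied.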
Thanks! Paul
Hi Paul,
`print(model_reg_v_angle_hier)` gives:

```
Hierarchical Sequential Sampling Model
Model: angle
Response variable: rt,response
Likelihood: approx_differentiable
Observations: 611

Parameters:

v:
    Formula: v ~ 1 + jokercondition
    Priors:
        v_Intercept ~ Normal(mu: 1.0, sigma: 2.0, initval: 1.0)
        v_jokercondition ~ Normal(mu: 0.0, sigma: 0.25)
    Link: identity
    Explicit bounds: (-3.0, 3.0)
a:
    Prior: Uniform(lower: 0.3, upper: 3.0)
    Explicit bounds: (0.3, 3.0)
z:
    Formula: z ~ 1 + jokercondition
    Priors:
        z_Intercept ~ Uniform(lower: 0.3, upper: 0.7, initval: 0.5)
        z_jokercondition ~ Normal(mu: 0.0, sigma: 0.25)
    Link: identity
    Explicit bounds: (0.1, 0.9)
t:
    Prior: Uniform(lower: 0.001, upper: 2.0)
    Explicit bounds: (0.001, 2.0)
theta:
    Prior: Uniform(lower: -0.1, upper: 1.3)
    Explicit bounds: (-0.1, 1.3)

Lapse probability: 0.05
Lapse distribution: Uniform(lower: 0.0, upper: 10.0)
```
Specifying `C(jokercondition)` does not seem to make any difference.

Best, Sascha
How about `print(model_reg_v_angle_hier.model)`?
Oh sorry, I forgot! Here it is:

`print(model_reg_v_angle_hier.model)` gives:

```
Formula: c(rt, response) ~ 1 + jokercondition
         z ~ 1 + jokercondition
Family: SSM Family
Link: v = identity
      z = identity
Observations: 611
Priors:
    target = v
        Common-level effects
            Intercept ~ Normal(mu: 1.0, sigma: 2.0, initval: 1.0)
            jokercondition ~ Normal(mu: 0.0, sigma: 0.25)

        Auxiliary parameters
            theta ~ Uniform(lower: -0.1, upper: 1.3)
            p_outlier ~ 0.05
            a ~ Uniform(lower: 0.3, upper: 3.0)
            t ~ Uniform(lower: 0.001, upper: 2.0)
    target = z
        Common-level effects
            z_Intercept ~ Uniform(lower: 0.3, upper: 0.7, initval: 0.5)
            z_jokercondition ~ Normal(mu: 0.0, sigma: 0.25)
------
* To see a plot of the priors call the .plot_priors() method.
* To see a summary or plot of the posterior pass the object returned by .fit() to az.summary() or az.plot_trace()
```
Hi Sascha,
Thank you for providing the information! It seems to me that the formulas are still the same in these outputs. Do you mean that after specifying the formulas for v and z as `v ~ 1 + C(jokercondition)` and `z ~ 1 + C(jokercondition)`, nothing has changed?
A few other things that might be helpful to try:

1. Encode `jokercondition` as strings, which will ensure that HSSM does not treat it as continuous.
2. Check whether the effects for `jokercondition` exist in the `InferenceData` object. I'd suspect that they probably don't, since the summary doesn't show them, but it might still be a good idea to check.
3. Set `prior_settings` to `None` for a simple model that you are trying out.

Please let me know if any of these helps.
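Suggestion 1 can be sketched with the standard library alone. The column name and condition values (2.0, 3.0) come from this thread; the `rt`/`response` numbers are placeholders, and the actual data would live in a pandas DataFrame passed to HSSM:

```python
# Minimal stdlib sketch: coerce a numeric condition column to strings so a
# formula parser treats it as categorical rather than continuous.
rows = [
    {"rt": 0.52, "response": 1, "jokercondition": 2.0},
    {"rt": 0.61, "response": 0, "jokercondition": 3.0},
]
for row in rows:
    # "2.0" -> "2": drop the float formatting while converting to str
    row["jokercondition"] = str(int(row["jokercondition"]))

print(sorted(row["jokercondition"] for row in rows))  # ['2', '3']
```

With pandas, the equivalent one-liner would be an `astype(str)` on the column before building the model.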
Hi Paul,
thanks for getting back. Your suggestion to look in the `InferenceData` object was spot on. Indeed, `z_jokercondition` was there, and I can print its inference results with `print(az.summary(infer_data_trace, var_names=['z_jokercondition']))`. Simply printing the full summary with `print(az.summary(infer_data_trace))` puts it at the very end (after the inferences for the individual trials), which was new and confusing. This is something I should have been able to solve by myself, so my apologies for wasting your time. And thank you for your assistance.
Best, Sascha
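The `var_names` filtering idea can be shown without ArviZ installed. Here is a dependency-free illustration of selecting just the rows of interest, mirroring `az.summary(idata, var_names=['z_jokercondition'])`; the variable names are taken from the model printout above, and the numbers are placeholders:

```python
# Toy stand-in for a summary table keyed by variable name (values are fake).
summary = {
    "v_Intercept": 1.0,
    "v_jokercondition": 0.10,
    "z_Intercept": 0.50,
    "z_jokercondition": 0.02,
    "z[0]": 0.49,  # trial-level deterministic, one entry per observation
    "z[1]": 0.51,
}
var_names = ["z_jokercondition"]

# Keep only the requested variables, as az.summary does with var_names.
filtered = {name: val for name, val in summary.items() if name in var_names}
print(filtered)  # {'z_jokercondition': 0.02}
```

In real ArviZ, `var_names` also supports exclusion and partial matching, which is handy for dropping the long tail of trial-level deterministics.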
Hi Sascha,
I am glad everything worked! In fact, we have a wrapped `.summary()` method on the HSSM class that has some built-in filters to help folks get rid of the deterministics in the `InferenceData`. You can just call `model.summary()` after sampling. You don't even need to import `arviz` :)
I updated HSSM from 0.2.1 to 0.2.3 (and numpyro 0.14.0 -> 0.15.0, pymc 5.15.1 -> 5.16.1). Now the model omits inference for the different `jokercondition` levels of the `z` parameter. Here's the model:
This is how the inference result would look before (only 10 samples for testing, so no convergence):
This is how the inference comes out now (again only 10 samples):
And then results for the individual `z` in each trial. So basically, `z_Intercept`, `z_jokercondition[2.0]`, and `z_jokercondition[3.0]` are suddenly not inferred anymore?

Best, Sascha