After about 1 hour and 15 minutes I see a message about the step size:
┌ Info: Found initial step size
│ ϵ = 9.765625e-5
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
I feel as if the default values I'm using for the number of adaptations and the target acceptance rate are impeding this model. Please see my two questions above.
I would be happy to share my data set if anyone else wants to try the hierarchical model I'm testing.
I stopped the original sampling command and restarted the process. This time the initial step size was found quickly. However, the ETA (see below) to complete the sampling is 2 days, 6:29:51. I have no doubt I'm doing something awfully wrong.
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 0.000390625
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 2%|▋ | ETA: 2 days, 6:29:51
I figured out how to specify the number of adaptations and the target acceptance using:
chn = sample(model, NUTS(1000, 0.98), 3_000, progress=true);
With this specification, the sampling ETA is only a little better:
┌ Info: Found initial step size
│ ϵ = 0.000390625
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 4%|█▊ | ETA: 1 days, 12:59:29
Would love some feedback on how I can improve the posterior sampling time to be in line with brms and bambi.
@sreedta8 TuringGLM.jl currently only supports single random-intercept models, i.e. (1 | grouping_variable).
Have you tried coding it straight up using Turing? You could do something like:
@model function varying_slope_ncp(X, idx, y, calls80; n_gr=length(unique(idx)), predictors=size(X, 2))
    # priors
    α ~ Normal(mean(y), 2.5 * std(y))      # population-level intercept
    β ~ filldist(Normal(0, 2), predictors) # population-level coefficients
    σ ~ Exponential(1 / std(y))            # residual SD
    # prior for variance of random slopes
    # usually requires thoughtful specification
    τ ~ truncated(Cauchy(0, 2), 0, Inf)    # group-level SDs slopes
    zⱼ ~ filldist(Normal(0, 1), n_gr)      # NCP group-level slopes
    # likelihood
    ŷ = α .+ X * β .+ (zⱼ[idx] .* τ .* calls80)
    y ~ MvNormal(ŷ, σ)
end;
This is the non-centered parameterization, where:
- X: a matrix of all variables except nrx and mdmidc, i.e. Matrix(select(df, Not([:nrx, :mdmidc])))
- idx: a vector of integer ids for the grouping variable, i.e. df[:, :mdmidc_omt]. You can convert them to Int64 using CategoricalArrays.jl's levelcode function.
- y: a vector of values of the dependent variable, df[:, :nrx]
- calls80: a vector of values of the calls80 variable, df[:, :calls80]
Take a look at this tutorial: https://storopoli.github.io/Bayesian-Julia/pages/10_multilevel_models
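Putting those pieces together, a minimal sketch of how you could assemble the inputs and sample (untested, and assuming a DataFrame df with the columns above, with mdmidc as the grouping column):
using CategoricalArrays, DataFrames, Statistics, Turing

# assemble the inputs described above
X = Matrix(select(df, Not([:nrx, :mdmidc])))  # fixed-effects design matrix
idx = levelcode.(categorical(df[:, :mdmidc])) # integer group ids via levelcode
y = df[:, :nrx]                               # dependent variable
calls80 = df[:, :calls80]                     # predictor with the varying slope

model = varying_slope_ncp(X, idx, y, calls80)
chn = sample(model, NUTS(1_000, 0.98), 3_000)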
@storopoli thank you so much for your feedback. So far I have only tried one random intercept via TuringGLM, using (1 | mdmidc). I will review the link and try a direct specification in Turing and let you know of the results.
What does predictors=size(X, 2) specify?
@storopoli I have specified the following below based on your input and the example models. I got the initial step size quickly. But after 2.5 hours, I still do not see the sampling progress bar (I set Turing.setprogress!(true);).
@model function varying_slope(X, idx, y, calls80; n_gr=length(unique(idx)), predictors=size(X, 2))
    # priors
    α ~ Normal(mean(y), 2.5 * std(y))      # population-level intercept
    β ~ filldist(Normal(0, 2), predictors) # population-level coefficients
    σ ~ Exponential(1 / std(y))            # residual SD
    # prior for variance of random slopes
    # usually requires thoughtful specification
    τ ~ truncated(Cauchy(0, 2), 0, Inf)    # group-level SDs slopes
    zⱼ ~ filldist(Normal(0, 1), n_gr)      # NCP group-level slopes
    # likelihood
    ŷ = α .+ X * β .+ (zⱼ[idx] .* τ .* calls80)
    y ~ MvNormal(ŷ, σ)
end;
X = Matrix(select(hcp, [:calls80, :samples10, :lle50, :spe80, :copay10, :fto10])) # matrix of predictors
idx = hcp[:, :mdmid]
y = hcp[:, :nrx]
calls80 = hcp[:, :calls80]
Turing.setprogress!(true);
model_slope = varying_slope(X, idx, y, calls80; n_gr=length(unique(idx)), predictors=size(X, 2))
chain_slope = sample(model_slope, NUTS(1_000, 0.98), MCMCThreads(), 3_000, 2)
Output:
┌ Info: [Turing]: progress logging is enabled globally
└ @ Turing /home/sreedta/.julia/packages/Turing/GxgQ1/src/Turing.jl:22
┌ Info: [AdvancedVI]: global PROGRESS is set as true
└ @ AdvancedVI /home/sreedta/.julia/packages/AdvancedVI/W2zsz/src/AdvancedVI.jl:15
┌ Warning: Only a single thread available: MCMC chains are not sampled in parallel
└ @ AbstractMCMC /home/sreedta/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:291
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Warning: The current proposal will be rejected due to numerical error(s).
│ isfinite.((θ, r, ℓπ, ℓκ)) = (true, false, false, false)
└ @ AdvancedHMC /home/sreedta/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:47
┌ Info: Found initial step size
│ ϵ = 0.000390625
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
It has been at this output screen now for 3 hours.
If you run with more than 1 thread, the progress only shows the fraction of threads that are fully done (which I personally would like to see changed, but that's another issue).
Try running it again but with:
sample(model_slope, NUTS(1_000, 0.98), 3_000)
This should allow you to get a better time estimate. When you have that, and are satisfied with it, I would change it back to the multi-threaded version.
@ChristianMichelsen thanks for the tip. I have tried that and the process still hangs. The sampling is not being completed. So far the only successful sampling I had was when I used TuringGLM, but it took 10.5 hours in total.
I am transferring this issue to Turing.jl since this is a feature that TuringGLM.jl does not support, and the user is having issues with a raw Turing.jl model, not one generated by TuringGLM.jl.
@sreedta8 could you please post the Turing code you've tried and also the Stan code generated by brms:
make_stancode(bf, data)
If you encounter problems, take a look at the help of make_stancode.
Within the next 2-3 hours I will post it. Thanks so much @storopoli
@storopoli Here is the Stan code from brms. Sorry for the delay. Please note that this is a simplified model, where I'm not controlling for auto-regressive errors in the time-series data. If you would like the Stan code for that, do let me know and I will post it. In addition, if you need the data (1000 physicians with 12 months of data each), let me know so you can test the model on your end as well.
make_stancode(nrx ~ calls80 + samples10 + lle50 + spe80 + copay10 + fto10 + (calls80 | mdmid),
              data = hcp1000, family = gaussian(),
              warmup = 1000, iter = 4000, chains = 2,
              control = list(adapt_delta = 0.98), cores = 2, seed = 123
)
// generated with brms 2.17.0
functions {
/* compute correlated group-level effects
* Args:
* z: matrix of unscaled group-level effects
* SD: vector of standard deviation parameters
* L: cholesky factor correlation matrix
* Returns:
* matrix of scaled group-level effects
*/
matrix scale_r_cor(matrix z, vector SD, matrix L) {
// r is stored in another dimension order than z
return transpose(diag_pre_multiply(SD, L) * z);
}
}
data {
int<lower=1> N; // total number of observations
vector[N] Y; // response variable
int<lower=1> K; // number of population-level effects
matrix[N, K] X; // population-level design matrix
// data for group-level effects of ID 1
int<lower=1> N_1; // number of grouping levels
int<lower=1> M_1; // number of coefficients per level
int<lower=1> J_1[N]; // grouping indicator per observation
// group-level predictor values
vector[N] Z_1_1;
vector[N] Z_1_2;
int<lower=1> NC_1; // number of group-level correlations
int prior_only; // should the likelihood be ignored?
}
transformed data {
int Kc = K - 1;
matrix[N, Kc] Xc; // centered version of X without an intercept
vector[Kc] means_X; // column means of X before centering
for (i in 2:K) {
means_X[i - 1] = mean(X[, i]);
Xc[, i - 1] = X[, i] - means_X[i - 1];
}
}
parameters {
vector[Kc] b; // population-level effects
real Intercept; // temporary intercept for centered predictors
real<lower=0> sigma; // dispersion parameter
vector<lower=0>[M_1] sd_1; // group-level standard deviations
matrix[M_1, N_1] z_1; // standardized group-level effects
cholesky_factor_corr[M_1] L_1; // cholesky factor of correlation matrix
}
transformed parameters {
matrix[N_1, M_1] r_1; // actual group-level effects
// using vectors speeds up indexing in loops
vector[N_1] r_1_1;
vector[N_1] r_1_2;
real lprior = 0; // prior contributions to the log posterior
// compute actual group-level effects
r_1 = scale_r_cor(z_1, sd_1, L_1);
r_1_1 = r_1[, 1];
r_1_2 = r_1[, 2];
lprior += student_t_lpdf(Intercept | 3, 32.2, 17.2);
lprior += student_t_lpdf(sigma | 3, 0, 17.2)
- 1 * student_t_lccdf(0 | 3, 0, 17.2);
lprior += student_t_lpdf(sd_1 | 3, 0, 17.2)
- 2 * student_t_lccdf(0 | 3, 0, 17.2);
lprior += lkj_corr_cholesky_lpdf(L_1 | 1);
}
model {
// likelihood including constants
if (!prior_only) {
// initialize linear predictor term
vector[N] mu = Intercept + rep_vector(0.0, N);
for (n in 1:N) {
// add more terms to the linear predictor
mu[n] += r_1_1[J_1[n]] * Z_1_1[n] + r_1_2[J_1[n]] * Z_1_2[n];
}
target += normal_id_glm_lpdf(Y | Xc, mu, b, sigma);
}
// priors including constants
target += lprior;
target += std_normal_lpdf(to_vector(z_1));
}
generated quantities {
// actual population-level intercept
real b_Intercept = Intercept - dot_product(means_X, b);
// compute group-level correlations
corr_matrix[M_1] Cor_1 = multiply_lower_tri_self_transpose(L_1);
vector<lower=-1,upper=1>[NC_1] cor_1;
// extract upper diagonal of correlation matrix
for (k in 1:M_1) {
for (j in 1:(k - 1)) {
cor_1[choose(k - 1, 2) + j] = Cor_1[j, k];
}
}
}
This is a correlated varying effects model. Take a look at the code here for model 14.7 and also chapter 14 of McElreath's Statistical Rethinking book, especially sections 14.1, 14.2, and 14.4.
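For reference, a rough and untested Turing sketch of that correlated varying intercept-and-slope structure, mirroring the z_1 / sd_1 / L_1 construction in the Stan code above (names and priors here are illustrative, not the brms defaults):
using Turing, LinearAlgebra, Statistics

@model function correlated_varying(X, idx, y, calls80)
    predictors = size(X, 2)
    n_gr = length(unique(idx))
    # population-level effects
    α ~ Normal(mean(y), 2.5 * std(y))
    β ~ filldist(Normal(0, 2), predictors)
    σ ~ Exponential(1 / std(y))
    # group-level SDs, correlation matrix, and standardized effects
    τ ~ filldist(truncated(Cauchy(0, 2); lower=0), 2) # SDs: intercept and slope
    Ω ~ LKJ(2, 1.0)                                   # 2×2 correlation matrix
    z ~ filldist(Normal(0, 1), 2, n_gr)               # like z_1 in the Stan code
    # scale and correlate, like scale_r_cor: r is n_gr × 2
    r = transpose(Diagonal(τ) * cholesky(Symmetric(Ω)).L * z)
    # varying intercept (column 1) plus varying slope on calls80 (column 2)
    ŷ = α .+ X * β .+ r[idx, 1] .+ r[idx, 2] .* calls80
    y ~ MvNormal(ŷ, σ^2 * I)
end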
Thanks @storopoli for these links. I will modify accordingly and retest
How many observations do you have @sreedta8 ? If you're working with a lot of them, you should do:
using ReverseDiff, Memoization
Turing.setadbackend(:reversediff)
Turing.setrdcache(true)
at the beginning of your code. This should be much faster than the default.
Also Stan only does reverse-mode autodiff.
@torfjelde My model data set has 1000 physicians, each having 12 months of data, so the file has 12,000 observations. My model is predicting new prescriptions as a function of 6 marketing channels as 6 fixed population effects. For one of the marketing channels (sales force calls), I'm assessing a random effect at the individual physician level. I will definitely test with the changes you recommended. Thanks for your help.
@storopoli thanks for the tip. I will test under the new commands and share what happens here.
With 12,000 observations I'm seeing ~500μs per gradient computation using ReverseDiff on my laptop, which in turn means that you're looking at ~500ms per NUTS iteration (in the worst-case scenario; I'd expect it to be much faster than this on average but your acceptance-target is very high which might lead to very conservative step size and thus saturation of the tree-depth on almost every iteration). For 1000 + 4000 iterations this should then upper-bound you at 2500s ~ 40mins.
EDIT: Some additional info.
julia> versioninfo()
Julia Version 1.8.0-rc1
Commit 6368fdc656 (2022-05-27 18:33 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: 12 × Intel(R) Core(TM) i7-10710U CPU @ 1.10GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-13.0.1 (ORCJIT, skylake)
Threads: 1 on 12 virtual cores
though stuff like Julia version shouldn't really matter here.
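If you want a comparable rough number on your own machine, one crude approach (a sketch only; adaptation overhead will skew it somewhat, and model_slope is the model from the earlier comment) is to time a short pilot run and extrapolate:
# pilot run: time a small number of iterations, then scale up
t = @elapsed sample(model_slope, NUTS(100, 0.98), 200; progress=false)
println("rough estimate for 1_000 + 4_000 iterations: ", t / 200 * 5_000, " s")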
@torfjelde In the pharma sector, some of the marketing channels tend to have high multi-collinearity, hence the use of a high target acceptance. I just finished my sampling run using 2 cores and 1 chain in approximately 9 minutes. Thank you so much for your suggestion!!! I will rerun now with an additional core (my laptop has 4 cores) and I will also run a model with an auto-regressive error term of lag 1 included. Now that I know the trick, I will be experimenting with different models and comparing brms, pymc, and Turing/TuringGLM.
@storopoli Thank you so much for your patient help throughout the process. The recommendation to use ReverseDiff should be part of the documentation for potential brms/R and pymc/Python users of Turing so that these missteps are avoided.
That's awesome! :) Glad to be of help. Worth noting that using 2 threads doesn't matter if you're just sampling 1 chain. MCMCThreads is effectively just calling sample separately on different threads, not using threading to speed up anything.
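Concretely, assuming Julia was started with several threads (e.g. julia --threads 4), something like:
# 4 chains, one per thread: MCMCThreads only parallelizes across chains
chains = sample(model_slope, NUTS(1_000, 0.98), MCMCThreads(), 3_000, 4)
# a single chain gains nothing from MCMCThreads; this does the same work
chain = sample(model_slope, NUTS(1_000, 0.98), 3_000)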
Also, usage of ReverseDiff is documented on the Turing website under the performance tips :)
thank you for both of the tips @torfjelde - you literally made my long weekend end on a bright note (today is July 4th, US Independence Day). The past few days have been quite crazy not knowing what I was doing incorrectly / inefficiently.
Great, I am glad that we got this sorted out. I will close this issue for now, but feel free to reopen if necessary.
PS: Thanks @torfjelde.
A bit late to the party but I think in addition to choosing a different AD backend there are some things one could improve in the suggested model here (not sure how and to what extent they will lead to additional performance improvements):
Have you tried coding it straight up using Turing? You could do something like:
@model function varying_slope_ncp(X, idx, y, calls80; n_gr=length(unique(idx)), predictors=size(X, 2))
    # priors
    α ~ Normal(mean(y), 2.5 * std(y))      # population-level intercept
    β ~ filldist(Normal(0, 2), predictors) # population-level coefficients
    σ ~ Exponential(1 / std(y))            # residual SD
    # prior for variance of random slopes
    # usually requires thoughtful specification
    τ ~ truncated(Cauchy(0, 2), 0, Inf)    # group-level SDs slopes
    zⱼ ~ filldist(Normal(0, 1), n_gr)      # NCP group-level slopes
    # likelihood
    ŷ = α .+ X * β .+ (zⱼ[idx] .* τ .* calls80)
    y ~ MvNormal(ŷ, σ)
end;
I would avoid any redundant information in the model arguments, since it means you could provide (explicitly, or implicitly without even realizing it) inconsistent arguments. E.g., I don't think there's a performance improvement from making predictors a keyword argument. The n_gr computation could be avoided and made more efficient by using CategoricalArrays. So I would suggest defining
@model function varying_slope(X, idx::CategoricalVector, y, calls80)
    predictors = size(X, 2)
    n_gr = length(levels(idx))
    ...
It seems quite inefficient to recompute std(y) inside of the model, and even more so in every step of the algorithm. The same goes for mean(y). So I would suggest either normalizing y before inference and defining
@model function varying_slope(X, idx::CategoricalVector, calls80)
    predictors = size(X, 2)
    n_gr = length(levels(idx))
    # priors
    α ~ Normal(0, 2.5)                     # population-level intercept
    β ~ filldist(Normal(0, 2), predictors) # population-level coefficients
    σ ~ Exponential(1)                     # residual SD
    ...
Instead of a @model with y, you could then only define a convenience function
function varying_slope(X, idx::CategoricalVector, y, calls80)
    # normalize y (could also just use `StatsBase.standardize(StatsBase.ZScoreTransform, y)`)
    m = mean(y)
    s = std(y; mean=m)
    z = (y .- m) ./ s
    return varying_slope(X, idx, calls80) | (; y=z)
end
This will perform the normalization of y only once and condition the model on the resulting value. Another advantage of this setup is that it is easy to sample from the prior and generate samples of y by sampling from the unconditioned model, without having to deal with the missing etc. mess (see the sketch after these two alternatives).
or offloading the computation of mean(y) and std(y) by adding them to the model arguments, e.g.
@model function varying_slope(X, idx::CategoricalVector, calls80; mean_y::Real=0, std_y::Real=1)
    predictors = size(X, 2)
    n_gr = length(levels(idx))
    # priors
    α ~ Normal(mean_y, 2.5 * std_y)        # population-level intercept
    β ~ filldist(Normal(0, 2), predictors) # population-level coefficients
    σ ~ Exponential(1 / std_y)             # residual SD
    ...
Again, y itself is omitted from the model arguments on purpose, to be able to generate samples of y from the prior easily (that's also why I added default values for mean_y and std_y in the sketch). Again, one could add a convenience method such as
function varying_slope(X, idx::CategoricalVector, y, calls80)
    m = mean(y)  # compute once; keyword arguments cannot reference each other
    return varying_slope(X, idx, calls80; mean_y=m, std_y=std(y; mean=m)) | (; y)
end
This would ensure that the computation of mean(y) and std(y) is only performed once, when the conditioned model is constructed. It would also still be possible to sample from the unconditioned model without missing etc., but in contrast to the alternative above, one has to be careful that mean_y and std_y are specified as desired.
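For example, with either alternative above (a sketch reusing the earlier names, with idx already a CategoricalVector), prior and posterior sampling would look like:
# unconditioned model: y is a random variable, so Prior() simulates it
prior_model = varying_slope(X, idx, calls80)
prior_chain = sample(prior_model, Prior(), 100)
# conditioned model for inference, via the convenience method
posterior_chain = sample(varying_slope(X, idx, y, calls80), NUTS(), 1_000)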
It should be better to use truncated(Cauchy(0, 2); lower=0), since this 1) avoids weird type promotions and 2) avoids AD issues (of e.g. ForwardDiff) with Inf (otherwise one sometimes has to use the so-called NaN-safe mode in ForwardDiff, which can be activated with Preferences, as explained in the ForwardDiff docs).
The MvNormal constructor in the example is deprecated. Instead one should use MvNormal(ŷ, σ^2 * I).
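Applied to the earlier sketch, those two points would read (with using LinearAlgebra for I):
τ ~ truncated(Cauchy(0, 2); lower=0) # instead of truncated(Cauchy(0, 2), 0, Inf)
y ~ MvNormal(ŷ, σ^2 * I)             # instead of the deprecated MvNormal(ŷ, σ)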
@devmotion this is extremely useful information for someone new to Julia and Turing like myself. These are some of the code efficiencies I had come to use in Stan and PyMC, but I'm a complete newbie with Julia. Thanks a ton for your detailed notes. I will specify alternate models directly in Turing and document the efficiency gains in due time.
If it is not confidential stuff you are working on, @sreedta8, it would be great to see the final results! Both code and some plots.
I always feel like the least intelligent person in the room around Turing devs and I love it. Thank you @devmotion. I always learn amazing things from you.
@ChristianMichelsen Yes I can definitely share both final code with plots.
@storopoli @torfjelde @devmotion I wanted to apply the ReverseDiff backend when using TuringGLM as well. I get an error which I did not get when I was not using ReverseDiff. Here is the code and the error:
using CSV, DataFrames, Turing, TuringGLM, StatsPlots, ArviZ
using ReverseDiff, Memoization
Turing.setadbackend(:reversediff)
Turing.setrdcache(true)
hcp = CSV.read("/home/sreedta/Documents/bayes/hcp_julia_1000.csv", DataFrame)
hcpdata = hcp[:, ["mdmidc", "nrx", "calls80", "samples10", "lle50", "spe80", "copay10", "fto10"]]
jtgmod1 = @formula(nrx ~ calls80 + samples10 + lle50 + spe80 + copay10 + fto10 + (calls80 | mdmidc))
model = turing_model(jtgmod1, hcpdata);
chn = sample(model, NUTS(1_000, 0.98), 4_000, progress=true);
Here the error occurs; I did not see this before.
Error Output:
UndefVarError: τ not defined
Stacktrace:
[1] macro expansion
@ ~/.julia/packages/DynamicPPL/R7VK9/src/compiler.jl:539 [inlined]
[2] (::TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}})(__model__::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.UntypedVarInfo{DynamicPPL.Metadata{Dict{AbstractPPL.VarName, Int64}, Vector{Distribution}, Vector{AbstractPPL.VarName}, Vector{Real}, Vector{Set{DynamicPPL.Selector}}}, Float64}, __context__::DynamicPPL.SamplingContext{DynamicPPL.SampleFromUniform, DynamicPPL.DefaultContext, Random._GLOBAL_RNG}, y::Vector{Float64}, X::Matrix{Float64}, predictors::Int64, idxs::Vector{Int64}, n_gr::Int64, intercept_ranef::Vector{String}, μ_X::Int64, σ_X::Int64, prior::CustomPrior, residual::Float64)
@ TuringGLM ~/.julia/packages/TuringGLM/s2Pou/src/turing_model.jl:190
[3] macro expansion
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:493 [inlined]
[4] _evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:476 [inlined]
[5] evaluate_threadunsafe!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:451 [inlined]
[6] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:404 [inlined]
[7] evaluate!!(model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, rng::Random._GLOBAL_RNG, varinfo::DynamicPPL.UntypedVarInfo{DynamicPPL.Metadata{Dict{AbstractPPL.VarName, Int64}, Vector{Distribution}, Vector{AbstractPPL.VarName}, Vector{Real}, Vector{Set{DynamicPPL.Selector}}}, Float64}, sampler::DynamicPPL.SampleFromUniform, context::DynamicPPL.DefaultContext)
@ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:415
[8] (::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext})(::Random._GLOBAL_RNG, ::Vararg{Any})
@ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:375
[9] VarInfo
@ ~/.julia/packages/DynamicPPL/R7VK9/src/varinfo.jl:127 [inlined]
[10] VarInfo
@ ~/.julia/packages/DynamicPPL/R7VK9/src/varinfo.jl:126 [inlined]
[11] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/sampler.jl:86
[12] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:120 [inlined]
[13] macro expansion
@ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
[14] (::AbstractMCMC.var"#21#22"{Bool, String, Nothing, Int64, Int64, Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}}, Random._GLOBAL_RNG, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, Int64, Int64})()
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:12
[15] with_logstate(f::Function, logstate::Any)
@ Base.CoreLogging ./logging.jl:511
[16] with_logger(f::Function, logger::LoggingExtras.TeeLogger{Tuple{LoggingExtras.EarlyFilteredLogger{ConsoleProgressMonitor.ProgressLogger, AbstractMCMC.var"#1#3"{Module}}, LoggingExtras.EarlyFilteredLogger{Base.CoreLogging.SimpleLogger, AbstractMCMC.var"#2#4"{Module}}}})
@ Base.CoreLogging ./logging.jl:623
[17] with_progresslogger(f::Function, _module::Module, logger::Base.CoreLogging.SimpleLogger)
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:36
[18] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:11 [inlined]
[19] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:111
[20] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:133
[21] #sample#2
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:145 [inlined]
[22] #sample#1
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:135 [inlined]
[23] top-level scope
@ In[98]:1
[24] eval
@ ./boot.jl:373 [inlined]
[25] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1196
> @sreedta8 TuringGLM.jl currently only supports single random-intercept models, i.e. (1 | grouping_variable).
@sreedta8, TuringGLM.jl currently does not support random-slope models, i.e. (somevar | grouping_var). Just one single random-intercept, i.e. (1 | grouping_var).
@storopoli I'm getting another error - see the model and the error:
jtgmod2 = @formula(nrx ~ calls80 + (1 | mdmidc))
model2 = turing_model(jtgmod2, hcpdata);
chn = sample(model2, NUTS(1_000, 0.98), 4_000, progress=true);
*See the error below*
TrackedArrays do not support setindex!
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:33
[2] setindex!(::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, ::ReverseDiff.TrackedReal{Float64, Float64, Nothing}, ::Int64)
@ ReverseDiff ~/.julia/packages/ReverseDiff/5MMPp/src/tracked.jl:378
[3] macro expansion
@ ./broadcast.jl:961 [inlined]
[4] macro expansion
@ ./simdloop.jl:77 [inlined]
[5] copyto!
@ ./broadcast.jl:960 [inlined]
[6] copyto!
@ ./broadcast.jl:913 [inlined]
[7] materialize!
@ ./broadcast.jl:871 [inlined]
[8] materialize!(dest::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, bc::Base.Broadcast.Broadcasted{ReverseDiff.TrackedStyle, Nothing, typeof(+), Tuple{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}})
@ Base.Broadcast ./broadcast.jl:868
[9] (::TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}})(__model__::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}}}, ReverseDiff.TrackedReal{Float64, Float64, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}}, __context__::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random._GLOBAL_RNG}, y::Vector{Float64}, X::Matrix{Float64}, predictors::Int64, idxs::Vector{Int64}, n_gr::Int64, intercept_ranef::Vector{String}, μ_X::Int64, σ_X::Int64, prior::CustomPrior, residual::Float64)
@ TuringGLM ~/.julia/packages/TuringGLM/s2Pou/src/turing_model.jl:186
[10] macro expansion
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:493 [inlined]
[11] _evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:476 [inlined]
[12] evaluate_threadunsafe!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:451 [inlined]
[13] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:404 [inlined]
[14] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:415 [inlined]
[15] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:423 [inlined]
[16] (::Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext})(θ::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}})
@ Turing ~/.julia/packages/Turing/GxgQ1/src/Turing.jl:37
[17] GradientTape
@ ~/.julia/packages/ReverseDiff/5MMPp/src/api/tape.jl:199 [inlined]
[18] ReverseDiff.GradientTape(f::Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, input::Vector{Float64})
@ ReverseDiff ~/.julia/packages/ReverseDiff/5MMPp/src/api/tape.jl:198
[19] compiledtape
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:79 [inlined]
[20] macro expansion
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:77 [inlined]
[21] (::Turing.Essential.var"##getter#577#25"{Turing.Essential.RDTapeKey{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Vector{Float64}}})()
@ Turing.Essential ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:163
[22] get!(default::Turing.Essential.var"##getter#577#25"{Turing.Essential.RDTapeKey{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Vector{Float64}}}, h::Dict{Any, Any}, key::Tuple{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, 
Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, DataType, Tuple{Int64}, Int64})
@ Base ./dict.jl:464
[23] _get!
@ ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:170 [inlined]
[24] _get!
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:73 [inlined]
[25] memoized_taperesult
@ ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:165 [inlined]
[26] memoized_taperesult
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:75 [inlined]
[27] gradient_logp(backend::Turing.Essential.ReverseDiffAD{true}, θ::Vector{Float64}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, context::DynamicPPL.DefaultContext)
@ Turing.Essential ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:58
[28] gradient_logp (repeats 2 times)
@ ~/.julia/packages/Turing/GxgQ1/src/essential/ad.jl:88 [inlined]
[29] ∂logπ∂θ
@ ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:433 [inlined]
[30] ∂H∂θ
@ ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:31 [inlined]
[31] phasepoint
@ ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:76 [inlined]
[32] phasepoint(rng::Random._GLOBAL_RNG, θ::Vector{Float64}, h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Turing.Inference.var"#∂logπ∂θ#53"{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, 
Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}}})
@ AdvancedHMC ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:153
[33] initialstep(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}; init_params::Nothing, nadapts::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:167
[34] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/sampler.jl:104
[35] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:120 [inlined]
[36] macro expansion
@ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
[37] (::AbstractMCMC.var"#21#22"{Bool, String, Nothing, Int64, Int64, Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}}, Random._GLOBAL_RNG, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, Int64, Int64})()
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:12
[38] with_logstate(f::Function, logstate::Any)
@ Base.CoreLogging ./logging.jl:511
[39] with_logger(f::Function, logger::LoggingExtras.TeeLogger{Tuple{LoggingExtras.EarlyFilteredLogger{ConsoleProgressMonitor.ProgressLogger, AbstractMCMC.var"#1#3"{Module}}, LoggingExtras.EarlyFilteredLogger{Base.CoreLogging.SimpleLogger, AbstractMCMC.var"#2#4"{Module}}}})
@ Base.CoreLogging ./logging.jl:623
[40] with_progresslogger(f::Function, _module::Module, logger::Base.CoreLogging.SimpleLogger)
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:36
[41] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:11 [inlined]
[42] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:111
[43] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:133
[44] #sample#2
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:145 [inlined]
[45] #sample#1
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:135 [inlined]
[46] top-level scope
@ In[123]:1
[47] eval
@ ./boot.jl:373 [inlined]
[48] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1196
Please post the eltype of the columns inside the formula, i.e. :nrx, :calls80, and :mdmidc. I think :mdmidc should be a String or CategoricalArray.
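A minimal sketch of how to check this, assuming hcpdata is the DataFrame used later in this thread (note it is the column's eltype, not typeof of a literal, that matters here):
using DataFrames

eltype(hcpdata.nrx)       # expected Float64
eltype(hcpdata.calls80)   # expected Float64
eltype(hcpdata.mdmidc)    # should be String (or a CategoricalValue with CategoricalArrays.jl)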
@storopoli the mdmidc variable is indeed a String type. See below:
typeof("mdmidc")
String
nrx is Float64, as is calls80.
I did not change the model or the nature of the variables in the dataset; the error just came out of nowhere. The only difference is that I had installed ReverseDiff and Memoization.
I have updated my project, so everything is up to date.
Oh, you are using reverse-mode differentiation? Try the model with the default AD backend, :forwarddiff.
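For the Turing version shown in these logs, the AD backend is set globally; a minimal sketch of both configurations (the setrdcache call assumes the ReverseDiff + Memoization setup the user installed; newer Turing releases configure AD differently):
using Turing
Turing.setadbackend(:forwarddiff)   # the default backend

# the reverse-mode setup the user had been running:
using ReverseDiff, Memoization
Turing.setadbackend(:reversediff)
Turing.setrdcache(true)             # cache compiled tapes via Memoization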
@storopoli thanks for the tip. Here is the code I'm testing:
jtgmod2 = @formula(nrx ~ calls80 + (1 | mdmidc))
model2 = turing_model(jtgmod2, hcpdata);
chn = sample(model2, NUTS(1_000, 0.65), 4_000, progress=true);
This model is taking longer to identify the step size than when I used ReverseDiff. Why is ForwardDiff so much slower than ReverseDiff?
Update 1:
It has been 20 minutes since I ran the chn = ... command and the step size still has not been identified. By this time, sampling with ReverseDiff had already completed. Something is inefficient about ForwardDiff compared to ReverseDiff.
Update 2:
┌ Info: Found initial step size
│ ϵ = 0.00078125
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 6%|██▎ | ETA: 12:55:01
Update 3:
┌ Info: Found initial step size
│ ϵ = 0.00078125
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 18%|███████▋ | ETA: 9:21:09m
Final Update: TuringGLM with ForwardDiff took 7 hours to sample. Can TuringGLM be set up to work with ReverseDiff?
┌ Info: Found initial step size
│ ϵ = 0.00078125
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 100%|█████████████████████████████████████████| Time: 7:00:22m
Because of the dimensionality of the outputs versus the inputs. Reverse mode works best when the dimensionality of the inputs is greater than that of the outputs (that is why it is the default in neural nets).
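An HMC log density is exactly that case: one scalar output and as many inputs as there are parameters, so reverse mode gets the whole gradient in a single backward sweep, while forward mode needs on the order of one pass per input. A toy sketch of the asymmetry (illustrative only, not the user's model):
using ForwardDiff, ReverseDiff

f(x) = sum(abs2, x)   # one scalar output, length(x) inputs, like a log density
x = randn(1_000)

g_fwd = ForwardDiff.gradient(f, x)   # ~length(x)/chunksize forward passes
g_rev = ReverseDiff.gradient(f, x)   # one forward sweep + one reverse sweep
g_fwd ≈ g_rev                        # true: same gradient, very different cost scaling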
Why should the results look so different between Turing.jl with RD and TuringGLM with FD? Isn't the underlying model the same in both Turing and TuringGLM?
@storopoli Please see below:
Final Update: TuringGLM with ForwardDiff took 7 hours to sample. Can TuringGLM be set up to work with ReverseDiff?
┌ Info: Found initial step size
│ ϵ = 0.00078125
└ @ Turing.Inference /home/sreedta/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:188
Sampling: 100%|█████████████████████████████████████████| Time: 7:00:22m
It is how the model was coded. TuringGLM does nothing under the hood: it simply parses your formula and returns a Turing model. No fancy shenanigans. It "cannot support anything" on its own, because what it returns is a Turing model.
TuringGLM is made to get people into Turing, and it fulfilled that purpose marvelously with you :). Turing is not that hard when compared to Stan or PyMC.
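Concretely, "returns a Turing model" can be checked directly; a hedged sketch (the types below can be read off the stack traces in this thread):
using TuringGLM, DynamicPPL

m = turing_model(@formula(nrx ~ calls80 + (1 | mdmidc)), hcpdata)
m isa DynamicPPL.Model   # true: it samples like any hand-written Turing model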
@storopoli If TuringGLM "simply parses your formula and returns a Turing model", then shouldn't a TuringGLM model work with ReverseDiff? This is the part I'm struggling to understand: I can test a model with Turing and RD (ReverseDiff), but the same will not work with TuringGLM and RD. If you can elaborate on that, it would be great. As an end-user, I would like to see TuringGLM work with ReverseDiff.
I agree that having TuringGLM is very helpful, and from a specification point of view, Turing is as straightforward as Stan and PyMC; it is only a matter of familiarity with Julia and Turing. I love the fact that Turing sampling with ReverseDiff is faster than both Stan and PyMC.
@storopoli I get the following error when I combine TuringGLM with ReverseDiff. See below:
Model tested:
jtgmod2 = @formula(nrx ~ calls80 + (1 | mdmidc))
model2 = turing_model(jtgmod2, hcpdata);
chn = sample(model2, NUTS(), 2_000, progress=true);
Error (my question is: why can't TuringGLM and ReverseDiff work together? This error goes away if I use ForwardDiff, but then it takes 8 hours to sample, which is not ideal at all):
TrackedArrays do not support setindex!
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:33
[2] setindex!(::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, ::ReverseDiff.TrackedReal{Float64, Float64, Nothing}, ::Int64)
@ ReverseDiff ~/.julia/packages/ReverseDiff/5MMPp/src/tracked.jl:378
[3] macro expansion
@ ./broadcast.jl:961 [inlined]
[4] macro expansion
@ ./simdloop.jl:77 [inlined]
[5] copyto!
@ ./broadcast.jl:960 [inlined]
[6] copyto!
@ ./broadcast.jl:913 [inlined]
[7] materialize!
@ ./broadcast.jl:871 [inlined]
[8] materialize!(dest::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, bc::Base.Broadcast.Broadcasted{ReverseDiff.TrackedStyle, Nothing, typeof(+), Tuple{ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}})
@ Base.Broadcast ./broadcast.jl:868
[9] (::TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}})(__model__::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, __varinfo__::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}, Vector{Set{DynamicPPL.Selector}}}}}, ReverseDiff.TrackedReal{Float64, Float64, ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}}}}, __context__::DynamicPPL.SamplingContext{DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext, Random._GLOBAL_RNG}, y::Vector{Float64}, X::Matrix{Float64}, predictors::Int64, idxs::Vector{Int64}, n_gr::Int64, intercept_ranef::Vector{String}, μ_X::Int64, σ_X::Int64, prior::CustomPrior, residual::Float64)
@ TuringGLM ~/.julia/packages/TuringGLM/s2Pou/src/turing_model.jl:186
[10] macro expansion
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:493 [inlined]
[11] _evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:476 [inlined]
[12] evaluate_threadunsafe!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:451 [inlined]
[13] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:404 [inlined]
[14] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:415 [inlined]
[15] evaluate!!
@ ~/.julia/packages/DynamicPPL/R7VK9/src/model.jl:423 [inlined]
[16] (::Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext})(θ::ReverseDiff.TrackedArray{Float64, Float64, 1, Vector{Float64}, Vector{Float64}})
@ Turing ~/.julia/packages/Turing/GxgQ1/src/Turing.jl:37
[17] GradientTape
@ ~/.julia/packages/ReverseDiff/5MMPp/src/api/tape.jl:199 [inlined]
[18] ReverseDiff.GradientTape(f::Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, input::Vector{Float64})
@ ReverseDiff ~/.julia/packages/ReverseDiff/5MMPp/src/api/tape.jl:198
[19] compiledtape
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:79 [inlined]
[20] macro expansion
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:77 [inlined]
[21] (::Turing.Essential.var"##getter#463#25"{Turing.Essential.RDTapeKey{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Vector{Float64}}})()
@ Turing.Essential ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:163
[22] get!(default::Turing.Essential.var"##getter#463#25"{Turing.Essential.RDTapeKey{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Vector{Float64}}}, h::Dict{Any, Any}, key::Tuple{Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, 
Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, DataType, Tuple{Int64}, Int64})
@ Base ./dict.jl:464
[23] _get!
@ ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:170 [inlined]
[24] _get!
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:73 [inlined]
[25] memoized_taperesult
@ ~/.julia/packages/Memoization/ut5GT/src/Memoization.jl:165 [inlined]
[26] memoized_taperesult
@ ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:75 [inlined]
[27] gradient_logp(backend::Turing.Essential.ReverseDiffAD{true}, θ::Vector{Float64}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, context::DynamicPPL.DefaultContext)
@ Turing.Essential ~/.julia/packages/Turing/GxgQ1/src/essential/compat/reversediff.jl:58
[28] gradient_logp (repeats 2 times)
@ ~/.julia/packages/Turing/GxgQ1/src/essential/ad.jl:88 [inlined]
[29] ∂logπ∂θ
@ ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:433 [inlined]
[30] ∂H∂θ
@ ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:31 [inlined]
[31] phasepoint
@ ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:76 [inlined]
[32] phasepoint(rng::Random._GLOBAL_RNG, θ::Vector{Float64}, h::AdvancedHMC.Hamiltonian{AdvancedHMC.DiagEuclideanMetric{Float64, Vector{Float64}}, Turing.LogDensityFunction{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.DefaultContext}, Turing.Inference.var"#∂logπ∂θ#53"{DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, 
Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}}})
@ AdvancedHMC ~/.julia/packages/AdvancedHMC/51xgc/src/hamiltonian.jl:153
[33] initialstep(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, vi::DynamicPPL.TypedVarInfo{NamedTuple{(:α, :β, :σ, :τ, :zⱼ), Tuple{DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:α, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, TDist{Float64}}}, Vector{AbstractPPL.VarName{:α, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:β, Setfield.IdentityLens}, Int64}, Vector{Product{Continuous, TDist{Float64}, FillArrays.Fill{TDist{Float64}, 1, Tuple{Base.OneTo{Int64}}}}}, Vector{AbstractPPL.VarName{:β, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:σ, Setfield.IdentityLens}, Int64}, Vector{Exponential{Float64}}, Vector{AbstractPPL.VarName{:σ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:τ, Setfield.IdentityLens}, Int64}, Vector{LocationScale{Float64, Continuous, Truncated{TDist{Float64}, Continuous, Float64}}}, Vector{AbstractPPL.VarName{:τ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}, DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}, Int64}, Vector{DistributionsAD.TuringScalMvNormal{Vector{Float64}, Float64}}, Vector{AbstractPPL.VarName{:zⱼ, Setfield.IdentityLens}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}}, Float64}; init_params::Nothing, nadapts::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:167
[34] step(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, spl::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}; resume_from::Nothing, init_params::Nothing, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ DynamicPPL ~/.julia/packages/DynamicPPL/R7VK9/src/sampler.jl:104
[35] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:120 [inlined]
[36] macro expansion
@ ~/.julia/packages/ProgressLogging/6KXlp/src/ProgressLogging.jl:328 [inlined]
[37] (::AbstractMCMC.var"#21#22"{Bool, String, Nothing, Int64, Int64, Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}}, Random._GLOBAL_RNG, DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, Int64, Int64})()
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:12
[38] with_logstate(f::Function, logstate::Any)
@ Base.CoreLogging ./logging.jl:511
[39] with_logger(f::Function, logger::LoggingExtras.TeeLogger{Tuple{LoggingExtras.EarlyFilteredLogger{ConsoleProgressMonitor.ProgressLogger, AbstractMCMC.var"#1#3"{Module}}, LoggingExtras.EarlyFilteredLogger{Base.CoreLogging.SimpleLogger, AbstractMCMC.var"#2#4"{Module}}}})
@ Base.CoreLogging ./logging.jl:623
[40] with_progresslogger(f::Function, _module::Module, logger::Base.CoreLogging.SimpleLogger)
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:36
[41] macro expansion
@ ~/.julia/packages/AbstractMCMC/fnRmh/src/logging.jl:11 [inlined]
[42] mcmcsample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; progress::Bool, progressname::String, callback::Nothing, discard_initial::Int64, thinning::Int64, chain_type::Type, kwargs::Base.Pairs{Symbol, Int64, Tuple{Symbol}, NamedTuple{(:nadapts,), Tuple{Int64}}})
@ AbstractMCMC ~/.julia/packages/AbstractMCMC/fnRmh/src/sample.jl:111
[43] sample(rng::Random._GLOBAL_RNG, model::DynamicPPL.Model{TuringGLM.var"#normal_model_ranef#16"{Int64, Int64, CustomPrior, Vector{String}, Int64, Vector{Int64}}, (:y, :X, :predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (:predictors, :idxs, :n_gr, :intercept_ranef, :μ_X, :σ_X, :prior, :residual), (), Tuple{Vector{Float64}, Matrix{Float64}, Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, Tuple{Int64, Vector{Int64}, Int64, Vector{String}, Int64, Int64, CustomPrior, Float64}, DynamicPPL.DefaultContext}, sampler::DynamicPPL.Sampler{NUTS{Turing.Essential.ReverseDiffAD{true}, (), AdvancedHMC.DiagEuclideanMetric}}, N::Int64; chain_type::Type, resume_from::Nothing, progress::Bool, nadapts::Int64, discard_adapt::Bool, discard_initial::Int64, kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Turing.Inference ~/.julia/packages/Turing/GxgQ1/src/inference/hmc.jl:133
[44] #sample#2
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:145 [inlined]
[45] #sample#1
@ ~/.julia/packages/Turing/GxgQ1/src/inference/Inference.jl:135 [inlined]
[46] top-level scope
@ In[23]:1
[47] eval
@ ./boot.jl:373 [inlined]
[48] include_string(mapexpr::typeof(REPL.softscope), mod::Module, code::String, filename::String)
@ Base ./loading.jl:1196
It is how the model was coded. Check the source code for the creation of the Gaussian likelihood (linear regression) model: https://github.com/TuringLang/TuringGLM.jl/blob/main/src/turing_model.jl#L162-L203
This might be the issue behind the failure with the ReverseDiff AD backend.
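The materialize!/setindex! frames in the stack trace above point at an in-place broadcast onto a tracked array. A minimal sketch of that failure mode and the out-of-place alternative (illustrative only, not the exact TuringGLM source):
using ReverseDiff

function inplace_version(β)
    μ = 2 .* β     # μ becomes a TrackedArray once β is tracked
    μ .+= β        # in-place broadcast calls setindex! on the TrackedArray
    return sum(μ)
end

function outofplace_version(β)
    μ = 2 .* β
    μ = μ .+ β     # allocates a new tracked array instead of mutating
    return sum(μ)
end

β = randn(3)
ReverseDiff.gradient(outofplace_version, β)   # works
# ReverseDiff.gradient(inplace_version, β)    # ERROR: TrackedArrays do not support setindex!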
@storopoli Is it possible to change the way the Gaussian likelihood is calculated to make it compatible with ReverseDiff?
I understand that this could be substantial work, so the request may not be feasible. If that is the case, I will work with Turing, since it gives me greater flexibility and power anyway, with the ability to fit random slopes instead of just the random intercepts that TuringGLM supports.
Thanks again for your help and for being willing to engage in these detailed conversations with me.
Yes, please open an issue in the TuringGLM repository. I will try to implement it after JuliaCon.
@storopoli I will open an issue without fail. Please let me know if I can help in any way with testing the rewrite. I'm not a programmer, but I love to help in any way I can.
Hi,
I'm running a hierarchical model using TuringGLM. I had previously run the same model using brms (rstan) and bambi (pymc). This data set consists of 12 months of prescribing from 1000 physicians. The specific columns include the number of new Rx they have written for a drug, along with adstocks computed for each physician on six marketing variables.
Model tested:
nrx ~ calls80 + samples10 + lle50 + spe80 + copay10 + fto10 + (calls80 | mdmidc)
6 population-level fixed effects and 1 random effect for the group variable mdmidc (random effects at the individual HCP level).
Command used to sample is
chn = sample(model, NUTS(), 3_000, progress=true)
How do I specify target acceptance in the above command? How do I specify the adaptation steps in the above command?
Here are the specifications for the brms and bambi runs:
Time to complete the sampling process: