DominiqueMakowski closed this 3 months ago
Hi guys -
Thanks this is really cool. Really happy to see the model in Julia!
With the caveat that I haven't used Julia, it generally looks good to me. A few thoughts:
```stan
real induced_dirichlet_lpdf(vector c, vector alpha, real phi) {
  int K = num_elements(c) + 1;
  vector[K - 1] sigma = inv_logit(phi - c);
  vector[K] p;
  matrix[K, K] J = rep_matrix(0, K, K);

  // Induced ordinal probabilities
  p[1] = 1 - sigma[1];
  for (k in 2:(K - 1))
    p[k] = sigma[k - 1] - sigma[k];
  p[K] = sigma[K - 1];

  // Baseline column of Jacobian
  for (k in 1:K) J[k, 1] = 1;

  // Diagonal entries of Jacobian
  for (k in 2:K) {
    real rho = sigma[k - 1] * (1 - sigma[k - 1]);
    J[k, k] = -rho;
    J[k - 1, k] = rho;
  }

  return dirichlet_lpdf(p | alpha) + log_determinant(J);
}
```
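For cross-checking, the same computation can be sketched in Python (an illustrative translation, not part of either package; `scipy`'s `expit` and `dirichlet` stand in for the Stan built-ins):

```python
import numpy as np
from scipy.special import expit
from scipy.stats import dirichlet

def induced_dirichlet_lpdf(c, alpha, phi):
    """Sketch of the induced Dirichlet prior density on ordinal cutpoints c.

    Mirrors the Stan function above: map the cutpoints to the implied
    ordinal probabilities p, then add the log-Jacobian of that map.
    """
    c = np.asarray(c, dtype=float)
    K = c.size + 1
    sigma = expit(phi - c)  # K-1 cumulative "survivor" probabilities

    # Induced ordinal probabilities
    p = np.empty(K)
    p[0] = 1 - sigma[0]
    p[1:K - 1] = sigma[:-1] - sigma[1:]
    p[K - 1] = sigma[K - 2]

    # Jacobian: baseline column of ones, then banded sigma-derivative terms
    J = np.zeros((K, K))
    J[:, 0] = 1.0
    for k in range(1, K):
        rho = sigma[k - 1] * (1 - sigma[k - 1])
        J[k, k] = -rho
        J[k - 1, k] = rho

    _, logabsdet = np.linalg.slogdet(J)
    return dirichlet.logpdf(p, alpha) + logabsdet
```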
`ordbetareg` defines the ordered vector of cutpoints as the thresholds, using a manual transformation (i.e., `cut1 + exp(cut2)`), because `brms` does not allow custom families to access ordered vectors. So that might be the difference in how the cutpoints are being reported. Generally speaking, if Turing can directly define an ordered vector for the cutpoints, you should just do that and avoid the manual transformation. I would also compare against the `dordbeta` function in the `ordbetareg` package, as that is the log-likelihood in R. That will allow you to test that component directly, and it should be identical up to some floating-point/algorithmic differences. There's also a random number generator, `rordbeta`, that you can check against your RNG. If the log-likelihood is correct, then everything else should just be priors (or possibly a different parameterization of the beta distribution, etc.). I hope that is helpful. I agree that the parameters should be pretty close, especially if you are sampling using MCMC/NUTS. The ordered beta distribution is relatively simple, so apart from differences in priors it shouldn't vary a ton across estimators, and I would want to see it pretty close before releasing to the broader public.
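To make that check concrete, here is a minimal Python sketch of the ordered beta log-density in the spirit of `dordbeta` (illustrative only, not the package source; the `cut1 + exp(cut2)` cutpoint transform follows the discussion above, and the continuous part uses the mean/precision beta parameterization):

```python
import numpy as np
from scipy.special import expit
from scipy.stats import beta as beta_dist

def ordbeta_logpdf(y, mu_logit, phi, cut1, cut2_raw):
    """Sketch of the ordered beta log-density on y in [0, 1].

    Cutpoints are ordered via the manual transform discussed above:
    k1 = cut1, k2 = cut1 + exp(cut2_raw), guaranteeing k2 > k1.
    """
    k1 = cut1
    k2 = cut1 + np.exp(cut2_raw)

    p_zero = 1.0 - expit(mu_logit - k1)                   # P(y == 0)
    p_one = expit(mu_logit - k2)                          # P(y == 1)
    p_mid = expit(mu_logit - k1) - expit(mu_logit - k2)   # P(0 < y < 1)

    if y == 0:
        return np.log(p_zero)
    if y == 1:
        return np.log(p_one)
    # Continuous part: beta with mean mu and precision phi
    mu = expit(mu_logit)
    a, b = mu * phi, (1 - mu) * phi
    return np.log(p_mid) + beta_dist.logpdf(y, a, b)
```

A useful sanity check is that the two point masses plus the integral of the continuous part sum to one.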
Thanks a lot @saudiwin, your comment was very helpful ☺️
I have now successfully (i think) implemented Ordered Beta (see its documentation) and validated it against the ordbetareg package and its pdf function.
The main difference from the R implementation is that:
Note that, from my quick tests, it looks much faster in Julia than in Stan, which is quite surprising (but promising!).
I'm quite excited. Looking forward to using it for my research!
Thanks again, and don't hesitate to let me know if there are any issues or improvements you can think of.
Hi @saudiwin
I am trying to implement the ordbeta model in Julia, which requires the implementation of the logpdf of the distribution.
This required me to adapt the formula so that μ is set on the raw 0-1 scale rather than the logit scale (the logit parametrization is applied later, in the model itself).
With the help of @kiante-fernandez and ChatGPT, we came up with this:
https://github.com/DominiqueMakowski/SubjectiveScalesModels.jl/blob/825f3fc089e64361e72e1a336003ea78320dc496/src/OrderedBeta.jl#L128-L142
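For reference, that adaptation can be sketched in Python as follows (a hypothetical helper, not the actual Julia code at the link): μ stays on the raw (0, 1) scale and is mapped through the logit only where it is compared against the cutpoints.

```python
import numpy as np
from scipy.special import logit, expit
from scipy.stats import beta as beta_dist

def ordbeta_logpdf_raw_mu(y, mu, phi, k1, k2):
    """Sketch of an ordered-beta logpdf with mu on the raw (0, 1) scale.

    Hypothetical helper: the logit is applied inside the density rather
    than in the model, so the cutpoints k1 < k2 live on the logit scale.
    """
    eta = logit(mu)  # move mu back to the logit scale for the cutpoints
    if y == 0:
        return np.log(1.0 - expit(eta - k1))
    if y == 1:
        return np.log(expit(eta - k2))
    p_mid = expit(eta - k1) - expit(eta - k2)
    # Continuous part uses mu directly in the mean/precision parameterization
    return np.log(p_mid) + beta_dist.logpdf(y, mu * phi, (1 - mu) * phi)
```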
While this seems to work, the parameter space for the cutpoints looks a bit odd (though I'm not sure about that); see the figure in https://dominiquemakowski.github.io/SubjectiveScalesModels.jl/dev/OrderedBeta/#Function
Moreover, the estimates that I get for the cutpoints are slightly off compared to the R `ordbetareg` implementation (though mu is recovered very nicely): https://dominiquemakowski.github.io/SubjectiveScalesModels.jl/dev/OrderedBeta/#Validation-against-Original-Implementation

Could you take a look at the logpdf formula above and let me know if any mistake stands out to you? Thanks a ton!