Open rmcminds opened 5 years ago
Could still avoid the Kronecker product by either (1) iterating across one of the dimensions (not sure about the speed tradeoff) or (2) adapting the existing bernoulli_logit_glm_lpmf.hpp code into a custom version that allows matrix multiplication on both dimensions. My first stabs proved I don't understand C++ well enough, though. Also, I believe this likelihood calculation is not currently the biggest bottleneck in my model; the matrix multiplications in the transformed parameters block are. So this could be helpful but might not make a big difference at this point.
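For reference, the identity that makes the two-step replacement for the Kronecker product work is vec(A X Bᵀ) = (B ⊗ A) vec(X), with column-major vectorization. A minimal NumPy sketch (illustrative only, not the model code; the matrix names are made up here):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))   # factor acting on the row dimension
B = rng.normal(size=(5, 2))   # factor acting on the column dimension
X = rng.normal(size=(4, 2))   # parameter matrix

# One-step: explicitly build the (15 x 8) Kronecker product.
kron_way = np.kron(B, A) @ X.flatten(order="F")   # "F" = column-major vec

# Two-step: multiply on each dimension separately, never forming kron.
two_step = (A @ X @ B.T).flatten(order="F")

assert np.allclose(kron_way, two_step)
```

The two-step form avoids materializing the (often huge) Kronecker matrix, which is why it can be faster despite doing two multiplications.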
Stan 2.19 introduced functions that calculate the log likelihood for GLMs directly, combining the likelihood step with the previously separate matrix multiplication step. The calculations can even be passed to the GPU for possibly drastic speedups (~30x??). Using these functions will require me to go back to the original Kronecker product implementation rather than my current 2-step matrix multiplication on two dimensions. Shouldn't be too hard; I only need to make sure the matrix-to-vector conversions match the ordering of the Kronecker product (which I can easily precalculate in R).
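On that ordering point: whether (B ⊗ A) or (A ⊗ B) is the right Kronecker factor order depends on whether the matrix is flattened column-major (as R's matrix-to-vector coercion does) or row-major. A hedged sketch of the pitfall, in NumPy (matrix names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(4, 4))
X = rng.normal(size=(3, 4))

Y = A @ X @ B.T

# Column-major vec (order="F", matching R's as.vector on a matrix):
# the factor order is (B kron A).
assert np.allclose(np.kron(B, A) @ X.flatten(order="F"), Y.flatten(order="F"))

# Row-major vec (order="C"): the factors swap to (A kron B).
assert np.allclose(np.kron(A, B) @ X.flatten(order="C"), Y.flatten(order="C"))

# Mixing the conventions silently gives a scrambled, wrong answer.
assert not np.allclose(np.kron(B, A) @ X.flatten(order="C"), Y.flatten(order="C"))
```

So as long as the flattening done in R and the Kronecker factor order passed to the GLM function use the same convention, the precalculation should drop in cleanly.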