mhollanders opened this issue 4 months ago
Thanks for reporting this, that's strange. I'm glad at least loo::loo(fit_mvn$draws("log_lik")) works, so this doesn't prevent you from using loo. I'm not sure where that error is coming from. I don't think loo or cmdstanr has a line of code containing if (varx == 0), so I'm guessing this is coming from another package that's being used internally? Do you by any chance have the traceback() available so we can see more detail about the source of the error message?
Hey, thanks for the help. traceback() gives me the following:
> fit$loo()
Error in if (varx == 0) { : missing value where TRUE/FALSE needed
> traceback()
8: posterior::autocovariance(sims[, i])
7: FUN(X[[i]], ...)
6: lapply(1:chains, FUN = function(i) posterior::autocovariance(sims[,
i]))
5: FUN(array(newX[, i], d.call, dn.call), ...)
4: apply(x, 3, ess_rfun)
3: relative_eff.array(exp(LLarray), cores = r_eff_cores)
2: loo::relative_eff(exp(LLarray), cores = r_eff_cores)
1: fit$loo()
Also, I think the same error is causing the following problem with the priorsense package:
> priorsense::powerscale_plot_dens(fit, "psi_a")
Error in if (k < 1) { : missing value where TRUE/FALSE needed
In addition: Warning message:
Can't fit generalized Pareto distribution because all tail values are the same.
> traceback()
12: ps_min_ss(k)
11: .pareto_smooth_extra_diags(k, S)
10: pareto_smooth.default(exp(as.numeric(log_ratios - max(log_ratios))),
r_eff = NULL, return_k = TRUE, extra_diags = TRUE, verbose = FALSE)
9: posterior::pareto_smooth(exp(as.numeric(log_ratios - max(log_ratios))),
r_eff = NULL, return_k = TRUE, extra_diags = TRUE, verbose = FALSE)
8: powerscale.priorsense_data(x = x, variable = variable, component = scaled_component,
alpha = alpha_seq[i], moment_match = moment_match, k_treshold = k_threshold,
resample = resample, transform = transform, prediction = prediction,
selection = likelihood_selection, ...)
7: powerscale(x = x, variable = variable, component = scaled_component,
alpha = alpha_seq[i], moment_match = moment_match, k_treshold = k_threshold,
resample = resample, transform = transform, prediction = prediction,
selection = likelihood_selection, ...)
6: powerscale_sequence.priorsense_data(psd, lower_alpha = lower_alpha,
upper_alpha = upper_alpha, length = length, variable = variable,
component = component, moment_match = moment_match, k_threshold = k_threshold,
resample = resample, transform = transform, prediction = prediction,
auto_alpha_range = auto_alpha_range, symmetric = symmetric,
prior_selection = prior_selection, likelihood_selection = likelihood_selection,
...)
5: powerscale_sequence(psd, lower_alpha = lower_alpha, upper_alpha = upper_alpha,
length = length, variable = variable, component = component,
moment_match = moment_match, k_threshold = k_threshold, resample = resample,
transform = transform, prediction = prediction, auto_alpha_range = auto_alpha_range,
symmetric = symmetric, prior_selection = prior_selection,
likelihood_selection = likelihood_selection, ...)
4: powerscale_sequence.default(x, length = length, ...)
3: powerscale_sequence(x, length = length, ...)
2: powerscale_plot_dens.default(fit, "psi_a")
1: priorsense::powerscale_plot_dens(fit, "psi_a")
Is it likely to be an error I've made with the log_lik variable in the generated quantities?
Ah, I think this might be an issue with calling loo::relative_eff (it might be returning Inf or NaN or something like that in this case). Do you still get an error if you do fit_mvn$loo(r_eff = FALSE)?
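One rough way to check, as a sketch (assuming the log-likelihood draws come back as the usual iterations x chains x observations array), is to look for observations whose exponentiated log-likelihoods are non-finite or collapse to a constant, since those are exactly the cases where relative_eff can blow up:

LLarray <- fit_mvn$draws("log_lik")   # iterations x chains x N draws_array
# flag observations whose exponentiated log-likelihoods are non-finite or
# collapse to a constant (e.g. underflow to 0), which breaks the variance
# calculation inside relative_eff
bad <- apply(exp(LLarray), 3, function(x) !all(is.finite(x)) || var(as.vector(x)) == 0)
which(bad)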
@avehtari In cmdstanr's loo method we have:
r_eff <- loo::relative_eff(exp(LLarray), cores = r_eff_cores)
If the issue is that the log ratios are too small, should we change it to something like this?
r_eff <- loo::relative_eff(exp(LLarray + max(-LLarray)), cores = r_eff_cores)
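For intuition, a hypothetical sketch of that idea (not the actual fix): shifting the whole log-likelihood array by a constant only multiplies every draw by the same positive factor, so the relative ESS is unchanged, but exp() is much less likely to underflow to zero.

# hypothetical helper, not the actual cmdstanr code: shift the log scale so
# the smallest value becomes 0 before exponentiating; the relative ESS is
# unchanged because every draw is scaled by the same positive constant
stable_r_eff <- function(LLarray, cores = 1) {
  loo::relative_eff(exp(LLarray + max(-LLarray)), cores = cores)
}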
Hey @jgabry, I don't get the error with r_eff = FALSE. Thanks so much!
FWIW, when I try to do stacking I also get the following error:
> loo_list <- list(loo(fit1$draws("log_lik")), loo(fit2$draws("log_lik")))
Warning messages:
1: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
2: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
> loo_model_weights(loo_list)
Error in optim(theta.old, fun, gradient, control = control, method = method, :
initial value in 'vmmin' is not finite
loo_compare() doesn't give any errors.
It seems to have something to do with the initial values used for the optimization step in the stacking algorithm (I guess you could try changing the optimization method that's used; there's an argument for that, but I'm not sure it will make a difference). This might be hard for me to debug without an example I can play with. Or maybe @avehtari or @yao-yl have an idea.
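As a sketch of what I mean by that argument (optim_method is the relevant one; "Nelder-Mead" below is just an illustration, the default is "BFGS"):

# try a different optimizer for the stacking weights
loo::loo_model_weights(loo_list, method = "stacking", optim_method = "Nelder-Mead")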
If the issue is that the log ratios are too small, should we change it to something like this?
Yes
It seems all the reported problems have the same cause: for at least one of the LOO folds, the importance ratios under- or overflow (which is fixed by subtracting the max), and one importance ratio dominates the fold (in which case subtracting the max is not enough). It should be possible to add additional checks to avoid the errors, but the resulting comparisons and model weights are still unreliable with importance ratio distributions that bad. @mhollanders can you share the log_lik draws?
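To make the dominating-ratio part concrete, here is a rough sketch of that check, assuming a draws x observations log_lik matrix and using the fact that the raw LOO importance ratios for fold i are proportional to exp(-log_lik[, i]):

ll <- fit$draws("log_lik", format = "draws_matrix")   # draws x observations
# for each fold, subtract the max on the log scale, normalize, and check
# whether a single draw carries almost all of the importance weight
dominance <- apply(ll, 2, function(l) {
  log_ratios <- -l
  w <- exp(log_ratios - max(log_ratios))
  max(w) / sum(w)
})
which(dominance > 0.9)   # folds where one importance ratio dominates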
Hi @avehtari, sorry for taking so long with this. I've attached the draws, hopefully a .csv is fine (I wasn't able to send the draws_matrix as .rds).
log_lik.csv
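In case it's useful, something like the following should read it back in (assuming the .csv is just a plain draws x observations matrix of log-likelihood values):

# hypothetical read-back of the attached file into a matrix that loo() accepts
ll <- as.matrix(read.csv("log_lik.csv"))
loo::loo(ll)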
@mhollanders a fix was already merged, can you try installing loo from GitHub?
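For example, assuming the remotes package is available:

# install the development version of loo from the stan-dev GitHub repository
remotes::install_github("stan-dev/loo")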
Hey @avehtari, I installed loo from GitHub and also updated cmdstanr to the development version. I'm still getting the following:
> fit1$loo()
Error in if (varx == 0) { : missing value where TRUE/FALSE needed
and:
> loo1 <- loo(fit1$draws("log_lik"))
Warning message:
Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
> loo2 <- loo(fit2$draws("log_lik"))
Warning message:
Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
> loo::loo_model_weights(list(loo1, loo2))
Error in optim(theta.old, fun, gradient, control = control, method = method, :
initial value in 'vmmin' is not finite
@mhollanders is there any way you could share the fit1 object (Dropbox or something?) so that I could test fit1$loo()?
With the log_lik.csv you provided I was able to find the problem in stacking_weights(), and I have a fix that changes the error to a warning, but that doesn't change the fact that you have way too many very high Pareto-k's, so the model weights are not useful.
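As a quick way to see how many folds are affected (a sketch using the diagnostics stored in the loo object):

# count the folds whose Pareto-k exceeds the usual 0.7 threshold
sum(loo1$diagnostics$pareto_k > 0.7)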
Hey @avehtari, sorry for taking so long to respond. I have a link here that you should be able to download it with.
Re: the high Pareto-k's, I'm finding these to be super high even where the model recovers the DGP input when there are site-level random effects. I didn't realise you couldn't still use stacking to get the best predictive performance, irrespective of the Pareto-k values.
Hey @avehtari, sorry to re-hash this, but it's still an issue where loo::loo() works fine but fit$loo() does not. I've re-attached a csv of posterior draws, as well as a figure of the site-level log_liks separated by region to visualise.
Hi,
Unfortunately I don't have a reproducible example because this is just popping up with a few versions of a big model. When I fit the model with cmdstanr, the following works:
But this doesn't:
I checked and there are no NAs anywhere:
Does anyone have any idea? Sorry I can't be more helpful with reproducible code. I'd be happy to share the Stan program along with the simulation file.
Thanks,
Matt