Closed lorenzo-zuccato closed 11 months ago
Thanks for yet another good point. I'm actually working on a major refactoring at the moment, which unfortunately is not yet ready to go into the master branch. However, if you install the version in the refactoring_v2_0 branch, the following code will perform the thinning properly:
library(BayesMallows)

model_fit <- compute_mallows(
  data = setup_rank_data(sushi_rankings),
  model = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 2000, clus_thinning = 10))

length(unique(model_fit$cluster_assignment$iteration))
#> [1] 200
length(unique(model_fit$cluster_probs$iteration))
#> [1] 200

model_fit <- compute_mallows(
  data = setup_rank_data(sushi_rankings),
  model = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 2000, clus_thinning = 1))

length(unique(model_fit$cluster_assignment$iteration))
#> [1] 2000
length(unique(model_fit$cluster_probs$iteration))
#> [1] 2000
Created on 2023-11-13 with reprex v2.0.2
Hence I won't add this to the master branch for now, but the completely new version of the package should definitely be ready by the end of the year.
Ok thank you so much! :)
Hi! I've seen that in tidy_mcmc you use alpha_jump and the other thinning parameters to make the iteration numbering self-consistent. However, this does not happen with cluster_assignment (and cluster_probabilities). Could the result of tidy_mcmc be made more coherent by using clus_thin, or is there a reason for it not to be used?
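For context, the self-consistent numbering discussed above amounts to keeping every k-th draw while labelling each kept draw with its original iteration index (rather than renumbering from 1). Here is a minimal language-agnostic sketch of that idea in Python (the package itself is R; `thin_samples` is a hypothetical helper, not part of BayesMallows):

```python
def thin_samples(samples, thinning):
    """Keep every `thinning`-th draw, labelled with its original iteration number.

    Returns a list of (iteration, value) pairs, so 2000 iterations thinned
    by 10 carry the labels 10, 20, ..., 2000 rather than 1, 2, ..., 200.
    """
    return [(i, x) for i, x in enumerate(samples, start=1) if i % thinning == 0]

# Stand-in for 2000 MCMC draws of some quantity.
samples = list(range(2000))

kept = thin_samples(samples, thinning=10)
print(len(kept))                 # 200 draws survive the thinning
print(kept[0][0], kept[-1][0])   # original labels 10 and 2000 are preserved
```

This mirrors the reprex above: with `clus_thinning = 10` and `nmc = 2000`, 200 unique iteration labels remain, and the labels refer to the original chain positions.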