mjskay / tidybayes

Bayesian analysis + tidy data + geoms (R package)
http://mjskay.github.io/tidybayes
GNU General Public License v3.0

Error when running tidybayes-residuals.Rmd and tidybayes.Rmd #328

Closed: barracuda156 closed this issue 1 month ago

barracuda156 commented 1 month ago
--->  Testing R-tidybayes
Executing:  cd "/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-tidybayes/R-tidybayes/work/tidybayes" && /opt/local/bin/R CMD check ./tidybayes_3.0.7.tar.gz --no-manual --no-build-vignettes 
* using log directory ‘/opt/local/var/macports/build/_opt_PPCSnowLeopardPorts_R_R-tidybayes/R-tidybayes/work/tidybayes/tidybayes.Rcheck’
* using R version 4.4.1 (2024-06-14)
* using platform: powerpc-apple-darwin10.0.0d2 (32-bit)
* R was compiled by
    gcc-mp-13 (MacPorts gcc13 13.3.0_0+stdlib_flag) 13.3.0
    GNU Fortran (MacPorts gcc13 13.3.0_0+stdlib_flag) 13.3.0
* running under: OS X Snow Leopard 10.6
* using session charset: UTF-8
* using options ‘--no-manual --no-build-vignettes’
* checking for file ‘tidybayes/DESCRIPTION’ ... OK
* this is package ‘tidybayes’ version ‘3.0.7’
* package encoding: UTF-8
* checking package namespace information ... OK
* checking package dependencies ... NOTE
Package suggested but not available for checking: ‘gifski’
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘tidybayes’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking ‘build’ directory ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... OK
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking code files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking installed files from ‘inst/doc’ ... OK
* checking files in ‘vignettes’ ... OK
* checking examples ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ...
  Running ‘testthat.R’
 OK
* checking for unstated dependencies in vignettes ... OK
* checking package vignettes ... OK
* checking running R code from vignettes ...
  ‘tidy-brms.Rmd’ using ‘UTF-8’... OK
  ‘tidy-posterior.Rmd’ using ‘UTF-8’... OK
  ‘tidy-rstanarm.Rmd’ using ‘UTF-8’... OK
  ‘tidybayes-residuals.Rmd’ using ‘UTF-8’... failed
  ‘tidybayes.Rmd’ using ‘UTF-8’... failed
 ERROR
Errors in running code in vignettes:
when running code in ‘tidybayes-residuals.Rmd’
  ...
Chain 4:                97.217 seconds (Sampling)
Chain 4:                178.892 seconds (Total)
Chain 4: 

> cens_df_o %>% add_residual_draws(m_o) %>% median_qi(.residual) %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()

  When sourcing ‘tidybayes-residuals.R’:
Error: Predictive errors are not defined for ordinal or categorical models.
Execution halted
when running code in ‘tidybayes.Rmd’
  ...

$n
[1] 50

> m = sampling(ABC_stan, data = compose_data(ABC), control = list(adapt_delta = 0.99))

  When sourcing ‘tidybayes.R’:
Error: error in evaluating the argument 'object' in selecting a method for function 'sampling': object 'ABC_stan' not found
Execution halted

* checking re-building of vignette outputs ... SKIPPED
* DONE

Status: 1 ERROR, 1 NOTE

Output for related vignettes:


> params <- list(EVAL = FALSE)

> if (requireNamespace("pkgdown", quietly = TRUE) && 
+     pkgdown::in_pkgdown()) {
+     tiny_width = small_width = med_width = 6.75
+     tiny_heig .... [TRUNCATED] 

> knitr::opts_chunk$set(fig.width = small_width, fig.height = small_height, 
+     eval = if (isTRUE(exists("params"))) params$EVAL else FALSE)

> if (capabilities("cairo") && Sys.info()[["sysname"]] != 
+     "Darwin") {
+     knitr::opts_chunk$set(dev.args = list(png = list(type = "cairo")))
 .... [TRUNCATED] 

> dir.create("models", showWarnings = FALSE)

> library(dplyr)

Attaching package: ‘dplyr’

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

> library(purrr)

> library(tidyr)

> library(ggdist)

> library(tidybayes)

> library(ggplot2)

> library(cowplot)

> library(rstan)
Loading required package: StanHeaders

rstan version 2.32.6 (Stan version 2.32.2)

For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
change `threads_per_chain` option:
rstan_options(threads_per_chain = 1)

Attaching package: ‘rstan’

The following object is masked from ‘package:tidyr’:

    extract

> library(brms)
Loading required package: Rcpp
Loading 'brms' package (version 2.21.8). Useful instructions
can be found by typing help('brms'). A more detailed introduction
to the package is available through vignette('brms_overview').

Attaching package: ‘brms’

The following object is masked from ‘package:rstan’:

    loo

The following objects are masked from ‘package:tidybayes’:

    dstudent_t, pstudent_t, qstudent_t, rstudent_t

The following objects are masked from ‘package:ggdist’:

    dstudent_t, pstudent_t, qstudent_t, rstudent_t

The following object is masked from ‘package:stats’:

    ar

> library(gganimate)

> theme_set(theme_tidybayes() + panel_border())

> rstan_options(auto_write = TRUE)

> options(mc.cores = 1)

> options(width = 120)

> set.seed(4118)

> n = 100

> cens_df = tibble(y_star = rnorm(n, 0.5, 1), y_lower = floor(y_star), 
+     y_upper = ceiling(y_star), censoring = "interval")

> head(cens_df, 10)
# A tibble: 10 × 4
    y_star y_lower y_upper censoring
     <dbl>   <dbl>   <dbl> <chr>    
 1  0.180        0       1 interval 
 2  1.05         1       2 interval 
 3  2.05         2       3 interval 
 4 -0.512       -1       0 interval 
 5  0.0323       0       1 interval 
 6  1.18         1       2 interval 
 7 -0.707       -1       0 interval 
 8  0.116        0       1 interval 
 9  0.183        0       1 interval 
10  0.385        0       1 interval 

> uncensored_plot = cens_df %>% ggplot(aes(y = "", x = y_star)) + 
+     stat_slab() + geom_jitter(aes(y = 0.75, color = ordered(y_lower)), 
+     pos .... [TRUNCATED] 

> censored_plot = cens_df %>% ggplot(aes(y = "", x = (y_lower + 
+     y_upper)/2)) + geom_dotplot(aes(fill = ordered(y_lower)), 
+     method = "hist ..." ... [TRUNCATED] 

> plot_grid(align = "v", ncol = 1, rel_heights = c(1, 
+     2.5), uncensored_plot, censored_plot)

> m_ideal = brm(y_star ~ 1, data = cens_df, family = student, 
+     file = "models/tidybayes-residuals_m_ideal.rds")
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.00029 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.9 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.666 seconds (Warm-up)
Chain 1:                0.628 seconds (Sampling)
Chain 1:                1.294 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 9.1e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.91 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.685 seconds (Warm-up)
Chain 2:                0.514 seconds (Sampling)
Chain 2:                1.199 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 8.8e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.88 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.59 seconds (Warm-up)
Chain 3:                0.547 seconds (Sampling)
Chain 3:                1.137 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 9e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.9 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.562 seconds (Warm-up)
Chain 4:                0.531 seconds (Sampling)
Chain 4:                1.093 seconds (Total)
Chain 4: 

> m_ideal
 Family: student 
  Links: mu = identity; sigma = identity; nu = identity 
Formula: y_star ~ 1 
   Data: cens_df (Number of observations: 100) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.51      0.11     0.30     0.73 1.00     2541     2528

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     1.02      0.09     0.85     1.20 1.00     2894     2092
nu       21.27     13.03     5.58    55.59 1.00     2823     2717

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

> cens_df %>% add_residual_draws(m_ideal) %>% ggplot(aes(x = .row, 
+     y = .residual)) + stat_pointinterval()

> cens_df %>% add_residual_draws(m_ideal) %>% median_qi() %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()

> cens_df %>% add_predicted_draws(m_ideal) %>% summarise(p_residual = mean(.prediction < 
+     y_star), z_residual = qnorm(p_residual), .groups = "dr ..." ... [TRUNCATED] 

> m = brm(y_lower | cens(censoring, y_upper) ~ 1, data = cens_df, 
+     file = "models/tidybayes-residuals_m.rds")
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000518 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.18 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2.165 seconds (Warm-up)
Chain 1:                1.681 seconds (Sampling)
Chain 1:                3.846 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000437 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 4.37 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 2.037 seconds (Warm-up)
Chain 2:                2.275 seconds (Sampling)
Chain 2:                4.312 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.000412 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 4.12 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 2.035 seconds (Warm-up)
Chain 3:                1.924 seconds (Sampling)
Chain 3:                3.959 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000441 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.41 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 1.974 seconds (Warm-up)
Chain 4:                2.208 seconds (Sampling)
Chain 4:                4.182 seconds (Total)
Chain 4: 

> m
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: y_lower | cens(censoring, y_upper) ~ 1 
   Data: cens_df (Number of observations: 100) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept     0.56      0.11     0.33     0.78 1.00     3182     2588

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     1.09      0.09     0.94     1.28 1.00     3304     2517

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

> cens_df %>% add_residual_draws(m) %>% ggplot(aes(x = .row, 
+     y = .residual)) + stat_pointinterval()
Warning: Results may not be meaningful for censored models.

> cens_df %>% add_residual_draws(m) %>% median_qi(.residual) %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()
Warning: Results may not be meaningful for censored models.

> cens_df %>% add_predicted_draws(m) %>% summarise(p_lower = mean(.prediction < 
+     y_lower), p_upper = mean(.prediction < y_upper), p_residual = r .... [TRUNCATED] 

> cens_df %>% add_predicted_draws(m) %>% summarise(p_lower = mean(.prediction < 
+     y_lower), p_upper = mean(.prediction < y_upper), p_residual = r .... [TRUNCATED] 

> k = 10

> p = cens_df %>% add_predicted_draws(m) %>% summarise(p_lower = mean(.prediction < 
+     y_lower), p_upper = mean(.prediction < y_upper), p_residual .... [TRUNCATED] 

> animate(p, nframes = k, width = 384, height = 384, 
+     units = "px", res = 96, dev = "ragg_png")
# A tibble: 10 × 7
   format width height colorspace matte filesize density
   <chr>  <int>  <int> <chr>      <lgl>    <int> <chr>  
 1 gif      384    384 sRGB       FALSE        0 38x38  
 2 gif      384    384 sRGB       TRUE         0 38x38  
 3 gif      384    384 sRGB       TRUE         0 38x38  
 4 gif      384    384 sRGB       TRUE         0 38x38  
 5 gif      384    384 sRGB       TRUE         0 38x38  
 6 gif      384    384 sRGB       TRUE         0 38x38  
 7 gif      384    384 sRGB       TRUE         0 38x38  
 8 gif      384    384 sRGB       TRUE         0 38x38  
 9 gif      384    384 sRGB       TRUE         0 38x38  
10 gif      384    384 sRGB       TRUE         0 38x38  

> anim_save("tidybayes-residuals_resid_hops_1.gif")

> cat("![](tidybayes-residuals_resid_hops_1.gif)\n")
![](tidybayes-residuals_resid_hops_1.gif)

> set.seed(41181)

> n = 100

> cens_df_t = tibble(y = rt(n, 3) + 0.5, y_lower = floor(y), 
+     y_upper = ceiling(y), censoring = "interval")

> uncensored_plot = cens_df_t %>% ggplot(aes(y = "", 
+     x = y)) + stat_slab() + geom_jitter(aes(y = 0.75, color = ordered(y_lower)), 
+     positi .... [TRUNCATED] 

> censored_plot = cens_df_t %>% ggplot(aes(y = "", x = (y_lower + 
+     y_upper)/2)) + geom_dotplot(aes(fill = ordered(y_lower)), 
+     method = "hi ..." ... [TRUNCATED] 

> plot_grid(align = "v", ncol = 1, rel_heights = c(1, 
+     2.25), uncensored_plot, censored_plot)

> m_t1 = brm(y_lower | cens(censoring, y_upper) ~ 1, 
+     data = cens_df_t, file = "models/tidybayes-residuals_m_t1")
Compiling Stan program...
recompiling to avoid crashing R session
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: Rejecting initial value:
Chain 1:   Log probability evaluates to log(0), i.e. negative infinity.
Chain 1:   Stan can't start sampling from this initial value.
Chain 1: 
Chain 1: Gradient evaluation took 0.000526 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.26 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 2.031 seconds (Warm-up)
Chain 1:                2.463 seconds (Sampling)
Chain 1:                4.494 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000435 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 4.35 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 2.009 seconds (Warm-up)
Chain 2:                2.149 seconds (Sampling)
Chain 2:                4.158 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.000416 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 4.16 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 2.081 seconds (Warm-up)
Chain 3:                2.088 seconds (Sampling)
Chain 3:                4.169 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000431 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.31 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 2.178 seconds (Warm-up)
Chain 4:                1.534 seconds (Sampling)
Chain 4:                3.712 seconds (Total)
Chain 4: 

> cens_df_t %>% add_residual_draws(m_t1) %>% median_qi(.residual) %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()
Warning: Results may not be meaningful for censored models.

> cens_df_t %>% add_predicted_draws(m_t1) %>% summarise(p_lower = mean(.prediction < 
+     y_lower), p_upper = mean(.prediction < y_upper), p_residua .... [TRUNCATED] 
Warning: Removed 1 row containing non-finite outside the scale range (`stat_qq()`).

> m_t2 = brm(y_lower | cens(censoring, y_upper) ~ 1, 
+     data = cens_df_t, family = student, file = "models/tidybayes-residuals_m_t2.rds")
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.010715 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 107.15 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 61.253 seconds (Warm-up)
Chain 1:                50.158 seconds (Sampling)
Chain 1:                111.411 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.008259 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 82.59 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 58.968 seconds (Warm-up)
Chain 2:                59.041 seconds (Sampling)
Chain 2:                118.009 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.006383 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 63.83 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 58.008 seconds (Warm-up)
Chain 3:                58.24 seconds (Sampling)
Chain 3:                116.248 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.004997 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 49.97 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 60.529 seconds (Warm-up)
Chain 4:                58.866 seconds (Sampling)
Chain 4:                119.395 seconds (Total)
Chain 4: 

> cens_df_t %>% add_residual_draws(m_t2) %>% median_qi(.residual) %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()
Warning: Results may not be meaningful for censored models.

> cens_df_t %>% add_predicted_draws(m_t2) %>% summarise(p_lower = mean(.prediction < 
+     y_lower), p_upper = mean(.prediction < y_upper), p_residua .... [TRUNCATED] 

> cens_df_o = cens_df_t %>% mutate(y_factor = ordered(y_lower))

> m_o = brm(y_factor ~ 1, data = cens_df_o, family = cumulative, 
+     prior = prior(normal(0, 10), class = Intercept), control = list(adapt_delta =  .... [TRUNCATED] 
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000814 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 8.14 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 83.853 seconds (Warm-up)
Chain 1:                95.651 seconds (Sampling)
Chain 1:                179.504 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000731 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 7.31 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 92.779 seconds (Warm-up)
Chain 2:                107.114 seconds (Sampling)
Chain 2:                199.893 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.000735 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 7.35 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 79.798 seconds (Warm-up)
Chain 3:                49.009 seconds (Sampling)
Chain 3:                128.807 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000742 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 7.42 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 81.675 seconds (Warm-up)
Chain 4:                97.217 seconds (Sampling)
Chain 4:                178.892 seconds (Total)
Chain 4: 

> cens_df_o %>% add_residual_draws(m_o) %>% median_qi(.residual) %>% 
+     ggplot(aes(sample = .residual)) + geom_qq() + geom_qq_line()

  When sourcing ‘tidybayes-residuals.R’:
Error: Predictive errors are not defined for ordinal or categorical models.
Execution halted
Error in eval(x, envir = envir) : object 'eval_chunks' not found
Error in eval(x, envir = envir) : object 'eval_chunks' not found
Error in eval(x, envir = envir) : object 'eval_chunks' not found
Error in eval(x, envir = envir) : object 'eval_chunks' not found
Error in eval(x, envir = envir) : object 'eval_chunks' not found

> params <- list(EVAL = FALSE)

> if (requireNamespace("pkgdown", quietly = TRUE) && 
+     pkgdown::in_pkgdown()) {
+     tiny_width = small_width = med_width = 6.75
+     tiny_heig .... [TRUNCATED] 

> eval_chunks = if (isTRUE(exists("params"))) params$EVAL else FALSE

> knitr::opts_chunk$set(fig.width = small_width, fig.height = small_height, 
+     eval = eval_chunks)

> if (capabilities("cairo") && Sys.info()[["sysname"]] != 
+     "Darwin") {
+     knitr::opts_chunk$set(dev.args = list(png = list(type = "cairo")))
 .... [TRUNCATED] 

> dir.create("models", showWarnings = FALSE)

> library(magrittr)

> library(dplyr)

Attaching package: ‘dplyr’

The following objects are masked from ‘package:stats’:

    filter, lag

The following objects are masked from ‘package:base’:

    intersect, setdiff, setequal, union

> library(forcats)

> library(modelr)

> library(ggdist)

> library(tidybayes)

> library(ggplot2)

> library(cowplot)

> library(broom)

Attaching package: ‘broom’

The following object is masked from ‘package:modelr’:

    bootstrap

> library(rstan)
Loading required package: StanHeaders

rstan version 2.32.6 (Stan version 2.32.2)

For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
change `threads_per_chain` option:
rstan_options(threads_per_chain = 1)

Attaching package: ‘rstan’

The following object is masked from ‘package:magrittr’:

    extract

> library(rstanarm)
Loading required package: Rcpp
This is rstanarm version 2.32.1
- See https://mc-stan.org/rstanarm/articles/priors for changes to default priors!
- Default priors may change, so it's safest to specify priors, even if equivalent to the defaults.
- For execution on a local, multicore CPU with excess RAM we recommend calling
  options(mc.cores = parallel::detectCores())

Attaching package: ‘rstanarm’

The following object is masked from ‘package:rstan’:

    loo

> library(brms)
Loading 'brms' package (version 2.21.8). Useful instructions
can be found by typing help('brms'). A more detailed introduction
to the package is available through vignette('brms_overview').

Attaching package: ‘brms’

The following objects are masked from ‘package:rstanarm’:

    dirichlet, exponential, get_y, lasso, ngrps

The following object is masked from ‘package:rstan’:

    loo

The following objects are masked from ‘package:tidybayes’:

    dstudent_t, pstudent_t, qstudent_t, rstudent_t

The following objects are masked from ‘package:ggdist’:

    dstudent_t, pstudent_t, qstudent_t, rstudent_t

The following object is masked from ‘package:stats’:

    ar

> library(bayesplot)
This is bayesplot version 1.11.1
- Online documentation and vignettes at mc-stan.org/bayesplot
- bayesplot theme set to bayesplot::theme_default()
   * Does _not_ affect other ggplot2 plots
   * See ?bayesplot_theme_set for details on theme setting

Attaching package: ‘bayesplot’

The following object is masked from ‘package:brms’:

    rhat

> library(RColorBrewer)

> theme_set(theme_tidybayes() + panel_border())

> rstan_options(auto_write = TRUE)

> options(mc.cores = 1)

> options(width = 120)

> set.seed(5)

> n = 10

> n_condition = 5

> ABC = tibble(condition = factor(rep(c("A", "B", "C", 
+     "D", "E"), n)), response = rnorm(n * 5, c(0, 1, 2, 1, -1), 
+     0.5))

> head(ABC, 10)
# A tibble: 10 × 2
   condition response
   <fct>        <dbl>
 1 A           -0.420
 2 B            1.69 
 3 C            1.37 
 4 D            1.04 
 5 E           -0.144
 6 A           -0.301
 7 B            0.764
 8 C            1.68 
 9 D            0.857
10 E           -0.931

> ABC %>% ggplot(aes(x = response, y = fct_rev(condition))) + 
+     geom_point(alpha = 0.5) + ylab("condition")

> compose_data(ABC)
$condition
 [1] 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5

$n_condition
[1] 5

$response
 [1] -0.42042774  1.69217967  1.37225407  1.03507138 -0.14427956 -0.30145399  0.76391681  1.68231434  0.85711318
[10] -0.93094589  0.61381517  0.59911027  1.45980370  0.92123282 -1.53588002 -0.06949307  0.70134345  0.90801662
[19]  1.12040863 -1.12967770  0.45025597  1.47093470  2.73398095  1.35338054 -0.59049553 -0.14674092  1.70929454
[28]  2.74938691  0.67145895 -1.42639772  0.15795752  1.55484708  3.10773029  1.60855182 -0.26038911  0.47578692
[37]  0.49523368  0.99976363  0.11890706 -1.07130406  0.77503018  0.59878841  1.96271054  1.94783398 -1.22828447
[46]  0.28111168  0.55649574  1.76987771  0.63783576 -1.03460558

$n
[1] 50

> m = sampling(ABC_stan, data = compose_data(ABC), control = list(adapt_delta = 0.99))

  When sourcing ‘tidybayes.R’:
Error: error in evaluating the argument 'object' in selecting a method for function 'sampling': object 'ABC_stan' not found
Execution halted
mjskay commented 1 month ago

This seems related to https://github.com/yihui/knitr/issues/2338, and also to the fact that the way I skip rendering for long vignettes does not properly cause the check that re-runs vignette code to be skipped depending on the R CMD check options. I think the former should be fixed by passing purl = FALSE on chunks with error = TRUE, and the latter by not using an R Markdown parameter to determine the value of eval (since that parameter value gets evaluated and written into the purled vignette code file instead of being recalculated). I'll see what I can do.
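
For anyone following along, here is a minimal sketch of what those two changes might look like in a vignette's setup code. The chunk label and the environment-variable name are hypothetical illustrations, not the actual tidybayes source:

# (1) A chunk that errors on purpose carries purl = FALSE alongside error = TRUE,
#     so it is left out of the extracted tidybayes-residuals.R and cannot halt
#     "checking running R code from vignettes". In the .Rmd source that is just
#     a chunk header option, e.g.:
#
#       {r resid_ordinal_qq, error = TRUE, purl = FALSE}
#       cens_df_o %>% add_residual_draws(m_o)   # errors by design for ordinal models
#
# (2) Whether chunks are evaluated is computed at run time rather than taken from
#     an R Markdown parameter, whose value would otherwise be frozen into the
#     purled code file. TIDYBAYES_EVAL_VIGNETTES is a made-up name used only here:
eval_chunks <- isTRUE(as.logical(Sys.getenv("TIDYBAYES_EVAL_VIGNETTES", "FALSE")))
knitr::opts_chunk$set(eval = eval_chunks)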

mjskay commented 1 month ago

I think I was able to re-create your test environment and address the issue. It should be fixed on master; let me know if it's still a problem.
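
If you want to verify the fix before a new CRAN release, one option (assuming the remotes package is available in your setup) is to install the development version straight from the master branch:

# install the in-development tidybayes from GitHub; requires the remotes package
install.packages("remotes")                  # skip if remotes is already installed
remotes::install_github("mjskay/tidybayes")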