yihui / knitr

A general-purpose tool for dynamic report generation in R
https://yihui.org/knitr/

Unbalanced chunk delimiters in package vignettes #2057

Closed: yihui closed this issue 2 years ago

yihui commented 2 years ago

Currently the following CRAN packages have vignettes that contain unbalanced code chunk delimiters (e.g., a chunk opened with five backticks but closed with four, or opened with three but closed with four, or the chunk header is indented but the footer is not, etc.). I haven't decided what to do with them yet (chances are I'll keep supporting them for now but gradually make unbalanced delimiters defunct in the future).

The fixes should be simple enough for most package authors (e.g., https://github.com/davidski/evaluator/pull/57), but I don't want to push them at the moment (this breaking change may happen a year from now).
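
The typical fix is a one- or two-character change to the fence lines. As an illustrative sketch (not the actual patch from the PR linked above), the ACSNMineR chunk listed below could be balanced by using the same number of backticks, e.g. three, for both the header and the footer:

```diff
-`````{r gmt_map_show, echo = FALSE}
+```{r gmt_map_show, echo = FALSE}
 knitr::kable(names(ACSNMineR::ACSN_maps))
-````
+```
```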

Without balancing the delimiters, users may see an obscure error message like this:

Error in parse(text = x, srcfile = src) : 
  attempt to use zero-length variable name

The error message will be clearer if you install the dev version of knitr and rebuild the vignette:

remotes::install_github('yihui/knitr')

Then you should see a message like this:

The closing backticks on line 292 ("````") in foo.Rmd do not match the opening backticks "```" on line 285. You are recommended to fix either the opening or closing delimiter of the code chunk to use exactly the same numbers of backticks and same level of indentation (or blockquote).
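
The matching rule in that message (same number of backticks, same indentation) can be sketched as a quick standalone check. This is an illustrative Python sketch only, not knitr's actual parser; the function name `check_chunks` and the simplified fence regex are my own:

```python
import re

# Illustrative sketch (not knitr's parser): pair each chunk header
# like ```{r ...} with the next fence line, and report closers whose
# backtick count or indentation differs from the opener's.
FENCE = re.compile(r"^(\s*)(`{3,})(\{.*\})?\s*$")

def check_chunks(lines):
    problems, opening = [], None
    for i, line in enumerate(lines, 1):
        m = FENCE.match(line)
        if not m:
            continue
        indent, ticks, header = m.groups()
        if opening is None:
            if header:  # only a fence with a {…} header opens a chunk
                opening = (i, indent, ticks)
        else:
            oi, oindent, oticks = opening
            if ticks != oticks or indent != oindent:
                problems.append(
                    f"closing fence on line {i} ({ticks!r}) does not match "
                    f"opening fence {oticks!r} on line {oi}"
                )
            opening = None
    return problems
```

Running it over the ACSNMineR excerpt below (five backticks opening, four closing) would flag exactly the kind of mismatch the new knitr message describes.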


ACSNMineR

Maintainer: Paul Deveau <paul.deveau@...>

vignettes/ACSN-vignette.Rmd (Lines 69-73)

`````{r gmt_map_show, echo = FALSE}
knitr::kable(names(ACSNMineR::ACSN_maps))
````

````{r gmt_access_code, eval = FALSE}

CoordinateCleaner

Maintainer: Alexander Zizka <alexander.zizka@...>

vignettes/Cleaning_PBDB_fossils_with_CoordinateCleaner.Rmd (Lines 70-86)

````{r, eval = TRUE, collapse = T}
#plot data to get an overview
wm <- borders("world", colour="gray50", fill="gray50")
ggplot()+ coord_fixed()+ wm +
  geom_point(data = dat, aes(x = lng, y = lat),
             colour = "darkred", size = 0.5)+
  theme_bw()

```

# CoordinateCleaner
 CoordinateCleaner includes a suite of automated tests to identify problems common to biological and palaeobiological databases.

## Spatial issues
We'll first check coordinate validity to check if all coordinates are numeric and part of a lat/lon coordinate reference system using `cc_val`.

```{r, eval = TRUE}

FateID

Maintainer: Dominic Grün <dominic.gruen@...>

vignettes/FateID.Rmd (Lines 193-204)

```{r}
pr  <- prcurve(y,fb,dr,k=2,m="umap",trthr=0.33,start=3)
````

This function has the same input arguments as `plotFateMap` and is invoked by the `plotFateMap` function.

##  Inspecting pseudo-temporal gene expression changes

FateID also provides functions for the visualization and analysis of pseudo-temporal gene expression changes.
For this purpose, cells with a fate bias towards a target cluster can be extracted. The principal curve analysis returns all cells along a differentiation trajectory in pseudo-temporal order. For example, cells with a fate bias towards cluster 6 in pseudo-temporal order can be extracted by the following command:

```{r}

GEVACO

Maintainer: Sydney Manning <sydneymanning@...>

vignettes/GEVACO_Intro.Rmd (Lines 23-39)

````{r, results="hide", warning=FALSE, message=FALSE}
library(GEVACO) # load the library
```

## Data requirements
At minimum to run this analysis you need a file storing genotype information and a covariate/trait file. 

The covariate/trait file should be text based and can have as many columns/covariates 
as desired, but the first few must be in a specific order.

 * Column 1: Phenotype

 * Column 2: Environmental factor

 * Columns 3+: Additional covariates

````{r}

GeoLight

Maintainer: Simeon Lisovski <simeon.lisovski@...>

vignettes/GeoLight2.0.Rmd (Lines 107-115)

```{r, fig.height=8, fig.width = 6}
siteMap(crds = crds1, site = cL$site, xlim = c(-12, 25), ylim = c(0, 50))
````

Obviously, the `changeLight` function defined many breakpoints during periods of residency (e.g. c-e). This can happen very often, and is potentially due to occasional deviations from the 'normal' shading intensity (e.g. severe weather).

The **(4) major change** in `GeoLight` (Version 2.0) is the introduction of a new function called `mergeSites`. The function uses an optimization routine to fit sunrise and sunset patterns from within the range of the coordinates and across each stationary period to the observed sunrise and sunset times. Based on the optimization of longitude and latitude, the function uses a forward selection process to merge sites that are closer than the defined threshold (`distThreshold`, in km). The output plot shows the initially selected sites (e.g. calculated via `changeLight`) and the new site selection (red line). Furthermore, the best fitting (plus the 95% confidence intervals) theoretical sunrise and sunset patterns are shown below the observed data. Finally, the longitude and latitude values of the track are plotted separately with the initial and the new borders of the residency/movement periods.

```{r, fig.height=10, fig.width=7}

IceCast

Maintainer: Hannah M. Director <direch@...>

vignettes/FitAndGenerateContours.Rmd (Lines 373-410)

```{r plot ppe prob, fig.height = 5, fig.width  = 6}
ppe_prob_vis <- ppe_prob #convert na's to a number for visualization (only!)
ppe_prob_vis[is.na(ppe_prob)] <- 1 + 1/n_color
par(oma = c(0, 0, 0, 4))
image(ppe_prob_vis, col = c(colors, "grey"), xaxt = "n", yaxt = "n",
      main = sprintf("Post-Processed Ensemble Probabilistic Forecast
                  \n Month: %i, Year: %i, Initialized Month: %i",
                   month, forecast_year, init_month))
legend("topright", fill = c("grey"), legend = c("land"))
par(oma = c(0, 0, 0, 1))
image.plot(ppe_prob, col = colors, legend.only = TRUE)
````

### References

Copernicus Climate Change Service (2019). Description of the C3S seasonal multi-system. https://confluence.ecmwf.int/display/COPSRV/Description+of+the+C3S+seasonal+multi-system

Comiso, J., 2017. Bootstrap sea ice concentrations from Nimbus-7 SMMR and 
DMSP SSM/I-SSMIS. version 3. Boulder, Colorado USA: NASA National Snow and
Ice Data Center Distributed Active Archive Center

Director, H. M., A. E. Raftery, and C.M Bitz, 2019+. Probabilistic Forecasting of
the Arctic Sea Ice Edge with Contour Modeling

Director, H. M., A.E. Raftery, and C. M. Bitz, 2017. "Improved Sea Ice Forecasting 
through Spatiotemporal Bias Correction." Journal of Climate 30.23: 9493-9510.

Nychka D., Furrer R., Paige J., Sain S. (2017). “fields: Tools for spatial data.”.
R package version 9.6, www.image.ucar.edu/~nychka/Fields.

Simon Garnier (2018). "viridis: Default Color Maps from 'matplotlib'". R package version 0.5.1.
  https://CRAN.R-project.org/package=viridis.

Sea Ice Prediction Network (2019). Sea ice prediction network
predictability portal. https://atmos.uw.edu/sipn/.

IntCal

Maintainer: Maarten Blaauw <maarten.blaauw@...>

vignettes/plots.Rmd (Lines 28-35)

```{r, fig.width=4, fig.asp=.8}
draw.ccurve(1600, 2020, BCAD=TRUE, cc2='nh1')

````

The postbomb curve dwarfs the IntCal20 curve, so we could also plot both on separate vertical axes: 

```{r, fig.width=4, fig.asp=.8}

LBSPR

Maintainer: Adrian Hordyk <ar.hordyk@...>

vignettes/LBSPR.Rmd (Lines 196-215)

````{r}
MyPars@MK <- 1.5 
MyPars@SL50 <- 10
MyPars@SL95 <- 15 
MyPars@FM <- 1 
MySim <- LBSPRsim(MyPars)
round(MySim@SPR, 2) # SPR 

MyPars@SL50 <- 80
MyPars@SL95 <- 85 
MySim <- LBSPRsim(MyPars)
round(MySim@SPR, 2) # SPR 
```

### Control Options
There are a number of additional parameters that can be modified to control other aspects of the simulation model. 

For example, by default the LBSPR model uses the Growth-Type-Group model (Hordyk et al. 2016).  The `Control` argument can be used to switch to the Age-Structured model (Hordyk et al. 2015a, b):

```{r}

LexisNexisTools

Maintainer: Johannes Gruber <j.gruber.1@...>

vignettes/demo.Rmd (Lines 12-22)

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
library("kableExtra")
 ```
## Demo
### Load Package

```{r, message=FALSE}

MHTcop

Maintainer: Jonathan von Schroeder <jvs@...>

vignettes/fdr-test.Rmd (Lines 205-227)

```{r}
calcBounds <- function(cop) {
  upperBound <- (m0/m)*alpha
  cat("Calculating upper bound for the",cop@name,"copula ( m =",m,")\n")
  delta <- pbsapply(theta,function(theta)
   ac_fdr.calc_delta(copula::setTheta(cop,theta),m,m0,
   alpha=alpha,num.reps=NZ,calc.var=TRUE))
  sharperUpperBound <- upperBound * delta[1,]
  sharperUpperBound.var <- upperBound^2 * delta[2,]
  delta <- delta[1,]
  lowerBound <- sapply(theta,function(theta){alpha*(m0/m)*
   (1+pgamma(log(m)/(m^(theta)-1),shape=1/(theta),scale=1)-
      pgamma((log(m)*(m^(theta)))/(m^(theta)-1),shape=1/(theta),scale=1))})
  list(upperBound=upperBound,sharperUpperBound=sharperUpperBound,
       sharperUpperBound.var=sharperUpperBound.var,
       delta=delta,lowerBound=lowerBound)
}
  ```
### Simulation study to determine the FDR under model (16)

Samples from model (16) for use in a Monte Carlo study of the FDR using the delta calculated previously by `calc_bounds`:

```{r}

MPBoost

Maintainer: Ignacio López-de-Ullibarri <ignacio.lopezdeullibarri@...>

vignettes/mpboost.Rmd (Lines 20-41)

```{r, include = FALSE}
knitr::opts_chunk$set(collapse = TRUE, fig.width = 4.8, fig.height = 4.8)
````
# Introduction

The time-honored principle of randomization [@fisher1925] has found its way into experimentation in humans through the conduct of clinical trials. An excellent account of the different types of randomization procedures available for assigning treatments in clinical trials may be found in @rosenberger2016. As explained there, although applying the procedure of *complete randomization* could seem an obvious way to proceed, it suffers from an important drawback. Complete randomization can give rise to unbalanced treatment assignments, resulting in a loss of power of the tests applied. As a way of forcing assignment balance, *restricted randomization* procedures have been developed. The feature in common to all restricted randomization procedures is the probabilistic dependence of any assignments (excepting for the first one) on previous assignments. As a matter of fact, nowadays the majority of clinical trials resort to one of the different procedures of restricted randomization available to ensure balance in the treatment assignments. However, excessively restrictive procedures are exposed to selection bias. This happens, e.g., in unmasked trials with a *permuted block* design of fixed block size. As an extreme example, consider the case of a two-armed clinical trial with treatment ratio 1:1, in which the first *M* patients of a block of size *2M* are allocated to treatment 1; then, the last *M* allocations must be to treatment 2 and are entirely predictable by the researcher.

The *maximal procedure* (MP) of allocation was devised by @berger2003 as an extension of permuted block procedures in which the feasible sequences are those with imbalance not larger than a *maximum tolerated imbalance* (MTI), all of them being equiprobable. They proved that MP has less potential for selection bias than the *randomized block* procedure. Also, it compares favorably with *variable block* procedures under some likely scenarios.

# MPBoost

## Overview of the package 
@salama2008 proposed an efficient algorithm to generate allocation sequences by the maximal procedure of @berger2003. `MPBoost` is an `R` package that implements the algorithm of @salama2008. This algorithm proceeds through the construction of a directed graph, and so does its implementation in `MPBoost`, using to that end the functionality provided by the `Boost Graph Library` (BGL). BGL is itself a part of `Boost C++ Libraries` [@boost]. Although the recommended reference for BGL is the updated documentation in @boost, the reader unacquainted with BGL can find a useful guide in @siek2002.

MP may generate a huge reference set of feasible sequences [@berger2003; @salama2008]. In my implementation, in order to ensure that the probabilistic structure of the procedure is correctly reproduced, the number of sequences is computed by using multiprecision integer arithmetic. I have opted for the `Boost Multiprecision Library`, also a part of `Boost C++ Libraries` [@boost], taking more into account the easiness of its integration with `R` than any efficiency issues. Anyway, in spite of the huge computational burden introduced by this approach, sequences of length greater than any practical value (e.g., in the order of thousands) are very efficiently computed.

In the package, interfacing between `R` and `C++` code has been addressed with the aid of the `Rcpp` package [@rcpp].

## Using the package
The package consists of only one `R` function: `mpboost()`. Its arguments are `N1`, `N2`, and `MTI`. The arguments `N1` and `N2` specify the numbers allocated to treatments 1 and 2, respectively. The MTI is set through the `MTI` argument, whose default value is 2. The value returned by `mpboost()` is an integer vector of N1 1's and N2 2's, representing the sequence of treatments 1 and 2 allocated by the realization of the MP. The following code illustrates a typical call:

```{r}

MetaboLouise

Maintainer: Charlie Beirnaert <charlie_beirnaert@...>

vignettes/MetaboLouise_Intro.Rmd (Lines 42-57)

```{r params, dpi=dpi.HQ, fig.width=7, fig.height=5, out.width = figwidth.out}
    library(MetaboLouise)

    set.seed(7)
    ### General
    Nmetabos <- 20L
    Nrates <- 10L

    Network <- NetworkCreateR(N = Nmetabos)
    ```

The network has a distribution similar to those of biological (metabolomics) networks: 
Most nodes have few connections and only a few have many. 

In the image of the connection matrix underlying the network, plotted below, we can see which nodes are connected to which.
```{r network image, dpi=dpi.HQ, fig.width=7, fig.height=5, out.width = figwidth.out}

PPforest

Maintainer: Natalia da Silva <natalia@...>

vignettes/PPforest-vignette.Rmd (Lines 363-394)

```{r side, fig.align="center", fig.cap= capside, fig.show='hold',fig.width = 5 ,fig.height = 5, warning = FALSE, echo=FALSE}
side <-  function(ppf, ang = 0, lege = "bottom", siz = 3,
                  ttl = "") {
  voteinf <- data.frame(ids = 1:length(ppf$train[, 1]), Type = ppf$train[, 1],
                      ppf$votes, pred = ppf$prediction.oob ) %>%
  tidyr::gather(Class, Probability, -pred, -ids, -Type)

  ggplot2::ggplot(data = voteinf, ggplot2::aes(Class, Probability, color = Type)) +
    ggplot2::geom_jitter(height = 0, size = I(siz), alpha = .5) +
    ggtitle(ttl) +
    ylab("Proportion") +
    ggplot2::scale_colour_brewer(type = "qual", palette = "Dark2") +
    ggplot2::theme(legend.position = lege, legend.text = ggplot2::element_text(angle = ang)) +
    ggplot2::labs(colour = "Class")
}
capside <-"Vote matrix representation by a jittered side-by-side dotplot. Each dotplot shows the proportion of times the case was predicted into the group, with 1 indicating that the case was always predicted to the group and 0 being never."
 side(pprf.crab) 
 ``` 
 &nbsp;

 &nbsp;

 A ternary plot is a triangular diagram that shows the proportion of three variables that sum to a constant, drawn using barycentric coordinates. Compositional data lie in a $(p-1)$-D simplex in $p$-space. 
 One advantage of ternary plots is that they are good for visualizing compositional data: the proportions of three variables can be shown in a two-dimensional space. 
 When we have three classes a ternary plot is well defined. With more than three classes the ternary plot idea needs to be generalized. @sutherland2000orca suggest the best approach to visualize compositional data is to project the data into the $(p-1)$-D space (a ternary diagram in $2$-D). This is the approach used to visualize the vote matrix information. 

 A ternary plot is a triangular diagram used to display compositional data with three components. More generally, compositional data can have any number of components, say $p$, and hence is contrained to a $(p-1)$-D simplex in $p$-space. The vote matrix is an example of compositional data, with $G$ components. 
&nbsp;

&nbsp;

```{r ternary, fig.align = "center",fig.cap = capter, fig.show = 'hold',fig.width = 7 ,fig.height = 4, warning = FALSE, echo=FALSE}

QuantumClone

Maintainer: Paul Deveau <quantumclone.package@...>

vignettes/Use_case.Rmd (Lines 141-148)

`````{r evol, echo = FALSE, warning = FALSE}
QuantumClone::evolution_plot(QuantumClone::QC_output,Sample_names = c("Timepoint_1","Timepoint_2"))

````

#### Recreate phylogenetic tree (when possible)
`````{r Tree, echo = TRUE, warning = FALSE,eval=TRUE}

RDFTensor

Maintainer: Abdelmoneim Amer Desouki <desouki@...>

vignettes/RDFTensor-Demo.Rmd (Lines 61-90)

```{r, echo=TRUE}
    print(sprintf('True positive rate:%.2f %%',100*sum(RecRes$TP)/length(RecRes$TP)))
    s=2#<affects> predicate 
    stats=NULL
    ijk=RecRes[[1]]$ijk
    val=RecRes[[1]]$val
    tp_flg=RecRes$TP

    for(thr in sort(unique(val[tp_flg&ijk[,2]==s]),decreasing=TRUE)){
        tp=sum(tp_flg[val>=thr & ijk[,2]==s])
        fp=sum(val>=thr & ijk[,2]==s)-tp
        fn=sum(ntnsr$X[[s]])-tp
        stats=rbind(stats,cbind(thr=thr,R=tp/(tp+fn),P=tp/(tp+fp),tp=tp,fn=fn,fp=fp))
    }
    HM=apply(stats,1,function(x){2/(1/x['P']+1/x['R'])})

     plot(stats[,'thr'],stats[,'R']*100,type='l',col='red',lwd=2,
    main=sprintf('Slice:%d, Predicate:<%s>, #Triples:%d, Max HM @ %.4f',s,ntnsr$P[s],sum(ntnsr$X[[s]]),
     stats[which.max(HM),'thr']), ylab="",xlab='Threshold ',cex.main=0.85,
                     xlim=c(0,max(thr,1)),ylim=c(0,100))
    abline(h = c(0,20,40,60,80,100), lty = 2, col = "grey")
    abline(v = seq(0.1,1,0.1),  lty = 2, col = "grey")
    lines(stats[,'thr'],stats[,'P']*100,col='blue',lwd=2)
    lines(stats[,'thr'],100*HM,col='green',lwd=2)
    # grid(nx=10, lty = "dotted", lwd = 1)
    legend(legend=c('Recall','Precision','Harmonic mean'),col=c('red','blue','green'),x=0.6,y=20,pch=1,cex=0.75,lwd=2)
    abline(v=stats[which.max(HM),'thr'],col='grey')
 ```


SEMsens

Maintainer: Walter Leite <walter.leite@...>

vignettes/Smith19_6phantom3.Rmd (Lines 33-43)

``````{r, message=FALSE, warning=FALSE}
# Load lavaan and SEMsens packages
require(lavaan)
require(SEMsens)
```

## Step 1 : Load data with names and indicate categorical variables

Smith-Adcock et. al (2019) shared the raw data and codebook, so we load the data with their names, and create names for categorical variables. 

```{r}

TriDimRegression

Maintainer: Alexander (Sasha) Pastukhov <pastukhov.alexander@...>

vignettes/transformation_matrices.Rmd (Lines 12-189)

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
````

For most transformations, we assume that we can compute only the translation coefficients ($a_i$). The only exception is the Euclidean transformation around a _single_ axis of rotation, which allows computing a single scaling and a single rotation coefficient. In all other cases, the values of the computed coefficients would depend on the assumed order of the individual transformations, making them no more than potentially misleading guesses.

## Bidimensional regression

### Translation

Number of parameters: 2

* translation: $a_1$, $a_2$

$$
\begin{bmatrix}
 1 & 0 & a_1 \\
 0 & 1 & a_2 \\
 0 & 0 & 1
\end{bmatrix}
$$

### Euclidean

Number of parameters: 4

* translation: $a_1$, $a_2$
* scaling: $\phi$
* rotation: $\theta$

$$
\begin{bmatrix}
 b_1 & b_2 & a_1 \\
-b_2 & b_1 & a_2 \\
 0   & 0   & 1
\end{bmatrix}
$$

The Euclidean transformation is a special case, where we can compute rotation ($\theta$) and the single scaling ($\phi$) coefficients, as follows:
$$
\phi = \sqrt{b_1^2 + b_2^2}\\
\theta = tan^{-1}(\frac{b_2}{b_1})
$$

### Affine

Number of parameters: 6

* translation: $a_1$, $a_2$
* scaling · rotation · sheer: $b_1$, $b_2$, $b_3$, $b_4$

$$
\begin{bmatrix}
 b_1 & b_2 & a_1 \\
 b_3 & b_4 & a_2 \\
 0   & 0   & 1
\end{bmatrix}
$$

### Projective

Number of parameters: 8

* translation: $a_1$, $a_2$
* scaling · rotation · sheer · projection: $b_1$...$b_6$

$$
\begin{bmatrix}
 b_1 & b_2 & a_1 \\
 b_3 & b_4 & a_2 \\
 b_5 & b_6 & 1
\end{bmatrix}
$$

## Tridimensional regression

### Translation

Number of parameters: 3

* translation: $a_1$, $a_2$, $a_3$

$$
\begin{bmatrix}
 1 & 0 & 0 & a_1 \\
 0 & 1 & 0 & a_2 \\
 0 & 0 & 1 & a_3 \\
 0 & 0 & 0 & 1
\end{bmatrix}
$$

### Euclidean

Number of parameters: 5

* translation: $a_1$, $a_2$, $a_3$
* scaling: $\phi$
* rotation: $\theta$

For all Euclidean rotations, we opted to use coefficient $b_3$ to code scaling ($\phi$), whereas $b_2 = sin(\theta)$ and $b_1=\phi~  cos(\theta)$. The coefficients are computed as follows:
$$
\phi = \sqrt{b_1^2 + b_2^2}\\
\theta = tan^{-1}(\frac{b_2}{b_1})
$$

#### Euclidean, rotation about x axis

Note that during fitting $\phi$ is computed from $b_1$ and $b_2$ on the fly.
$$
\begin{bmatrix}
 \phi & 0   & 0   & a_1 \\
 0    & b_1 &-b_2 & a_2 \\
 0    & b_2 & b_1 & a_3 \\
 0    & 0   & 0   & 1
\end{bmatrix}
$$

#### Euclidean, rotation about y axis

$$
\begin{bmatrix}
 b_1 & 0    & b_2 & a_1 \\
 0   & \phi & 0   & a_2 \\
-b_2 & 0    & b_1 & a_3 \\
 0   & 0    &  0   & 1
\end{bmatrix}
$$

#### Euclidean, rotation about z axis

$$
\begin{bmatrix}
 b_1 &-b_2 & 0    & a_1 \\
 b_2 & b_1 & 0    & a_2 \\
 0   & 0   & \phi & a_3 \\
 0   & 0   &  0   & 1
\end{bmatrix}
$$

### Affine

Number of parameters: 12

* translation: $a_1$, $a_2$,  $a_3$
* scaling · rotation · sheer: $b_1$...$b_9$

$$
\begin{bmatrix}
 b_1 & b_2 & b_3 & a_1 \\
 b_4 & b_5 & b_6 & a_2 \\
 b_7 & b_8 & b_9 & a_3 \\
 0   & 0   &  0   & 1
\end{bmatrix}
$$

### Projective

Number of parameters: 15

* translation: $a_1$, $a_2$,  $a_3$
* scaling · rotation · sheer · projection: $b_1$...$b_{12}$

$$
\begin{bmatrix}
 b_1    & b_2    & b_3    & a_1 \\
 b_4    & b_5    & b_6    & a_2 \\
 b_7    & b_8    & b_9    & a_3 \\
 b_{10} & b_{11} & b_{12} & 1
\end{bmatrix}
$$


UncertainInterval

Maintainer: Hans Landsheer <j.a.landsheer@...>

vignettes/UI-vignette.Rmd (Lines 202-216)

```{r fig.cap="Figure 6"}

plotMD(psa2b$d, t_tpsa, model='binormal', position.legend = 'topleft')
(res1=ui.binormal(psa2b$d, t_tpsa))
abline(v=res1$solution, col= 'red')

invBoxCox <- function(x, lambda)
  if (lambda == 0) exp(x) else (lambda*x + 1)^(1/lambda)
invBoxCox(res1$solution, p1$roundlam)

 ```
Of course, not all problems have disappeared; outliers are still there. The boxplot in figure 7 shows that the outliers in the lower tail are problematic: low scores < -3 indicate both a patient and a non-patient. These extreme scores in the sample also influence the estimates of the bi-normal distributions somewhat.

```{r fig.cap="Figure 7"}

VFP

Maintainer: Andre Schuetzenmeister <andre.schuetzenmeister@...>

vignettes/VFP_package_vignette.rmd (Lines 187-202)

```{r fit_VFP_model_3, echo=TRUE}
 tot.all <- fit.vfp(mat.total, 1:10)
 # 'summary' presents more details for multi-model objects 
 summary(tot.all)
 ```

# Plotting Precision Profiles

Our recommendation is to always fit all models and use the best one. Of course, if two fitted models are very
similar in their AIC, one can use the less complex model. The fitted model(s) is/are stored in a *VFP*-object. 
Calling the plot-method for these objects will always generate a precision profile on variance scale (VC),
which is the scale model fitting takes place on.

 ```{r plot_VFP_model_VC, echo=TRUE}

Maintainer: Andre Schuetzenmeister <andre.schuetzenmeister@...>

vignettes/VFP_package_vignette.rmd (Lines 87-97)

 ```{r perform_VCA_1, echo=TRUE}
library(VFP)
library(VCA)
# CLSI EP05-A3 example data
data(CA19_9)
```  

Now perform the variance component analysis (VCA) for each sample using batch-processing. Here, the *anovaVCA*
function from the R package *VCA* is used.

 ```{r perform_VCA_2, echo=TRUE}

boostr

Maintainer: Steven Pollack <steven@...>

vignettes/boostr_user_inputs.Rmd (Lines 121-136)

````{r, arcx4AndkNN, cache=FALSE}
boostr::boostWithArcX4(x = kNN_EstimationProcedure,
                       B = 3,
                       data = Glass,
                       metadata = list(learningSet="learningSet"),
                       .procArgs = list(k=5),
                       .boostBackendArgs = list(
                         .subsetFormula=formula(Type~.))
                       ) 
```
</div>

Estimation procedures like `svm_EstimationProcedure` <a href="#svmExample">above</a> are so common in `R` that `boostr` implements a Wrapper Generator, `boostr::buildEstimationProcedure`, explicitly for this design pattern. Hence you can skip passing a function to the `x` argument of `boostr::boost` and just pass in a list of the form `list(train=someFun, predict=someOtherFun)`. If you do this, the structure of the `.procArgs` argument changes to a list of lists. See <a href="#arcx4AndSvm">this example</a> where an svm is boosted according to arc-x4, and the list-style argument to `x` is used. Note, the structure of `.procArgs` is now `list(.trainArgs=list(...), .predictArgs=list(...))` where `.trainArgs` are named arguments to pass to the `train` component of `x` and `.predictArgs` are the named arguments to pass to the `predict` component of `x`. See the help documentation for `boostr::buildEstimationProcedure` for more information.

<div id="arcx4AndSvm">
```{r, arcx4AndSvm, cache=FALSE}

cjbart

Maintainer: Thomas Robinson <ts.robinson1994@...>

vignettes/cjbart-demo.Rmd (Lines 77-83)

````{r summary}
summary(het_effects)
```

We can plot the IMCEs, color coding the points by some covariate value, using the in-built `plot()` function:

```{r plot_imces}

coRanking

Maintainer: Guido Kraemer <gkraemer@...>

vignettes/coranking.Rmd (Lines 37-44)

```{r, fig.width = 10, fig.height = 10, out.width = "95%"}
scatterplot3d(data$x, data$y, data$z,
              xlab = "x", ylab = "y", zlab = "z",
              color = data$col)
 ```

Dimensionality reductions:
```{r, fig.show="hold", fig.width = 7, fig.height = 7, out.width = "45%"}

fivethirtyeight

Maintainer: Albert Y. Kim <albert.ys.kim@...>

vignettes/tame.Rmd (Lines 448-458)

```{r, fig.width=16/2.5,  fig.height=9/2}
library(tidyr)
drinks_tidy_US_FR <- drinks %>%
  filter(country %in% c("USA", "France")) %>% 
  gather(type, servings, -c(country, total_litres_of_pure_alcohol))
drinks_tidy_US_FR
````

This formatting of the data now allows it to be used as input to the `ggplot()` function to create an appropriate barplot in Figure \ref{fig:drinks}. Note that in this case, since the number of servings is pretabulated in the variable `servings`, which in turn is mapped to the y-axis, we use `geom_col()` instead of `geom_bar()` (`geom_col()` is equivalent to `geom_bar(stat = "identity")`).

```{r drinks, fig.width=16/2.5, fig.height=9/2.5, fig.align='center', fig.cap="USA vs France alcohol consumption."}

geeCRT

Maintainer: Hengshi Yu <hengshi@...>

vignettes/geeCRT.Rmd (Lines 79-134)

```{r geemaee_small, fig.keep="all", fig.width = 7, fig.height=4}

sampleSWCRT = sampleSWCRTSmall

### Individual-level id, period, outcome, and design matrix
id = sampleSWCRT$id; period = sampleSWCRT$period;
X = as.matrix(sampleSWCRT[, c('period1', 'period2', 'period3', 'period4', 'treatment')])
m = as.matrix(table(id, period)); n = dim(m)[1]; t = dim(m)[2]

### design matrix for correlation parameters
Z = createzCrossSec(m) 

### (1) Matrix-adjusted estimating equations and GEE 
### on continous outcome with nested exchangeable correlation structure

### MAEE
est_maee_ind_con = geemaee(y = sampleSWCRT$y_con, 
                           X = X, id  = id, Z = Z, 
                           family = "continuous", 
                           maxiter = 500, epsilon = 0.001, 
                           printrange = TRUE, alpadj = TRUE, 
                           shrink = "ALPHA", makevone = FALSE)

### GEE
est_uee_ind_con = geemaee(y = sampleSWCRT$y_con, 
                          X = X, id = id, Z = Z, 
                          family = "continuous", 
                          maxiter = 500, epsilon = 0.001, 
                          printrange = TRUE, alpadj = FALSE, 
                          shrink = "ALPHA", makevone = FALSE)

### (2) Matrix-adjusted estimating equations and GEE 
### on binary outcome with nested exchangeable correlation structure

### MAEE
est_maee_ind_bin = geemaee(y = sampleSWCRT$y_bin, 
                           X = X, id = id, Z = Z, 
                           family = "binomial", 
                           maxiter = 500, epsilon = 0.001, 
                           printrange = TRUE, alpadj = TRUE, 
                           shrink = "ALPHA", makevone = FALSE)
print(est_maee_ind_bin)

### GEE
est_uee_ind_bin = geemaee(y = sampleSWCRT$y_bin, 
                          X = X, id = id, Z = Z, 
                          family = "binomial", 
                          maxiter = 500, epsilon = 0.001, 
                          printrange = TRUE, alpadj = FALSE, 
                          shrink = "ALPHA", makevone = FALSE)

 ```

Then we have the following output: 
```{r set-options1, echo=FALSE, fig.keep="all", fig.width = 7, fig.height=4}

gen3sis

Maintainer: Oskar Hagen <oskar@...>

vignettes/create_config.Rmd (Lines 249-257)

>```{r eval=T, fig.width=6, fig.height=3.2}
n <- 100
hist(rweibull(n, shape = 1.5, scale = 133), col="black")
```

### ***Speciation***
The speciation iterates over every species separately, registers populations’ geographic occupancy (species range), and determines when geographic isolation between population clusters is higher than a user-defined threshold, triggering a lineage splitting event of cladogenesis. The clustering of occupied sites is based on the species’ dispersal capacity and the landscape connection costs. Over time, disconnected clusters gradually accumulate incompatibility, analogous to genetic differentiation. When the divergence between clusters is above the speciation threshold, those clusters become two or more distinct species, and a divergence matrix reset follows. On the other hand, if geographic clusters come into secondary contact before the speciation occurs, they coalesce and incompatibilities are gradually reduced to zero. In our example, speciation takes place after 2 time-steps of isolation and the divergence increase is the same for all species as indicated by *get_divergence_threshold*. Since our landscape consists of 1 myr time-steps, these 2 time-steps correspond to a span of 2 myr.

>```{r eval=FALSE}

gfmR

Maintainer: Brad Price <brad.price@...>

vignettes/gfmr.Rmd (Lines 117-123)

```{r}
mod
````

Finally we see the results of the tuning parameter selection with 5 groups.  We see the combination of
the Independent republican, democrat and independents.

logger

Maintainer: Gergely Daróczi <daroczig@...>

vignettes/customize_logger.Rmd (Lines 225-231)

```{r}
log_layout()
````

For more details on this, see the [Writing custom logger extensions](https://daroczig.github.io/logger/articles/write_custom_extensions.html) vignette.

```{r}

meshed

Maintainer: Michele Peruzzi <michele.peruzzi@...>

vignettes/univariate_gridded.Rmd (Lines 87-106)

````{r}
mcmc_keep <- 1000
mcmc_burn <- 1000
mcmc_thin <- 2

mesh_total_time <- system.time({
  meshout <- spmeshed(y, X, coords,
                      #axis_partition=c(10,10), #same as block_size=25
                      block_size = 25,
                      n_samples = mcmc_keep, n_burn = mcmc_burn, n_thin = mcmc_thin, 
                      n_threads = 4,
                      verbose = 0,
                      prior=list(phi=c(1,30))
  )})
```
We can now do some postprocessing of the results. We extract posterior marginal summaries for $\sigma^2$, $\phi$, $\tau^2$, and $\beta_2$. The model that `spmeshed` targets is a slight reparametrization of the above:^[At its core, `spmeshed` implements the spatial factor model $Y(s) = X(s)\beta + \Lambda v(s) + \epsilon(s)$ where $w(s) = \Lambda v(s)$ is modeled via linear coregionalization.]
$$ y = X \beta + \lambda w + \epsilon, $$
where $w\sim MGP$ has unitary variance. This model is equivalent to the previous one and in fact we find $\sigma^2=\lambda^2$. 

```{r}

meshed

Maintainer: Michele Peruzzi <michele.peruzzi@...>

vignettes/univariate_irregular_poisson.Rmd (Lines 86-106)

````{r}
mcmc_keep <- 1000
mcmc_burn <- 1000
mcmc_thin <- 1

mesh_total_time <- system.time({
  meshout <- spmeshed(y, X, coords,
                      family="poisson",
                      grid_size=c(40, 40),
                      block_size = 20,
                      n_samples = mcmc_keep, n_burn = mcmc_burn, n_thin = mcmc_thin, 
                      n_threads = 4,
                      verbose = 0,
                      prior=list(phi=c(1,30))
  )})
```
We can now do some postprocessing of the results. We extract posterior marginal summaries for $\sigma^2$, $\phi$, $\tau^2$, and $\beta_2$. The model that `spmeshed` targets is a slight reparametrization of the above:^[At its core, `spmeshed` implements the spatial factor model $Y(s) \sim \mathrm{Poisson}(\exp(X(s)\beta + \Lambda v(s)))$ where $w(s) = \Lambda v(s)$ is modeled via linear coregionalization.]
$$ \log(\eta) = X \beta + \lambda w, $$
where $w\sim MGP$ has unitary variance. This model is equivalent to the previous one and in fact we find $\sigma^2=\lambda^2$. Naturally, it is much more difficult to estimate parameters when data are counts.

```{r}

meshed

Maintainer: Michele Peruzzi <michele.peruzzi@...>

vignettes/univariate_gridded.Rmd (Lines 87-102)

````{r}
mcmc_keep <- 1000
mcmc_burn <- 1000
mcmc_thin <- 2

mesh_total_time <- system.time({
  meshout <- spmeshed(y, X, coords,
                      #axis_partition=c(10,10), #same as block_size=25
                      block_size = 25,
                      n_samples = mcmc_keep, n_burn = mcmc_burn, n_thin = mcmc_thin, 
                      n_threads = 4,
                      verbose = 0,
                      prior=list(phi=c(1,30))
  )})
```
We can now do some postprocessing of the results. We extract posterior marginal summaries for $\sigma^2$, $\phi$, $\tau^2$, and $\beta_2$. The model that `spmeshed` targets is a slight reparametrization of the above:^[At its core, `spmeshed` implements the spatial factor model $Y(s) = X(s)\beta + \Lambda v(s) + \epsilon(s)$ where $w(s) = \Lambda v(s)$ is modeled via linear coregionalization.]

mitml

Maintainer: Simon Grund <grund@...>

vignettes/Analysis.Rmd (Lines 108-126)

```{r}
fit <- with(implist, {
  lmer(MathAchiev ~ 1 + Sex + I.SES + G.SES + (1|ID))
})
```

This results in a list of fitted models, one for each of the imputed data sets.

## Pooling

The results obtained from the imputed data sets must be pooled in order to obtain a set of final parameter estimates and inferences.
In the following, we employ a number of different pooling methods that address common statistical tasks, for example, (a) estimating and testing individual parameters, (b) model comparisons, and (c) tests of constraints on one or several parameters.

#### Parameter estimates

Individual parameters are commonly pooled with the rules developed by Rubin (1987).
In `mitml`, Rubin's rules are implemented in the `testEstimates` function.
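For orientation, Rubin's rules combine a within-imputation and a between-imputation variance component. A minimal sketch of the pooling for a single parameter (illustrative only, not mitml's implementation):

```python
# Minimal sketch of Rubin's (1987) pooling rules for a single parameter
# estimated on m imputed data sets (illustrative only, not mitml's code).

def pool_rubin(estimates, variances):
    """Pool point estimates and squared standard errors across imputations."""
    m = len(estimates)
    qbar = sum(estimates) / m                  # pooled point estimate
    w = sum(variances) / m                     # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    t = w + (1 + 1 / m) * b                    # total variance
    return qbar, t ** 0.5                      # pooled estimate and its SE

est, se = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.04])
```

The between-imputation term is inflated by the factor $(1 + 1/m)$ to account for the finite number of imputations.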

```{r}

rSPDE

Maintainer: David Bolin <davidbolin@...>

vignettes/rspde.Rmd (Lines 295-302)

```{r}
mlik <- function(theta, Y, G, C, A) {
  return(-spde.matern.loglike(exp(theta[1]), exp(theta[2]), exp(theta[3]), exp(theta[4]),
                              Y = Y, G = G, C = C, A = A, d = 2, m=1))
}
```
We can now estimate the parameter using `optim`.
```{r, eval = run_inla}

rnetcarto

Maintainer: Guilhem Doulcier <guilhem.doulcier@...>

vignettes/getting-started.Rmd (Lines 145-168)

``` {r, echo=TRUE}
    input = matrix(0,6,2)
    input[1,1] = 1
    input[2,1] = 1
    input[3,1] = 1
    input[4,2] = 1
    input[5,2] = 1
    input[6,2] = 1
    rownames(input) = c("A","B","C","D","E","F")
    colnames(input) = c("Team 1", "Team 2")
    print(input)
    ```

## List format
If you choose the **list format**, your network must be formatted as an R list. The first two elements must be vectors giving the labels of the nodes at either end of each edge. The third element is a vector of the edge weights; the weights are optional and are all set to one if the list contains only the first two elements.

### Example 1: Unweighted network:

``` {r, echo=TRUE}

sitree

Maintainer: Clara Anton Fernandez <caf@...>

vignettes/TestEquations.Rmd (Lines 78-168)

```{r}  

library(sitree)
res <- sitree (tree.df   = tr,
                 stand.df  = fl,
                 functions = list(
                     fn.growth     = 'grow.dbhinc.hgtinc',
                     fn.mort       = 'mort.B2007',
                     fn.recr       = 'recr.BBG2008',
                     fn.management = 'management.prob',
                     fn.tree.removal = 'mng.tree.removal',
                     fn.modif      = NULL, 
                     fn.prep.common.vars = 'prep.common.vars.fun'
                 ),
                 n.periods = 5,
                 period.length = 5,
                 mng.options = NA,
                 print.comments = FALSE,
                 fn.dbh.inc = "dbhi.BN2009",
                 fn.hgt.inc =  "height.korf", 
                 fun.final.felling = "harv.prob",
                 fun.thinning      = "thin.prob",
                 per.vol.harv = 0.83
                 )

 ## getTrees(i, j)  -- obtains the information of the i trees, on the j periods,
 ## by default it selects all. It does not display it, it passes the value.
 ## It returns a list with elements plot.id, treeid, dbh.mm, height.dm, yrs.sim,
 ## tree.sp

 get.some.trees <- res$live$getTrees(1:3, 2:5)

 ## extractTrees(i)  -- extracts the i trees; it removes the trees from the
 ## original object and passes the information on. It returns a list.

 dead <- res$live$extractTrees(4:7)

 ## addTrees(x) -- x should be a list

 res$live$addTrees(dead)

 ## last.time.alive. It checks when was the last DBH measured.
 new.dead.trees <- trListDead$new(
      data = dead,
      last.measurement = cbind(
        do.call("dead.trees.growth"
              , args = list(
                  dt     = dead,
                  growth = data.frame(dbh.inc.mm     = rep(3, 4),
                                      hgt.inc.dm  = rep(8, 4)),
                  mort   = rep(TRUE, 4),
                  this.period = "t2")
                ),
        found.dead = "t3"
      ),
      nperiods = res$live$nperiods
      )

 lta <- new.dead.trees$last.time.alive()

 ## which in this case differs from the data stored under the last.measurement
 ## field because we have defined it artificially above as "t3"
 lta
 new.dead.trees$last.measurement$found.dead
 ## But we can remove the data from the periods after it was found dead
 new.dead.trees$remove.next.period("t3")
 new.dead.trees$remove.next.period("t4")
 new.dead.trees$remove.next.period("t5")
 ## and now results do match
 lta <- new.dead.trees$last.time.alive()
 ## last time it was alive was in "t2"
 lta
 ## and it was found dead in "t3"
 new.dead.trees$last.measurement$found.dead

  ```

# Taking a look at the results of the simulation

 We first use the example functions available in the sitree package
 to run a 15-period simulation.

```{r}  

superb

Maintainer: Denis Cousineau <denis.cousineau@...>

vignettes/TheMakingOf.Rmd (Lines 154-169)

just a different name for the variable on the horizontal axis: 

```{r, message=FALSE, echo=TRUE, eval=TRUE}
ornateWS <- list(
    xlab("Moment"),                                                #different!
    scale_x_discrete(labels=c("Pre\ntreatment", "Post\ntreatment")), 
    ylab("Statistics understanding"),
    coord_cartesian( ylim = c(75,125) ),
    geom_hline(yintercept = 100, colour = "black", size = 0.5, linetype=2),
    theme_light(base_size = 16) +
    theme( plot.subtitle = element_text(size=12))
)
```

The difference in the present example is that the data are from a within-subject design
with two repeated measures. The dataset must be in a wide format, e.g., 

Maintainer: Denis Cousineau <denis.cousineau@...>

vignettes/VignetteA.Rmd (Lines 208-220)

        upperRefUpperLimit = upperRefLimit + limitZ * se;

        shap_normalcy = shapiro.test(data);
        shap_output = paste(c("Shapiro-Wilk: W = ", format(shap_normalcy$statistic,
                            digits = 6), ", p-value = ", format(shap_normalcy$p.value,
                            digits = 6)), collapse = "");
        ks_normalcy = suppressWarnings(ks.test(data, "pnorm", m = mean, sd = sd));
        ks_output = paste(c("Kolmogorov-Smirnov: D = ", format(ks_normalcy$statistic,
                            digits = 6), ", p-value = ", format(ks_normalcy$p.value,
                            digits = 6)), collapse = "");
        if(shap_normalcy$p.value < 0.05 | ks_normalcy$p.value < 0.05){
            norm = list(shap_output, ks_output);

targeted

Maintainer: Klaus K. Holst <klaus@...>

vignettes/riskregression.Rmd (Lines 308-316)

```{r}
head(iid(fit))
````

SessionInfo
============

```{r}

valueEQ5D

Maintainer: Sheeja Manchira Krishnan <sheejamk@...>

vignettes/User_Guide.Rmd (Lines 187-196)

````{r }
value_5L(EQ5D5Ldata, "eq5d5L.q1", "eq5d5L.q2", "eq5d5L.q3", "eq5d5L.q4", "eq5d5L.q5", "England", NULL, NULL)
value_5L(EQ5D5Ldata, "eq5d5L.q1", "eq5d5L.q2", "eq5d5L.q3", "eq5d5L.q4", "eq5d5L.q5", "England", "male", c(10, 70))
value_5L(EQ5D5Ldata, "eq5d5L.q1", "eq5d5L.q2", "eq5d5L.q3", "eq5d5L.q4", "eq5d5L.q5", "Indonesia", "male", NULL)
value_5L(EQ5D5Ldata, "eq5d5L.q1", "eq5d5L.q2", "eq5d5L.q3", "eq5d5L.q4", "eq5d5L.q5", "Ireland", NULL, c(10, 70))
```

## Examples- Mapping EQ-5D-5L scores to EQ-5D-3L index values for UK and other countries
Each of the calls below will give the same EQ-5D-3L index values while valuing the EQ-5D-5L individual scores 1, 2, 3, 4, 5 for mobility, self care, social activity, pain and discomfort, and anxiety respectively.
```{r }

ztpln

Maintainer: Masatoshi Katabuchi <mattocci27@...>

vignettes/ztpln.rmd (Lines 101-114)

````{r, eval = T}
library(dplyr)
library(tidyr)
library(ggplot2)
set.seed(123)

rztpln(n = 10, mu = 0, sig = 1)

rztpln(n = 10, mu = 6, sig = 4)
```

We can also generate mixture of ZTPLN random variates.

````{r, eval = T}

abbyyR

Maintainer: Gaurav Sood <gsood07@...>

vignettes/overview.Rmd (lines 81-83):

    ```{r, deleteTask, eval=FALSE}
    deleteTask(taskId="task_id")
    ```

aGE

Maintainer: Tianzhong Yang <tianzhong.yang@...>

vignettes/About.Rmd (lines 39-41):

```{r, eval=F} 
 setwd('local folder')
 ```  

ANOVAreplication

Maintainer: M.A.J. Zondervan-Zwijnenburg <m.a.j.zwijnenburg@...>

vignettes/vignette_ANOVAreplication.Rmd (lines 19-21):

  ```{r library, echo=TRUE, message=FALSE, warning=FALSE}
library(ANOVAreplication)
```

bytescircle

Maintainer: Roberto S. Galende <roberto.s.galende@...>

vignettes/bytescircle.Rmd (lines 189-191):

``````{r eval=FALSE}
bytescircle( "bytecircle.Rmd", plot=1, ascii=TRUE )
```

CNAIM

Maintainer: Emil Larsen <mohsin@...>

vignettes/cnaim.Rmd (lines 38-40):

  ```{r echo=F, message=F, class.source='highlight',comment=""}
library(CNAIM)
```

coffee

Maintainer: Maarten Blaauw <maarten.blaauw@...>

vignettes/intro.Rmd (lines 52-55):

```{r, eval=FALSE, fig.width=4, fig.asp=1.3}
sim.strat()
strat(burnin=0, thinning=1, its=2000, init.ages=seq(3000, 4000, length=5))
````

CTP

Maintainer: Paul Jordan <paul.jordan@...>

vignettes/closed_testing_procedure.Rmd (lines 244-249):

 ```{r}
 library(survival)
        data(ovarian)

        print(survdiff(Surv(futime,fustat)~rx, data=ovarian))
```

ecmwfr

Maintainer: Koen Hufkens <koen.hufkens@...>

vignettes/advanced_vignette.Rmd (lines 54-69):

```{r eval = FALSE}
list(stream  = "oper",
     levtype = "sfc",
     param   = "167.128",
     dataset = "interim",
     step    = "0",
     grid    = "0.75/0.75",
     time    = "00",
     date    = "2014-07-01/to/2014-07-02",
     type    = "an",
     class   = "ei",
     area    = "73.5/-27/33/45",
     format  = "netcdf",
     target  = "tmp.nc") %>%
  wf_request(user = user, path = "~")
````

EGRET

Maintainer: Laura DeCicco <ldecicco@...>

vignettes/EGRET.Rmd (lines 801-803):

  ```{r multiPlotDataOverview, echo=TRUE,out.width="100%", fig.cap="`multiPlotDataOverview(eList, qUnit=1)`"}
multiPlotDataOverview(eList, qUnit=1)
```

ensembleR

Maintainer: Saurav Kaushik <sauravkaushik8@...>

vignettes/Introduction_to_ensembleR.Rmd (lines 18-20):

  ```{r,eval=FALSE}
install.packages("ensembleR", dependencies = c("Imports", "Suggests"))
```

FSinR

Maintainer: Alfonso Jiménez-Vílchez <i52jivia@...>

vignettes/methods.Rmd (lines 12-17):

  ```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

gargle

Maintainer: Jennifer Bryan <jenny@...>

vignettes/gargle-auth-in-client-package.Rmd (lines 246-256):

```{r eval = FALSE}
library(googledrive)
library(googlesheets4)

drive_auth(email = "jane_doe@example.com") # gets a suitably scoped token
                                           # and stashes for googledrive use

sheets_auth(token = drive_token())         # registers token with googlesheets4

# now work with both packages freely ...
````

junctions

Maintainer: Thijs Janzen <thijsjanzen@...>

vignettes/junctions_vignette.Rmd (lines 13-16):

  ```{r setup, include=FALSE}
library(junctions)
knitr::opts_chunk$set(echo = TRUE)
```

junctions

Maintainer: Thijs Janzen <thijsjanzen@...>

vignettes/phased_and_unphased_data.Rmd (lines 13-17):

  ```{r setup, include=FALSE}
library(junctions)
library(Rcpp)
knitr::opts_chunk$set(fig.width = 7, echo = TRUE)
```

leiden

Maintainer: S. Thomas Kelly <tomkellygenetics@...>

vignettes/benchmarking.Rmd (lines 151-155):

````{python, eval=module}
partition = la.find_partition(G, la.CPMVertexPartition, resolution_parameter = 0.05)
print(partition)
partition
```

liger

Maintainer: Jean Fan <jeanfan@...>

vignettes/gsea.Rmd (lines 85-87):

```{r, echo=TRUE}
sessionInfo()
````

liger

Maintainer: Jean Fan <jeanfan@...>

vignettes/permpvals.Rmd (lines 67-69):

```{r, echo=TRUE}
sessionInfo()
````

liger

Maintainer: Jean Fan <jeanfan@...>

vignettes/simulation.Rmd (lines 91-93):

```{r, echo=TRUE}
sessionInfo()
````

link2GI

Maintainer: Chris Reudenbach <reudenbach@...>

vignettes/link2GI2.Rmd (lines 122-135):

```{r, eval=FALSE}
 require(link2GI)
 require(sf)

 # get  data
 nc <- st_read(system.file("shape/nc.shp", package="sf"))

 # Automatic search and find of GRASS binaries
 # using the nc sf data object for spatial referencing
 # This is the highly recommended linking procedure for on the fly jobs
 # NOTE: if more than one GRASS installation is found, the highest version will be chosen

 grass<-linkGRASS7(nc,returnPaths = TRUE)
 ```

microsamplingDesign

Maintainer: Adriaan Blommaert <adriaan.blommaert@...>

vignettes/microsamplingDesign.Rmd (lines 163-165):

 ```{r constructModel, echo = FALSE , fig.cap  = "Construct a PK model" , out.width = "500px" , fig.align = "center" }
knitr::include_graphics( "appPictures/modifyParameters.png" )
```

mmpf

Maintainer: Zachary Jones <zmj@...>

vignettes/mmpf.Rmd (lines 44-49):

```{r, fig.width = 7, fig.height = 5}
mp = marginalPrediction(swiss[, -1], "Education", c(10, 5), fit, aggregate.fun = identity)
mp

ggplot(melt(data.frame(mp), id.vars = "Education"), aes(Education, value, group = variable)) + geom_point() + geom_line()
````

msde

Maintainer: Martin Lysy <mlysy@...>

vignettes/msde-quicktut.Rmd (lines 81-87):

    ```{r, eval = FALSE}
sde.drift <- function(x, theta) {
  dr <- c(theta[1]*x[1] - theta[2]*x[1]*x[2], # alpha * H - beta * H*L
          theta[2]*x[1]*x[2] - theta[3]*x[2]) # beta * H*L - gamma * L
  dr
}
```

mully

Maintainer: Zaynab Hammoud <zaynabhassanhammoud@...>

vignettes/mully-pdf.Rmd (lines 176-179):

````{r,eval=FALSE}
g=mully::demo()
addEdge(g,"dr3","g2",attributes=list(name="newEdge"))
```

mully

Maintainer: Zaynab Hammoud <zaynabhassanhammoud@...>

vignettes/mully-vignette.Rmd (lines 176-179):

````{r,eval=FALSE}
g=mully::demo()
addEdge(g,"dr3","g2",attributes=list(name="newEdge"))
```

nsrr

Maintainer: John Muschelli <muschellij2@...>

vignettes/dictionary.Rmd (lines 10-15):

  ```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
)
```

ph2rand

Maintainer: Michael Grayling <michael.grayling@...>

vignettes/ph2rand.Rmd (lines 804-806):

  ```{r eg52_1, cache = TRUE}
citation("ph2rand")
```

pomdp

Maintainer: Michael Hahsler <mhahsler@...>

vignettes/POMDP.Rmd (lines 236-246):

 ```{r, eval = FALSE}
 reward = list(
   "action1" = list(
      "state1" = matrix(c(1, 2, 3, 4, 5, 6) , nrow = 3 , byrow = TRUE), 
      "state2" = matrix(c(3, 4, 5, 2, 3, 7) , nrow = 3 , byrow = TRUE), 
      "state3" = matrix(c(6, 4, 8, 2, 9, 4) , nrow = 3 , byrow = TRUE)), 
   "action2" = list(
      "state1" = matrix(c(3, 2, 4, 7, 4, 8) , nrow = 3 , byrow = TRUE), 
      "state2" = matrix(c(0, 9, 8, 2, 5, 4) , nrow = 3 , byrow = TRUE), 
      "state3" = matrix(c(4, 3, 4, 4, 5, 6) , nrow = 3 , byrow = TRUE)))
  ```

quantregGrowth

Maintainer: Vito M. R. Muggeo <vito.muggeo@...>

vignettes/quantregGrowth.Rmd (lines 51-53):

 ```{r}
charts(o, k=c(10,10.5,11,16,17)) #the quantile at the specified k values
```

radsafer

Maintainer: Mark Hogue <mark.hogue.chp@...>

vignettes/Introduction_to_radsafer.Rmd (lines 126-128):

  ```{r echo = TRUE}
  RN_find_parent("Th-230")
```

rbacon

Maintainer: Maarten Blaauw <maarten.blaauw@...>

vignettes/priorssettings.Rmd (lines 27-29):

```{R, eval=FALSE}
Bacon('RLGH3', acc.mean=50, acc.shape=100)
````

rcdk

Maintainer: Zachary Charlop-Powers <zach.charlop.powers@...>

vignettes/using-rcdk.Rmd (lines 141-146):

 ```{r}
smiles <- c('CCC', 'c1ccccc1', 'CCc1ccccc1CC(C)(C)CC(=O)NC')
mols <- parse.smiles(smiles)
get.smiles(mols[[3]], smiles.flavors(c('UseAromaticSymbols')))
get.smiles(mols[[3]], smiles.flavors(c('Generic','CxSmiles')))
```

rSEA

Maintainer: Mitra Ebrahimpoor<mitra.ebrahimpoor@...>

vignettes/rSEA_vignette.Rmd (lines 266-276):

```{R pathlist_chunk4, eval = FALSE}
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")

BiocManager::install("org.Mm.eg.db")

library(org.Mm.eg.db)
ls("package:org.Mm.eg.db")
columns(org.Hs.eg.db)

````

sitmo

Maintainer: James Balamuta <balamut2@...>

vignettes/big_crush_test.Rmd (lines 26-3806):

```{r eval = F, engine='bash'}
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                 Starting BigCrush
                 Version: TestU01 1.2.3
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

***********************************************************
Test smarsa_SerialOver calling smultin_MultinomialOver

....

========= Summary results of BigCrush =========

 Version:          TestU01 1.2.3
 Generator:        sitmo
 Number of statistics:  160
 Total CPU time:   03:44:06.82

 All tests were passed
 ```

sortable

Maintainer: Andrie de Vries <apdevries@...>

vignettes/novel_solutions.Rmd (lines 10-15):

  ```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

SPOT

Maintainer: Thomas Bartz-Beielstein <tbb@...>

vignettes/SPOTVignetteNutshell.Rmd (lines 332-359):

```{r, constraintsRun, eval = FALSE}
res <- spot(
  x = x0,
  fun = funBaBSimHospital,
  lower = a,
  upper = b,
  verbosity = 0,
  control = list(
    funEvals = 2 * funEvals,
    noise = TRUE,
....
    model =  buildKriging,
    plots = FALSE,
    progress = TRUE,
    directOpt = optimNLOPTR,
    directOptControl = list(funEvals = 0),
    eval_g_ineq = g
  )
)
print(res)
````

ssdtools

Maintainer: Joe Thorley <joe@...>

vignettes/faqs.Rmd (lines 12-17):

  ```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```

stationery

Maintainer: Paul Johnson <pauljohn@...>

vignettes/code_chunks.Rmd (lines 753-762):

 ```{r outreg, results='asis'}
set.seed(234234)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100), y = rnorm(100))
library(rockchalk)
m1 <- lm(y ~ x1, data = dat)
m2 <- lm(y ~ x1 + x2, data = dat)
vl <- c("x1" = "Excellent Predictor", "x2" = "Adequate Predictor")
outreg(list("First Model" = m1, "Second Model" = m2), varLabels = vl,
       tight = FALSE, type = "latex")
```

StructFDR

Maintainer: Jun Chen <chen.jun2@...>

vignettes/StructFDR.Rmd (lines 25-30):

 ```{r load_package, results="hide", message=FALSE, cache=FALSE} 
require(StructFDR)
require(ape)
require(ggplot2)
require(reshape)
```

textTinyR

Maintainer: Lampros Mouselimis <mouselimislampros@...>

vignettes/functionality_of_textTinyR_package.Rmd (lines 732-746):

```{r, eval = F, echo = T}

res_adj

5 x 9 sparse Matrix of class "dgCMatrix"
          planets           by            X       solar          and          as  ...... 
[1,] -0.005818773 -0.001939591 -0.001939591 0.004747735 -0.001939591 0.007121603  ......      
[2,] -0.006511484 -0.003255742 -0.003255742 0.003984706 -0.006511484 0.007969413  ......       
[3,] -0.006880059 -0.010320088 -0.003440029 .           -0.003440029 .            ...... 
[4,] -0.006589936 -0.006589936 -0.002196645 .           -0.008786581 0.008065430  ...... 
[5,] -0.013405997 -0.002681199 -0.002681199 0.003281523 -0.008043598 .            ...... 

````

triebeard

Maintainer: Oliver Keyes <ironholds@...>

vignettes/r_radix.Rmd (lines 27-36):

```{r, eval=FALSE}
library(triebeard)
trie <- trie(keys = c("AO", "AEO", "AAI", "AFT", "QZ", "RF"),
             values = c("Audobon", "Atlanta", "Ann Arbor", "Austin", "Queensland", "Raleigh"))

longest_match(trie = trie, to_match = labels)

 [1] "Audobon"    "Atlanta"    "Ann Arbor"  "Austin"     "Queensland" "Queensland" "Raleigh"    "Audobon"    "Austin"    
[10] "Queensland"
````

TRMF

Maintainer: Chad Hammerquist <chammerquist@...>

vignettes/TRMF-package.Rmd (lines 43-45):

  ```{r,eval=FALSE}
obj = create_TRMF(A)
```

tvem

Maintainer: John J. Dziak <dziakj1@...>

vignettes/vignette-tvem.Rmd (lines 34-36):

```{r,eval=FALSE}
install.packages("tvem")
````

vtree

Maintainer: Nick Barrowman <nbarrowman@...>

vignettes/vtree.Rmd (lines 589-591):

```{r, eval=FALSE}
tkeep=list(list(Sex="M",Severity=c("Moderate","Severe")),list(Sex="F",Severity="Mild"))
  ```

wdnr.gis

Maintainer: Paul Frater <paul.frater@...>

vignettes/wdnr.gis-intro.Rmd (lines 121-123):

 ```{r, eval = TRUE}
list_services("FM_Trout")
```

zonator

Maintainer: Joona Lehtomaki <joona.lehtomaki@...>

vignettes/zonator-project.Rmd (lines 303-305):

 ```{r print-groups}
groups(variant.caz)
```
yihui commented 2 years ago

If anyone is running into this error ("attempt to use zero-length variable name"), you can install the development version of knitr and recompile your document:

remotes::install_github('yihui/knitr')

The new error message should be much clearer. Please feel free to let me know if you still don't know how to fix it. Thanks for your patience!
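A rough way to scan a vignette for mismatched fences yourself is to pair up fence lines and compare their backtick counts and leading prefixes. This is an illustrative sketch only, not knitr's actual parser:

```python
# Rough scan of an R Markdown source for paired chunk fences whose backtick
# counts or leading prefixes (indentation/blockquote) disagree. Illustrative
# sketch only -- knitr's real parser is more involved.
import re

FENCE = re.compile(r'^(\s*>?\s*)(`{3,})(\{.*\})?\s*$')

def check_fences(lines):
    issues, opener = [], None  # opener = (lineno, prefix, backticks)
    for i, line in enumerate(lines, 1):
        m = FENCE.match(line)
        if m is None:
            continue
        prefix, ticks = m.group(1), m.group(2)
        if opener is None:
            opener = (i, prefix, ticks)
        else:
            j, p, t = opener
            if ticks != t or prefix != p:
                issues.append((j, t, i, ticks))
            opener = None
    return issues

# A mismatched pair like the examples above: opened with four backticks,
# closed with three.
src = [
    "````{r gmt_map_show, echo = FALSE}",
    "knitr::kable(names(ACSNMineR::ACSN_maps))",
    "```",
]
print(check_fences(src))  # [(1, '````', 3, '```')]
```

Each reported tuple gives the opening line and its backticks followed by the closing line and its backticks, so you know which delimiter to adjust.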

github-actions[bot] commented 2 years ago

This old thread has been automatically locked. If you think you have found something related to this, please open a new issue by following the issue guide (https://yihui.org/issue/), and link to this old issue if necessary.