Location | Original | Possible correction / Problem
-- | -- | --
Chapter 1 | One is what could called […] | One is what could be called […]
Chapter 2 | In this chapter we will discuss some of these tasks including, checking model assumptions, diagnosing inference results and model comparison. | comma after "tasks" instead of after "including"?
Chapter 2, description of Fig. 2.6 | In the first panel, the curve with the solid line is the KDE of the proportion of simulations of predicted values with mean values less or equal than the observed data. | Isn't just the KDE of the test statistic T(y*\|y) (= mean) plotted here?
Chapter 2 | […] model_0 specified a posterior distribution […] | "specifies", because the rest of the sentence also uses present tense
Chapter 2 | The quantity defined by Equation (2.5) (or that quantity multiplied by some constant) is usually known as the deviance, and it is use in both Bayesians and non-Bayesians contexts. | "used" instead of "use"
Chapter 3, description of Fig. 3.3 | The vertical lines are the empirical mean and standard deviation. | The description mentions "vertical lines", but no vertical lines are visible in the plots.
Chapter 3 | Take a moment to compare the estimate of the mean with the summary mean shows for each species in Table 3.1. | "shows" should be "shown"
Chapter 3, Sec. "Linear Regression" | We call this a linear regression because the parameters (not the covariates) enter the model in a linear fashion. | To my understanding, "coefficients", "covariates" and "parameters" were defined as synonyms three sentences before. So, this sentence does not make sense to me.
Chapter 3, Sec. "Linear Regression" | From our posterior estimate we can state that if we saw an Adelie penguin with a 0 mm flipper length we would expect the mass of this impossible penguin to somewhere between -4213 and -546 grams. | 1. The numbers -4213 and -546 are different from the HDI numbers in Fig. 3.8 left (-4151 and -510) 2. "be" is missing after "to"
Chapter 3, Sec. "Linear Regression" | For example, in our intercept per penguin model (Code Block mass_forest_plot), instead of mu = μ[species.codes] we can use pandas.get_dummies to parse the categorical information into a design matrix mu = pd.get_dummies(penguins["species"]) @ μ. where @ is a Python operator for performing matrix multiplication. | 1. Sounds like mu is the design matrix, not pd.get_dummies(penguins["species"]) 2. Comma instead of period after the last μ
Chapter 3, Sec. "Linear Regression" | In this case we will opt for a centering transformation, which takes a set a value and centers its mean value at zero as shown in Code Block flipper_centering. | […] which takes a set of values […]
Chapter 3, Sec. "Linear Regression" | In this case our estimate of has dropped a mean of 462 grams in our no covariate model defined in Code Block penguin_mass to a mean value 298 grams from the linear model defined in Code Block penguin_mass_multi that includes flipper length and sex as a covariates. | "of" missing in "a mean value 298 grams"
Chapter 3, description of Fig. 3.15 | Fig. 46 By incorporating sex as a covariate in model_penguin_mass_categorical the estimated distribution of from this model is centered around 300 grams, which lower value than estimated by our fixed mean model and our single covariate model. This figure is generated from Code Block forest_multiple_models. | "[…], which lower value than estimated […]" should be "[…], which is lower than the value estimated […]"
Chapter 3, Sec. "Multiple Linear Regression" | it is easy to condition on new predictors, which useful for counterfactual analyses. | "is" missing in front of "useful"
Chapter 3, Sec. "GLM" | We are still dealing a linear model here […] | "with" missing in front of "a linear model"
Chapter 3, Sec. "GLM" | In our classifying penguins example we find it reasonable to equally expect a Gentoo penguin […] | "Chinstrap" instead of "Gentoo" (see last model)
Chapter 3, Sec. "GLM" | […] and Fig. 3.22.A separation […] | Missing space before "A separation "
Chapter 3, Sec. "GLM" | […] if we were to pick a random penguin from Adelie or Chinstrap penguinsthe probability that […] | Missing space in "penguinsthe"
Chapter 3, Sec. "Picking Priors in Regression" | Given these choices we can write our model in Code Block uninformative_prior_sex_ratio) […] | unnecessary ")"
Chapter 3, Sec. "Picking Priors in Regression" | This is not a fully uninformative priors […] | plural/singular
Chapter 3, Notebook, Code for Figure 3.24 | # Take 10 sample from posterior num_samples = 50 | Comment says 10 samples from the posterior, but 50 samples are drawn from the prior
Chapter 3, Notebook, Code for Figure 3.24 | # Take 10 sample from posterior num_samples = 50 | Comment says 10 samples, but 50 samples are drawn from the posterior
Chapter 3, Sec. "Picking Priors in Regression" | Plotting our posterior samples the concentration of coefficients is smaller and the plotted posterior lines fall into bounds that more reasonable when considering possible ratios. | "are" missing after "bounds that"
Chapter 3, E7 | Note that there are divergence with the original parameterization. | divergence should be plural
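A minimal sketch of the design-matrix point from the Chapter 3 "Linear Regression" entry above, assuming a `penguins` DataFrame with a `species` column and a per-species coefficient vector `μ` (variable names follow the book's code blocks; the numbers are made-up placeholders, not the book's estimates):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the penguins data; only the species column matters here
penguins = pd.DataFrame({"species": ["Adelie", "Chinstrap", "Gentoo", "Adelie"]})
species = pd.Categorical(penguins["species"])

# Placeholder per-species intercepts (in the book these come from the posterior)
μ = np.array([3700.0, 3730.0, 5080.0])

# Indexing formulation used in the intercept-per-species model
mu_indexed = μ[species.codes]

# Equivalent formulation: pd.get_dummies builds the design matrix,
# and mu is the product of that design matrix with the coefficient vector
design_matrix = pd.get_dummies(penguins["species"], dtype=float)
mu = design_matrix.to_numpy() @ μ

# Both formulations give the same per-row mean
assert np.allclose(mu_indexed, mu)
```

Read this way, the erratum's first point is just a wording issue: `pd.get_dummies(penguins["species"])` is the design matrix, and `mu` is what results from multiplying it by `μ`.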
<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:x="urn:schemas-microsoft-com:office:excel" xmlns="http://www.w3.org/TR/REC-html40">
Location | Original | Possible correction / Problem -- | -- | -- Chapter 1 | One is what could called […] | One is what could be called […] Chapter 2 | In this chapter we will discuss some of these tasks including, checking model assumptions, diagnosing inference results and model comparison. | comma after "tasks" instead after "including"? Chapter 2, description of Fig. 2.6 | In the first panel, the curve with the solid line is the KDE of the proportion of simulations of predicted values with mean values less or equal than the observed data. | Isn't just the KDE of the test statistics T(y*\|y) (= mean) plotted here? Chapter 2 | […] model_0 specified a posterior distribution […] | "specifies", because the rest of the sentence also uses present tense Chapter 2 | The quantity defined by Equation (2.5) (or that quantity multiplied by some constant) is usually known as the deviance, and it is use in both Bayesians and non-Bayesians contexts. | "used" instead of "use" Chapter 3, description of Fig. 3.3 | The vertical lines are the empirical mean and standard deviation. | In the description "vertical lines" are mentioned, but not visible in the plots. Chapter 3 | Take a moment to compare the estimate of the mean with the summary mean shows for each species in Table 3.1. | "shows" should be "shown" Chapter 3, Sec. "Linear Regression" | We call this a linear regression because the parameters (not the covariates) enter the model in a linear fashion. | To my understanding, "coefficients", "covariates" and "parameters" were defined as synonyms three sentences before. So, this sentence does not make sense to me. Chapter 3, Sec. "Linear Regression" | From our posterior estimate we can state that if we saw an Adelie penguin with a 0 mm flipper length we would expect the mass of this impossible penguin to somewhere between -4213 and -546 grams. | 1. The numbers -4213 and -546 are different from the HDI numbers in Fig. 3.8 left (-4151 and -510) 2. "be" is missing after "to" Chapter 3, Sec. "Linear Regression" | For example, in our intercept per penguin model (Code Block mass_forest_plot), instead of mu = μ[species.codes] we can use pandas.get_dummies to parse the categorical information into a design matrix mu = pd.get_dummies(penguins["species"]) @ μ. where @ is a Python operator for performing matrix multiplication. | 1. Sounds like mu is the design matrix, not pd.get_dummies(penguins["species"]) 2. Comma instead of period after the last μ Chapter 3, Sec. "Linear Regression" | In this case we will opt for a centering transformation, which takes a set a value and centers its mean value at zero as shown in Code Block flipper_centering. | […] which takes a set of values […] Chapter 3, Sec. "Linear Regression" | In this case our estimate of has dropped a mean of 462 grams in our no covariate model defined in Code Block penguin_mass to a mean value 298 grams from the linear model defined in Code Block penguin_mass_multi that includes flipper length and sex as a covariates. | "of" missing in "a mean value 298 grams" Chapter 3, description of Fig. 3.15 | Fig. 46 By incorporating sex as a covariate in model_penguin_mass_categorical the estimated distribution of from this model is centered around 300 grams, which lower value than estimated by our fixed mean model and our single covariate model. This figure is generated from Code Block forest_multiple_models. | "[…], which lower value than estimated […]" should be "[…], which is lower than the value estimated […]" Chapter 3, Sec. 
"Multiple Linear Regression" | it is easy to condition on new predictors, which useful for counterfactual analyses. | "is" missing in front of "useful" Chapter 3, Sec. "GLM" | We are still dealing a linear model here […] | "with" missing in front of "a linear model" Chapter 3, Sec. "GLM" | In our classifying penguins example we find it reasonable to equally expect a Gentoo penguin […] | "Chinstrap" instead of "Gentoo" (see last model) Chapter 3, Sec. "GLM" | […] and Fig. 3.22.A separation […] | Missing space before "A separation " Chapter 3, Sec. "GLM" | […] if we were to pick a random penguin from Adelie or Chinstrap penguinsthe probability that […] | Missing space in "penguinsthe" Chapter 3, Sec. "Picking Priors in Regression" | Given these choices we can write our model in Code Block uninformative_prior_sex_ratio) […] | unnecessary ")" Chapter 3, Sec. "Picking Priors in Regression" | This is not a fully uninformative priors […] | plural/singular Chapter 3, Notebook , Code for Figure 3.24 | # Take 10 sample from posterior num_samples = 50 | Comment says 10 samples from posterior, but drawn are 50 samples from prior Chapter 3, Notebook , Code for Figure 3.24 | # Take 10 sample from posterior num_samples = 50 | Comment says 10 samples from posterior, but drawn are 50 samples from posterior Chapter 3, Sec. "Picking Priors in Regression" | Plotting our posterior samples the concentration of coefficients is smaller and the plotted posterior lines fall into bounds that more reasonable when considering possible ratios. | "are" missing after "bounds that" Chapter 3, E7 | Note that there are divergence with the original parameterization. | divergence should be plural