Equation 3.10 currently reads:
$$ R^2 = \frac{V_{n=1}^{N}\, E[\hat{y}_n^s]}{V_{n=1}^{N}\, E[\hat{y}_n^s] + V_{s=1}^{S}(\hat{y}_n^s - y_n)} $$
I assume this is based on Gelman's 2019 article in The American Statistician, "R-squared for Bayesian Regression Models". From that paper, I believe it is clear that the intended formula is:
$$ R^2 = \frac{V_{n=1}^{N}\, E[\hat{y}_n^s]}{V_{n=1}^{N}\, E[\hat{y}_n^s] + E[V_{n=1}^{N}(\hat{y}_n^s - y_n)]} $$
That is, rather than taking the variance over samples of the residual at each data point (and presumably then taking the expectation of that variance over data points, although this isn't in the equation), you should take the variance of the residuals over data points, and then take the expectation of this over samples.
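To make the distinction concrete, here is a minimal numpy sketch of the corrected formula, assuming `y_hat` is an S × N matrix of posterior predictions (one row per posterior draw s, one column per data point n) and `y` is the length-N vector of observed outcomes; the function name and argument layout are my own:

```python
import numpy as np

def bayes_r2(y_hat, y):
    """Bayesian R^2 with the residual variance taken over data points,
    then averaged over posterior draws (the corrected formula)."""
    # Numerator: variance over data points of the posterior-mean
    # prediction, V_{n=1}^N E[y_hat_n^s]
    var_fit = np.var(y_hat.mean(axis=0))
    # Denominator's second term: for each draw s, the variance over
    # data points of the residuals, then the mean over draws:
    # E[ V_{n=1}^N (y_hat_n^s - y_n) ]
    var_res = np.var(y_hat - y, axis=1).mean()
    return var_fit / (var_fit + var_res)
```

Taking `np.var(..., axis=0)` of the residuals instead, as the current equation suggests, would compute the variance across draws at each fixed point, which is a different quantity.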
Thanks, and thanks for the book! Opher