Closed bob-carpenter closed 10 years ago
"Finding the solution to your own problem when explaining it to someone else happens so frquently in software development"
"where m(x) is an N-vector and and k(x) is an N × N covariance matrix"
"The addition of $\sigma^2$ on the diagonal is import to"
Rick Farouni asked on stan-dev how to construct a Cholesky factor with a diagonal of ones as a "hierarchical" structural constraint on the loadings matrix used in Bayesian factor analysis in:
Michael explained on the list that you can do this directly in the transformed parameters block by taking the unconstrained below-diagonal elements as the parameters and then filling in the transformed parameter. He further mentioned that if you want a prior on the covariance matrix, you'll need to add the Jacobian adjustment manually.
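A sketch of that construction for a K x K lower-triangular factor with ones fixed on the diagonal (the variable names here are mine, not from the thread):

```stan
parameters {
  vector[K * (K - 1) / 2] L_lower;  // unconstrained below-diagonal elements
}
transformed parameters {
  matrix[K, K] L;  // Cholesky-like factor with unit diagonal
  {
    int idx;
    idx <- 1;
    for (m in 1:K) {
      for (n in 1:K) {
        if (m == n)
          L[m, n] <- 1;            // diagonal fixed at one
        else if (m > n) {
          L[m, n] <- L_lower[idx]; // fill below-diagonal from parameters
          idx <- idx + 1;
        } else
          L[m, n] <- 0;            // zero above the diagonal
      }
    }
  }
}
```

As noted above, a prior placed on the resulting covariance matrix (rather than on L_lower directly) would require a manual Jacobian adjustment in the model block.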
For positive_ordered and ordered vectors: I believe, but need to verify, that a simple i.i.d. normal prior on an ordered vector produces the same result as putting the same normal prior on an unconstrained vector and then sorting.
In general, we could also put a prior on the first element and then some positive-support prior on the differences.
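That alternative parameterization might look like the following sketch (variable names and the particular priors are illustrative only):

```stan
parameters {
  real c1;                         // first element, unconstrained
  vector<lower=0>[K - 1] delta;    // positive gaps between consecutive elements
}
transformed parameters {
  vector[K] c;                     // resulting ordered vector
  c[1] <- c1;
  for (k in 2:K)
    c[k] <- c[k - 1] + delta[k - 1];
}
model {
  c1 ~ normal(0, 5);               // example prior on the first element
  delta ~ exponential(1);          // example positive-support prior on the gaps
}
```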
I responded to a stan-users group query from Sam Weiss:
There are two ways to do this. Both involve adding new data variables for the number of items to predict/forecast and any inputs necessary to do that prediction. Then you can either:
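One common version of the first pattern predicts in the generated quantities block; here is a sketch for a simple linear regression (all variable names are illustrative):

```stan
data {
  int<lower=0> N;          // observed items
  vector[N] x;
  vector[N] y;
  int<lower=0> N_new;      // number of items to predict/forecast
  vector[N_new] x_new;     // inputs needed for the predictions
}
parameters {
  real alpha;
  real beta;
  real<lower=0> sigma;
}
model {
  y ~ normal(alpha + beta * x, sigma);
}
generated quantities {
  vector[N_new] y_new;     // posterior predictive draws
  for (n in 1:N_new)
    y_new[n] <- normal_rng(alpha + beta * x_new[n], sigma);
}
```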
Thanks for dropping into the right issue. More below.
On Jul 2, 2014, at 4:56 PM, jonathan-g notifications@github.com wrote:
I just noticed a minor layout flaw on p. 119 (Fig. 16.1): the text of the figure caption overlaps the x-axis labels of the figures (\lambda_1, \lambda_1 and \mu), making the axis labels hard to read.
I'll fix that.
Also, it would be useful to add a cross-reference to section 11.2 suggesting that the reader consult section 16.1 for further information about label switching and multimodality.
Will do.
Finally, the transition from section 11.2 to 11.3 is confusing. Section 11.2 says that it's "pretty much impossible to perform full Bayesian analysis for clustering models" and recommends using EM or variational approaches instead, but then section 11.3 jumps right into Naive Bayes without saying anything about why this doesn't contradict 11.2. More knowledgeable people may get this, but I found it very confusing.
I'll try to break out the discussion of classification (no issue with label switching) from the clustering (all the usual issues).
I tried to push them together initially to show that it's the "same model" in the BUGS sense.
Just a sentence to say, "Now we're changing gears from clustering to classification..." would probably suffice.
Avi Feller reported an error in the current formulation. There's a new function that's an even better fix.
Sigma_beta <- quad_form_diag(Sigma, tau);

quad_form_diag(Sigma, tau) =def= diag_matrix(tau)' * Sigma * diag_matrix(tau)
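In context, this is the usual scale-times-correlation construction of a covariance matrix; a sketch (names illustrative, with tau as a vector of scales and Sigma a correlation matrix):

```stan
transformed parameters {
  matrix[K, K] Sigma_beta;
  // equivalent to diag_matrix(tau) * Sigma * diag_matrix(tau),
  // but more efficient than forming the diagonal matrices explicitly
  Sigma_beta <- quad_form_diag(Sigma, tau);
}
```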
From Herra Huu on stan-users: The manual p. 202: The base variable types are integer, real, vector, row_vector, and matrix.
To help you start, I fixed the typos I pointed out above and some others. They are in branch feature/issue-668-next-manual.
That reminded me
This is the topic for comments about changes to the next version of the manual.