At first it looks like PCA is performed as the multidimensional-scaling step that represents the topics in 2 dimensions - at least that is what the axis labels suggest.
But when one looks closer at the default `jsPCA` function, it turns out the reduction is performed on a dissimilarity matrix (not on the original dataset), and moreover the dimension reduction is done by the `cmdscale` function (used inside `jsPCA`), not by `prcomp`.
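For reference, as far as I can tell the body of the default `jsPCA` boils down to something like this (paraphrased sketch, not the verbatim source):

```r
# Sketch of what the default jsPCA appears to do (paraphrased):
jsPCA_sketch <- function(phi) {
  # symmetric Jensen-Shannon divergence between two topic-word distributions
  jensenShannon <- function(x, y) {
    m <- 0.5 * (x + y)
    0.5 * sum(x * log(x / m)) + 0.5 * sum(y * log(y / m))
  }
  # K x K dissimilarity matrix between topics -- not the original data matrix
  dist.mat <- proxy::dist(x = phi, method = jensenShannon)
  # classical MDS (principal coordinates analysis), not prcomp
  fit <- stats::cmdscale(dist.mat, k = 2)
  data.frame(x = fit[, 1], y = fit[, 2])
}
```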
How is this relevant to the PCA mentioned in the axis labels?
Shouldn't the labels read `MDS1` and `MDS2` instead of `PCA1` and `PCA2`?
If `cmdscale` performs operations that are at least semi-similar to a PCA, would it be possible to calculate an eigenvalue for every component and show the percentage of explained variance on the axes next to the `PCAx` text?
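Something along these lines is what I have in mind (a sketch only, reusing the hypothetical `dist.mat` from the snippet above; `cmdscale` can return its eigenvalues when `eig = TRUE`):

```r
# Sketch: classical MDS with eigenvalues, so the axes could be annotated
# with the share of "explained variance" per coordinate.
fit <- stats::cmdscale(dist.mat, k = 2, eig = TRUE)

# With a non-Euclidean dissimilarity some eigenvalues can be negative;
# one common convention is to report shares relative to the positive ones only.
explained <- fit$eig[1:2] / sum(fit$eig[fit$eig > 0])

xlab <- sprintf("PC1 (%.1f%%)", 100 * explained[1])
ylab <- sprintf("PC2 (%.1f%%)", 100 * explained[2])
```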