R2 writes:

A similar problem appears in the Measure section of the methodology, right from the start. The text describing the measurements must also tell us what is being measured. Instead, we get “The measures presented in Figs. 2A-C are averaged over all beliefs” with no description of what we should look for in those figures. While the figures do include some of the details that are not found in the text, every measure should be properly defined in the text, including, as applicable, a possibly very brief but needed explanation of how those measures are calculated.
Now, if they'd read past the first sentence, they'd have seen that the rest of the section gives a clear description of what was measured. Still, it can be clearer. Calling out the measures by figure panel:
In \textbf{Fig. \ref{fig:sim1:A}}, the adopter measure is the number of individuals with each belief in their knowledge graph, divided by the total population and averaged over all beliefs. Similarly, the susceptible-population measure (and all discussion of the susceptible population) is the fraction of individuals who would adopt each belief if exposed to it under the appropriate decision rule for independent vs. interdependent diffusion, plus the fraction that has already adopted the belief.
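For concreteness, a minimal Python sketch of how these two fractions could be computed is below; the data structures (each individual's knowledge graph as a set of beliefs) and the \texttt{would\_adopt} decision-rule callback are illustrative assumptions, not the simulation code itself.

\begin{verbatim}
# Illustrative sketch of the adopter and susceptible measures described above
# (names and data structures assumed, not taken from the simulation code).
def adopter_fraction(individuals, beliefs):
    """Per-belief fraction of the population holding the belief, averaged
    over all beliefs."""
    n = len(individuals)
    per_belief = [sum(belief in person for person in individuals) / n
                  for belief in beliefs]
    return sum(per_belief) / len(per_belief)

def susceptible_fraction(individuals, beliefs, would_adopt):
    """Fraction already holding each belief plus the fraction that would
    adopt it if exposed (independent vs. interdependent rule), averaged."""
    n = len(individuals)
    per_belief = [sum((belief in person) or would_adopt(person, belief)
                      for person in individuals) / n
                  for belief in beliefs]
    return sum(per_belief) / len(per_belief)
\end{verbatim}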
\textbf{Fig. \ref{fig:sim1:B}} shows the Pearson correlation between the number of people who have adopted each belief at time $t$ and the number who were initially susceptible to the belief at $t=0$ but did not start with it. As this has no meaningful value at $t=0$, the curve is drawn from $t=1$ to $t=9$.
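A hedged sketch of this correlation, assuming per-belief count arrays (variable and function names are illustrative):

\begin{verbatim}
# Pearson correlation between current adopter counts and the counts of
# individuals who were susceptible to, but did not hold, each belief at t = 0.
import numpy as np

def adoption_vs_initial_susceptibility(adopters_t, susceptible_t0, holders_t0):
    """Each argument holds one value per belief."""
    initially_open = np.asarray(susceptible_t0) - np.asarray(holders_t0)
    return np.corrcoef(np.asarray(adopters_t), initially_open)[0, 1]

# The plotted curve evaluates this at t = 1, ..., 9; t = 0 is omitted because
# the correlation is not meaningful there.
\end{verbatim}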
\textbf{Fig. \ref{fig:sim1:C}} assesses the correlation between the number of individuals who have adopted a belief (a knowledge graph edge) and the number who have adopted the most popular belief it shares a `node' with, averaged over all beliefs.
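The following sketch shows one way this could be computed, assuming beliefs are represented as edges $(u, v)$ over shared concept nodes and \texttt{adopters} maps each edge to its adopter count (both are assumptions for illustration):

\begin{verbatim}
# For each belief (edge), pair its adopter count with the adopter count of the
# most popular belief sharing a concept node with it, then correlate the pairs.
import numpy as np

def neighbour_popularity_correlation(adopters):
    beliefs = list(adopters)
    xs, ys = [], []
    for (u, v) in beliefs:
        neighbours = [e for e in beliefs if e != (u, v) and (u in e or v in e)]
        if not neighbours:
            continue  # belief shares no node with any other belief
        xs.append(adopters[(u, v)])
        ys.append(max(adopters[e] for e in neighbours))
    return np.corrcoef(xs, ys)[0, 1]
\end{verbatim}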
\textbf{Fig. \ref{fig:sim1:D}} uses the clustering coefficient of a knowledge graph constructed from the most popular 10\% of beliefs to demonstrate that the most popular beliefs are mutually interrelated, and not merely all related to a single leading belief (e.g. a star or barbell pattern). Clustering only makes sense when beliefs are conceptualized as a knowledge graph; other conceptualizations of belief interaction might instead plot the number of top-decile beliefs that each top-decile belief interacts with. That alternative gives essentially the same result (i.e. large fractional growth over time in the interdependent case, with no change from randomness in the independent case) but fails to capture the mutual interrelatedness indicated by the clustering coefficient. The measure is largely insensitive to the specific threshold used to define a `popular' belief, for thresholds between roughly 5\% and 40\%. See the supplement for the sensitivity analysis.
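A minimal sketch of this measure using \texttt{networkx} (the library choice and the \texttt{adopters} representation are illustrative assumptions); the \texttt{quantile} argument corresponds to the popularity threshold examined in the sensitivity analysis:

\begin{verbatim}
# Clustering coefficient of the graph formed by the most popular beliefs.
import networkx as nx

def top_belief_clustering(adopters, quantile=0.10):
    """adopters: {(u, v): adopter count}.  Higher average clustering indicates
    the popular beliefs are mutually interrelated rather than star-like."""
    ranked = sorted(adopters, key=adopters.get, reverse=True)
    top = ranked[:max(1, int(len(ranked) * quantile))]
    g = nx.Graph()
    g.add_edges_from(top)
    return nx.average_clustering(g)
\end{verbatim}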