What do we have?
A dataset with normalized mentions, and a way to count mentions.
The issue
We need to count the mentions of software across the whole dataset, as agreed in #2, then cut the list down to the top n. If the goal is to find, say, the 10 most popular packages whose source code is publicly available (which is what makes the dataset actually useful), we need a large enough seed sample, e.g., 60 packages, whose public availability we can then check.
What do we really need?
[ ] A decision on how many packages we actually want to enrich with more useful metadata (i.e., the "final" dataset)
[ ] A decision on how large the seed sample should be (i.e., how many packages can each of us check within the constraints of the hack day)
[ ] The actual list
How can we achieve this?
Make the decisions, then do the counting :).
Potentially in the same Jupyter Notebook if we decide to use one for #3?
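The counting step could be sketched like this (a minimal sketch; the input list and the seed sample size are placeholders, since the real dataset format and n are still to be decided):

```python
from collections import Counter

# Hypothetical input: the normalized mentions, one software name per record.
mentions = ["numpy", "scipy", "numpy", "matplotlib", "numpy", "scipy"]

SEED_SAMPLE_SIZE = 2  # e.g. 60 in the real run, per the discussion above

# Count mentions across the whole dataset, then cut to the top n.
counts = Counter(mentions)
seed_sample = counts.most_common(SEED_SAMPLE_SIZE)
print(seed_sample)  # [('numpy', 3), ('scipy', 2)]
```

The seed sample is the list we would then manually check for public source-code availability.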