scientific-python / summit-2023

Work summit 2023

Package metrics & stats #17

Open jpivarski opened 1 year ago

jpivarski commented 1 year ago

From the Feb 27 meeting: "How do we collect metrics and package stats?"

pllim commented 1 year ago

Is this related to #12 or something different?

jpivarski commented 1 year ago

Different, I think (unless DevStats would be nested within this one). I'm putting up new topics that were brought up in the meeting 2 hours ago. I haven't copied over all the points mentioned and people interested, yet.

lagru commented 1 year ago

I would like to point out this blog post: Measuring API usage for popular numerical and scientific libraries. Perhaps the results could be updated or even improved during the summit.

Thanks to @jni for pointing it out to me. :pray:

(Edit: Fixed the link :sweat_smile:)

jjerphan commented 1 year ago

Thanks for pointing this out, @lagru. Did you want to share this link instead? :slightly_smiling_face:

jpivarski commented 1 year ago

Wow! This is exactly what I'm working on for a physics conference, and I was planning on following up on these techniques at the Scientific Python Summit. I just didn't know that Christopher Ostrouchov has already done it, talked about it at SciPy 2019, and provided a tool.

Christopher has already addressed this problem:

import numpy

def foobar(array):
    return array.transpose()

a = numpy.array(...)

a.transpose()  # direct method call on a NumPy array
foobar(a)      # the same API reached indirectly through a helper function

and I'll look at his code to see how he did it or use that code directly.
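The kind of static analysis such a tool performs can be sketched with Python's own `ast` module. This is a toy version, not Christopher's actual code, and it only catches direct `module.attr` references (the easy case), not calls reached through helpers like `foobar`:

```python
import ast
from collections import Counter

def count_api_calls(source, module_names=("numpy", "np")):
    """Count attribute accesses on the given module aliases in Python source."""
    counts = Counter()
    for node in ast.walk(ast.parse(source)):
        # matches e.g. `np.array(...)` or `numpy.transpose(...)`
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id in module_names):
            counts[node.attr] += 1
    return counts

example = """
import numpy as np
a = np.array([1, 2, 3])
b = np.transpose(a)
c = np.array([4, 5])
"""
print(count_api_calls(example))  # Counter({'array': 2, 'transpose': 1})
```

Resolving the indirect case requires type inference, which is the hard part the tool and blog post address.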

On the tool's GitHub page, he notes

> NOTE: this dataset is currently extremely biased as we are parsing the top 4,000 repositories for few scientific libraries in data/whitelist. This is not a representative sample of the python ecosystem nor the entire scientific python ecosystem. Further work is needed to make this dataset less biased.

In my case, I've been asking these questions about a specific sub-community, nuclear and high-energy physicists, and I have a trick for that (PDF page 29 of this talk): one major experiment, CMS, requires its users to fork a particular GitHub repo. From that, I can get a set of GitHub users who are all CMS physicists, and (where I wave my hands) I assume that the CMS experiment is representative of the whole field. This is 2847 GitHub users (CMS members over a 10 year timespan) and 22961 non-fork repositories.
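Collecting that user set can be mechanized with GitHub's REST API (`GET /repos/{owner}/{repo}/forks`). A minimal standard-library sketch, with the owner/repo as placeholders since the specific CMS repo isn't named here, and the parsing kept separate from the (network-requiring) fetching:

```python
import json
from urllib.request import urlopen, Request

def users_from_forks(fork_pages):
    """Collect the set of GitHub logins owning forks, given pages of JSON
    from GitHub's `GET /repos/{owner}/{repo}/forks` endpoint."""
    return {fork["owner"]["login"] for page in fork_pages for fork in page}

def fetch_fork_pages(owner, repo, token=None):
    """Yield successive pages of the forks listing (requires network access)."""
    page = 1
    while True:
        url = (f"https://api.github.com/repos/{owner}/{repo}/forks"
               f"?per_page=100&page={page}")
        headers = {"Accept": "application/vnd.github+json"}
        if token:
            headers["Authorization"] = f"Bearer {token}"
        with urlopen(Request(url, headers=headers)) as resp:
            batch = json.load(resp)
        if not batch:
            return
        yield batch
        page += 1

# Stubbed page (no network) showing the shape of the data:
sample_page = [{"owner": {"login": "alice"}}, {"owner": {"login": "bob"}}]
print(users_from_forks([sample_page]))  # -> {'alice', 'bob'} (a set, unordered)
```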

I also have another technique I've been trying out: using the GitHub archive in BigQuery to find a set of GitHub users who have ever commented on the ROOT project, which occupies a central place in our ecosystem. Then I would look up their non-fork repos in the same way.
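That lookup could start from a query builder like the one below. This is a sketch assuming the GH Archive public BigQuery dataset's schema (`type`, `repo.name`, `actor.login` in the `githubarchive.year.*` tables) and taking `root-project/root` as the ROOT repository; the resulting SQL would be run with any BigQuery client:

```python
def commenter_query(repo, year):
    """SQL for distinct GitHub users who commented on `repo` during `year`,
    against the GH Archive public dataset's yearly tables."""
    return (
        "SELECT DISTINCT actor.login\n"
        f"FROM `githubarchive.year.{year}`\n"
        "WHERE type = 'IssueCommentEvent'\n"
        f"  AND repo.name = '{repo}'"
    )

sql = commenter_query("root-project/root", 2022)
print(sql)
```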

But Christopher has solved a lot of the other issues, and I'm going to use as much of his work, with credit, as I can. Thanks for the pointer!

pllim commented 1 year ago

Re: https://github.com/scientific-python/summit-2023/issues/17#issuecomment-1500248563

@lagru , seeing updated stats from https://labs.quansight.org/blog/python-library-function-usage (thanks for the correct link, @jjerphan), and even comparing different years, would be nice. 😸

I wonder if there are any big changes caused by, say, the pandemic. 💭

jpivarski commented 1 year ago

Absolutely. Look at this:

[Google Trends chart: "data analysis" and "machine learning" search interest for Java, R, and Python]

It's a Google Trends search that I use to see how "data analysis" is associated with Java, R, and Python (Python overtook R's dominance) and "machine learning" (Python has always been dominant in the modern ML era). I've been making this plot for several years, starting before the pandemic, and look at that gap!

Interestingly, the pandemic affected Google searches for Python much more than R. My hypothesis for this is that Python has a higher industry/academic ratio than R, and that industry data analysis jobs were more affected by the pandemic than academic. I don't have anything quantitative backing up that interpretation.

jpivarski commented 1 year ago

Oh, but you were asking about it in the context of python-library-function-usage, not just any metric.

I'd be a little surprised if the pandemic changed how people use APIs. It would surely change absolute rates, such as the Google searches, but given that someone is using e.g. NumPy, their fraction of np.array versus np.matrix calls wouldn't change much, would it?

For my part, I usually do plots in a time domain. One of the specific questions I'll be asking about ROOT/physics usage is how often people use TLorentzVector (deprecated in 2005, but still widely used) versus PxPyPzEVector (and its other replacements). That will definitely be a time-based plot. I'd want to see if there's any trend away from the legacy class. If there isn't, I think it would be a lesson that deprecation without consequences (never actually removing it) doesn't change user behavior.
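That time-based comparison boils down to a per-year fraction of legacy versus replacement usage. A sketch with made-up counts (the class names are real, the numbers are purely illustrative):

```python
# Synthetic yearly call counts -- illustrative numbers, not real measurements
counts = {
    2015: {"TLorentzVector": 900, "PxPyPzEVector": 20},
    2018: {"TLorentzVector": 850, "PxPyPzEVector": 120},
    2021: {"TLorentzVector": 800, "PxPyPzEVector": 310},
}

def legacy_fraction(by_year, legacy="TLorentzVector"):
    """Fraction of calls per year still using the deprecated class."""
    return {
        year: c[legacy] / sum(c.values())
        for year, c in sorted(by_year.items())
    }

for year, frac in legacy_fraction(counts).items():
    print(year, f"{frac:.2f}")  # 2015 0.98 / 2018 0.88 / 2021 0.72
```

A flat curve here would be the "deprecation without consequences doesn't change user behavior" lesson.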

betatim commented 1 year ago

(commenting because I can't assign myself to this issue)

lwasser commented 1 year ago

i am super interested in this. i started a small module (that needs a lot of work) to parse our packages and get some basic stats via the github api. we of course have a very specific use case with reviews and such. but if there were some tooling around getting other types of stats and storing them, i might pull that into our workflow rather than continue to develop that component myself! i was thinking it would be super cool to have a page of stats for each package in our ecosystem. think snyk stats but w a bit more depth potentially?

here is a quick snapshot of what i'm bringing down (statically) ... no time series right now (which would be super cool).

tupui commented 1 year ago

You might like this https://github.com/nschloe/github-trends

tacaswell commented 1 year ago

https://www.coiled.io/blog/how-popular-is-matplotlib seems on-topic for this as well. We are waiting to find out if we got an SDG to extend this work.

While it is looking at mpl specifically, I think it is a good proxy for general adoption.

lwasser commented 1 year ago

> https://github.com/nschloe/github-trends

wow that is a really great repo @tupui !! also look at the mpl growth over time @tacaswell ! we'd much rather adopt something that others are using vs build something ourselves. super excited for this discussion in may!

Carreau commented 1 year ago

> https://www.coiled.io/blog/how-popular-is-matplotlib seems on-topic for this as well. We are waiting to find out if we got an SDG to extend this work.

FYI, napari, I believe, now also includes a watermark in the images it generates.

stefanv commented 1 year ago

> i am super interested in this. i started a small module (that needs a lot of work) to parse our packages and get some basic stats via the github api. […]

@lwasser I was wondering how this tool differs from the data gathering done in the devstats. Is this something we can combine efforts on?

lwasser commented 1 year ago

@stefanv i'd LOVE to combine efforts. i can show you what we have. some of what i'm parsing are github issues to get package names, reviews, etc. but other stuff i'm parsing to get stars and other metrics that i bet you are parsing for as well. What can i create to make this potential collab more efficient? we got some people working on this for us during our last sprints as well! but at the end of the day it's really just me working on this by myself to support tracking reviews, packages etc...

stefanv commented 1 year ago

> @stefanv i'd LOVE to combine efforts. i can show you what we have. […]

Great! There's a bit of machinery around GraphQL paging that is service-specific (crazy, but so it is); so perhaps we can aggregate that into a "package" (submodule), and then just feed the package with the queries we want, built from the GitHub GraphQL explorer. Later, we can add bells & whistles like caching, exporting in different formats, etc.
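For context, that paging machinery is mostly the Relay-style cursor dance. A generic sketch, assuming GitHub's usual `pageInfo { hasNextPage endCursor }` connection shape, with the actual network call abstracted behind a callable:

```python
def paginate(run_query, query, path=("repository", "issues")):
    """Walk GitHub GraphQL cursor pagination: `run_query(query, cursor)` must
    return the parsed `data` dict, and the connection at `path` must expose
    `nodes` and `pageInfo {hasNextPage, endCursor}`."""
    cursor = None
    while True:
        data = run_query(query, cursor)
        conn = data
        for key in path:
            conn = conn[key]
        yield from conn["nodes"]
        info = conn["pageInfo"]
        if not info["hasNextPage"]:
            return
        cursor = info["endCursor"]

# Stubbed two-page response (no network) to show the shape:
pages = {
    None: {"repository": {"issues": {"nodes": [1, 2],
           "pageInfo": {"hasNextPage": True, "endCursor": "c1"}}}},
    "c1": {"repository": {"issues": {"nodes": [3],
           "pageInfo": {"hasNextPage": False, "endCursor": None}}}},
}
print(list(paginate(lambda q, c: pages[c], "query { ... }")))  # [1, 2, 3]
```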

lwasser commented 1 year ago

cool. i'll spend a bit more time documenting what ours does and what we need. i need to do that anyway, as i should have created a design from the start 😆 and i didn't. i just started writing stuff that did what i needed 🙃

We output to YAML right now but have no long-term storage, which i'd need to look at trends over time.

i've just been making REST API calls and have hit rate limits, but that may have been fixed in our last sprint. i'm happy to wrap around / use devstats as it makes sense and contribute effort there.
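On rate limits: GitHub's REST responses advertise the remaining budget in the `X-RateLimit-Remaining` and `X-RateLimit-Reset` headers, so a client can back off instead of failing. A small sketch:

```python
import time

def seconds_until_reset(headers, now=None):
    """How long to sleep before retrying, given GitHub rate-limit headers.
    `X-RateLimit-Reset` is a Unix timestamp; returns 0 if budget remains."""
    now = time.time() if now is None else now
    if int(headers.get("X-RateLimit-Remaining", 1)) > 0:
        return 0.0
    return max(0.0, float(headers["X-RateLimit-Reset"]) - now)

print(seconds_until_reset({"X-RateLimit-Remaining": "0",
                           "X-RateLimit-Reset": "1000"}, now=940))  # 60.0
```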

juanis2112 commented 1 year ago

Hackmd for the summit: https://hackmd.io/UNwG2BjJSxOUJ0M1iWI-nQ