To scrape PyPI download counts, we use https://api.pepy.tech/. However, that API now requires an API key for all requests, which renders our scraper unusable (and generates a lot of errors). Our options:

- Pay for the API. However, the website gives very little information about who runs it, so we'd have no idea who we're doing business with.
- Drop the scraper.
- Query the public Google BigQuery dataset for the download counts. However, only 1 TB of queries is free each month, and a single query (i.e. for a single PyPI package) already scans several gigabytes, so we'd have to severely limit how much we scrape each month.
Would downloading the dataset be an option? If so, we could analyze it ourselves. Ideally we'd do this on free infrastructure, e.g. DAS6.
See also https://packaging.python.org/en/latest/guides/analyzing-pypi-package-downloads/
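For the BigQuery option, a minimal sketch of what the scraper could do, assuming the `google-cloud-bigquery` client library and configured GCP credentials. The hard cap on bytes billed is the key part for the free-tier concern: the job fails instead of silently burning through the 1 TB/month quota. The package name, 30-day window, and 10 GB cap are illustrative, not requirements.

```python
# Count recent downloads of one PyPI package from the public
# `bigquery-public-data.pypi.file_downloads` dataset. Parameterized
# to avoid SQL injection; the 30-day window is illustrative.
DOWNLOADS_SQL = """
SELECT COUNT(*) AS downloads
FROM `bigquery-public-data.pypi.file_downloads`
WHERE file.project = @package
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
"""


def count_downloads(package: str, max_gb_billed: int = 10) -> int:
    # Imported here so the module loads even without the library installed.
    from google.cloud import bigquery

    client = bigquery.Client()
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("package", "STRING", package),
        ],
        # Abort the job rather than bill more than the cap (bytes).
        maximum_bytes_billed=max_gb_billed * 10**9,
    )
    rows = client.query(DOWNLOADS_SQL, job_config=job_config).result()
    return next(iter(rows)).downloads
```

Even with the cap, each call scans the whole table partition range it touches, so batching packages into one query (e.g. `file.project IN UNNEST(@packages)` with a `GROUP BY`) would stretch the free tier much further than one query per package.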